Abstract: This article deals with enterprise interoperability and proposes lines of research on information system design in a collaborative context. The static and dynamic dimensions of collaboration are exposed and discussed. The proposed IS design approach is based on process model translation. The article also presents several perspectives, especially concerning IS flexibility and process model reliability.

### 1 Introduction

Nowadays, companies are opening up to their partners (through instituted, regular or sporadic relations), and this is an inescapable characteristic of market evolution. This need for networking rests on several factors: competition, improved exchanges (of information as well as of goods), the increasing complexity of products, and so on. The capacity of enterprises to collaborate with each other (and, moreover, to do so efficiently) thereby becomes a determining factor for their evolution and their ability to survive. This necessary ability to evolve and react to their environment rests on companies' capacity to interact efficiently with the industrial ecosystem in which they live.

The notion of collaboration is a vast concept that can be applied to numerous situations: groups based on the complementary nature of partners (increased coverage), groups of enterprises focused on the same service (increased power), groupings of providers around a single decision-maker (optimization and safety), common buying platforms (improved negotiation capacity) and other original networks, permanent or temporary (due to specific constraints of the milieu).

The question of enterprise collaboration, and of the ability of enterprises to interact efficiently across collaborative networks or industrial ecosystems, naturally concerns the field of information systems. Indeed, the behavior, reactivity and all the dynamic aspects of an enterprise depend heavily on its information system and on the processes, services and data it manages. This article deals with the topic of collaboration (definitions, levels, etc.) and proposes several main ideas related to information system design in a collaborative context. Section 2 deals with the concept of collaboration and how this notion impacts information systems. Section 3 deals with collaborative information system design according to the remarks of the previous section. Finally, Section 4 presents several works in progress and future works related to these topics.

### 2 Collaboration and Information System

Collaboration is a broad concept and it is necessary to position it with respect to numerous concepts, classifications and definitions. In this article we deliberately chose not to discuss terms and words (in order to avoid a lexical debate on "collaboration", "cooperation", "communication" and so on) but to focus on conceptual collaboration levels. We then aim to refine this study from the general concept of collaboration down to the specific case of information system collaboration.

#### 2.1 Enterprise Collaboration

On the basis of existing research work about enterprise collaboration, we build a synthetic characterization of collaboration. Several levels of collaboration are described according to two dimensions (static and dynamic), which will be used to structure this article. First of all, [Ko05a] builds a synthesis of the IEC TC65/290/DC standard (cf. [I02]). This standard deals with enterprise compatibility measurement.
We extract from this work the following levels of compatibility (native denominations, given in brackets, have been converted into compatibility levels):

- **Level 1 (Coexistent):** may exist independently in a single network,
- **Level 2 (Interconnectable):** may share or exchange information,
- **Level 3 (Interworkable):** may share functionalities or services,
- **Level 4 (Interoperable):** may work according to a predefined collective behavior.

This classification scale is shown in figure 1.

This study highlights two consequences. The first one can be directly deduced: compatibility levels induce collaboration levels. The second consequence is indirect and brings out the temporal aspects of collaboration: in order to collaborate, enterprises have to collectively build the path that will bring them from heterogeneity to the right level of understanding. This is a dynamic act, which brings partners into instituted, regular or sporadic relations.

[LPT03] proposes a study of the levels of understanding between enterprises (based on the concept of "common objective"). That research work, based on different results from the supply chain management field, provides the following understanding levels (native denominations, given in brackets, have been converted into understanding levels):

- **Level 1 (communication):** sporadic data exchange,
- **Level 2 (coordination):** structured and instituted data sharing,
- **Level 3 (collaboration):** sporadic data and application exchange,
- **Level 4 (cooperation):** structured and instituted data and application sharing.

Our point of view is to use the integration concepts of data and applications and to add a third concept: processes. In continuity with [LPT03], we propose to complete this classification with two complementary understanding levels based on integration of data, applications and processes (figure 2). The two added levels represent higher degrees of understanding, dealing with process sharing or synchronization (in the context of a common objective). This proposal to enrich the [LPT03] results is coherent with [Ko05b] (as it takes into account the "established collective behavior") and with [CD03] (which claims that interoperability is achieved only if it concerns all three levels: data, resources and processes). The fifth level can be illustrated by a group answering a call for proposals and collectively carrying out the resulting project. The sixth level can be illustrated by a gathering of partners which synchronize and harmonize their behavior in order to build a co-enterprise able to react as one in certain contexts (this stage is the last step before merging into one single entity).

Such research works, among which the preceding studies, allow us to identify the two considerations on which the structure of this article is based: the collaboration level of a group of partners can be characterized by its two main components, the static and dynamic views. Our proposed characterization of collaboration levels is illustrated in figure 3.

Figure 3: collaboration characterization proposal

This typology will be used throughout this article to study the design of an appropriate information system in a collaborative context.
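As a purely illustrative aside (ours, not part of the cited classification work), the two-dimensional characterization of figure 3 can be sketched as a small data model; the level names and relation types below simply restate the taxonomy of this section.

```python
from dataclasses import dataclass
from enum import IntEnum, Enum

class UnderstandingLevel(IntEnum):
    """Static dimension: the six understanding levels built on [LPT03]."""
    COMMUNICATION = 1            # sporadic data exchange
    COORDINATION = 2             # structured and instituted data sharing
    COLLABORATION = 3            # sporadic data and application exchange
    COOPERATION = 4              # structured data and application sharing
    PROCESS_SHARING = 5          # shared processes (e.g., joint call for proposals)
    PROCESS_SYNCHRONIZATION = 6  # synchronized behavior, co-enterprise

class RelationType(Enum):
    """Dynamic dimension: temporal nature of the partnership."""
    SPORADIC = "sporadic"
    REGULAR = "regular"
    INSTITUTED = "instituted"

@dataclass
class CollaborationProfile:
    """Locates one partnership in the two-dimensional space of figure 3."""
    static_level: UnderstandingLevel
    dynamic_relation: RelationType

# Example: a group answering a call for proposals, with regular relations.
profile = CollaborationProfile(UnderstandingLevel.PROCESS_SHARING,
                               RelationType.REGULAR)
```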
#### 2.2 Enterprise collaboration and information system

To confront the previous considerations with the problem of information system design in a collaborative context, we study the abstract and conceptual views of information systems (IS).

- Reix, in [Re02], defines the concept of information system as an organized set of resources (hardware, software, people, data, processes) able to acquire, process, store and export information (as data, text, pictures, sounds, etc.) in organizations.
- Bernus, in [BS98], considers that an information system must ensure that the right information is available at the right place at the right time. The notions of right place and right time refer to a process management system which synchronizes the processing and the carrying of information inside the IS.
- Morley, in [Mo02], presents the IS as the composition of two subsystems:
  - the information management system (including actors, data and processes),
  - the computing system (including hardware and software resources, databases and functions).

These three points of view (Reix, Bernus and Morley) emphasize several invariants of the field of information system modeling. We can sum up these principles as follows: the information system supports the enterprise's processes by, firstly, managing the services and functions available in the company and, secondly, dealing with information (carrying and storing it). Actors and resources of the enterprise, although fully involved in the whole information system, are located outside our illustration (figure 4) because of the nature of our main objective: IS design.

Figure 4: a vision of the information system

This view of the information system (and particularly of its computable part) has the advantage of being coherent with the points exposed in paragraph 2.1 about collaboration levels and the static characterization criteria (data, applications and processes). Consequently, we will use this logical representation of the IS to study the consequences of collaboration on the IS of each partner:

- **Data conversion:** to be fluid, interactions between information systems need tools able to convert data efficiently (style, format). It is crucial that data can be transmitted effectively between partners of the collaboration. In other words, even if each partner should be able to preserve its privacy and to establish specific access rights to the informational assets it exposes, collaboration between enterprises implies clearly separating the semantic meaning of data from their syntactic form, for an optimal informational integration.
- **Management of applications:** generally, the services and applications of the various partners are not built to be compatible. Nevertheless, IS collaboration requires interactions between partners' applications to be as fluid as possible (even if they are provided by heterogeneous IS). It is crucial to be able to manage external accesses to those applications (and the associated rights). Technical solutions such as EAI (Enterprise Application Integration) or ESB (Enterprise Service Bus) provide concrete support for this kind of service and application interoperability.
- **Orchestration of processes:** processes can be seen as the "musical score" to be played and accompanied by the information system (and the workflow management system) by piloting the data management and the calls to services or applications.
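To make the orchestration idea concrete, here is a minimal, hypothetical sketch (not from the article): a collaborative process is reduced to an ordered list of steps, each naming a partner service and the data it consumes and produces, and the orchestrator plays the "score" by calling the services and carrying data between partners. All service names below are illustrative.

```python
# Minimal orchestration sketch: the process definition is the "musical score";
# the orchestrator plays it by calling partner services in order. Partner
# services are modeled as plain callables; a real CIS would reach them
# through connectors (cf. section 3.1).

def orchestrate(process_steps, services, context):
    """Run each step of a collaborative process in order.

    process_steps -- list of (service_name, input_keys, output_key) tuples
    services      -- dict mapping service names to callables
    context       -- dict carrying data from one partner to the next
    """
    for service_name, input_keys, output_key in process_steps:
        inputs = [context[k] for k in input_keys]               # carry data in
        context[output_key] = services[service_name](*inputs)   # call service
    return context

# Illustrative two-partner process: partner A quotes, partner B approves.
services = {
    "partnerA.quote": lambda req: {"price": 100, "request": req},
    "partnerB.approve": lambda quote: quote["price"] < 150,
}
steps = [
    ("partnerA.quote", ["request"], "quote"),
    ("partnerB.approve", ["quote"], "approved"),
]
result = orchestrate(steps, services, {"request": "100 units"})
print(result["approved"])  # True
```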
In a collaborative context, the running of collective processes (impacting all partners) must be transparent, yet it necessarily affects the running of internal processes (in the IS of the partners). That is why collaborative processes must include components coming from the private processes of partners (cf. [Ad05]). Finally, these private processes should be protected against malicious external reading, but they should also offer partial access: at least the definition of the applications they provide, the data they need and the information they send out. This is probably the price of a pertinent construction of collaborative processes. Besides, if collaboration between information systems may be formalized using those three concrete levels (data, applications and processes), there are also several secondary components to that point of view: interactions between the different levels (applications create, use and modify data; processes need applications and transmit data, etc.).

As for the characterization of the dynamic dimension of the collaboration between information systems, the projection of the temporal criteria onto the plane of IS modeling leads us to the following remarks:

- **Internal self-knowledge and conceptual compartmentalization:** the temporal discontinuity and fractioning of collaboration imply that potential partners master and know their own IS perfectly. Indeed, it is necessary to efficiently define the formats of exchanged data and the modes of access to and use of applications. The public and private parts of each partner should also be defined. Finally, the components of one partner's IS may have to be compartmentalized in order to deal with the possible involvement of an enterprise in several distinct collaboration networks, for different durations, at different moments (possibly overlapping); a small sketch of this idea follows at the end of this section.
- **Flexibility and safety of IS:** because of the previously mentioned hypothesis of an enterprise belonging to several distinct networks (versatility of partnership), information systems have to be extremely flexible. Answering new, unexpected and innovative collaboration requests is a strong component of this view of the collaboration concept. Likewise, integrating and managing evolutions and changes of a running collaboration is a fundamental and substantial issue. Finally, the variety of types of collaboration (instituted, regular or sporadic) implies an increasing need for information systems to be safe and secure while remaining open in this way (this is the price of being a trustworthy partner).
- **Robustness of processes:** the third level of collaboration (cf. §2.1) is characterized by the establishment of a collaborative behavior. This principle rests on one (or several) collaborative process(es), defining the dynamic part of the collaboration and certifying that partners share the same vision of the collective behavior of the network. The collaborative process (or a model of this process) is a key point of the collaboration. It must be trustworthy (without necessarily being stable, because of the flexibility constraints) and a point of reference for the partnership. Hence, it seems legitimate to think that the building of such a collaborative process should be complemented with activities of risk management and robustness improvement.

The aim is then to propose a solution including answers to these questions, definitions and requirements.
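The compartmentalization requirement above can be pictured with a small sketch (ours, not the authors'): one partner's IS declares, per collaboration network, which data, applications and processes are visible, so that belonging to several overlapping networks does not leak private components. Names and networks are purely illustrative.

```python
# Sketch of conceptual compartmentalization: one partner exposes a different
# public subset of its IS to each collaboration network it belongs to.

PARTNER_IS = {"data": {"catalog", "stock", "costs"},
              "applications": {"quote", "billing"},
              "processes": {"order_handling", "production_planning"}}

# Per-network visibility conventions (the "pre-established collective
# conventions" that connectors enforce, cf. section 3.1):
EXPOSURE = {
    "network_alpha": {"data": {"catalog"}, "applications": {"quote"},
                      "processes": {"order_handling"}},
    "network_beta": {"data": {"catalog", "stock"}, "applications": {"quote"},
                     "processes": set()},
}

def visible(network, kind, component):
    """True if COMPONENT of type KIND is public for NETWORK."""
    return component in EXPOSURE.get(network, {}).get(kind, set())

assert visible("network_beta", "data", "stock")
assert not visible("network_alpha", "data", "costs")  # private stays private
```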
### 3 Proposal about Collaborative IS design

The purpose of this third section is, first, to propose a logical structure able to support the partnership between enterprises (and their IS) while meeting the main requirements identified in the study of the collaboration concept (part 2). Secondly, this section aims at presenting a design method for such a collaborative structure (as a software system). This part essentially treats the static view of the collaboration (the dynamic one will be discussed in the fourth section).

#### 3.1 The concept of collaborative information system (CIS)

Dealing with *data conversion* between partners, *application management* in the network and *process orchestration* according to the global behavior of the collaboration reveals the necessity of:

- accessing several components and characteristics of the IS of the partners,
- having an independent and intermediate entity available (a trustworthy third party), located in the middle of the network. This mediator should manage the specificities of each partner but also the structural and functional conventions specific to the collaboration.

We call that entity, the support of interoperability, the *Collaborative Information System (CIS)*. This CIS is based on connectors, plugged into the partners' information systems and able to deal with their public and private parts. These connectors can access the public data, applications and processes of partners in order to provide the CIS with the information it needs (access rights and other specific features have to be managed at this level). Thanks to these modules (connectors), the CIS is able to carry out the collaboration by driving and controlling collaborative processes, by managing calls to partners' applications and by carrying and translating data from one partner to another (when it is necessary and legitimate according to the global processes). This concept of CIS refers to a logical architecture; the technical point of view is not addressed here and might differ from this conceptual view.

Concerning the public/private notions, visibility of and access to the processes, applications and data of partners' information systems come from the enforcement of pre-established collective conventions (connectors and the CIS enforce the conventional decisions taken by the community of partners involved in the network). According to this point of view, internal processes can be private, public or semi-private (if only part of a process is visible, for instance the inputs and outputs of a service), applications can be private, public or controlled (if access is restricted), and data can be private or public.

#### 3.2 Elements to define the Collaborative Information System

Based on the previous delimitation of the concept of CIS, seen as the cornerstone of collaboration, one can reasonably ask about the contents of such a system: which knowledge should be available to define this CIS? What concrete characteristics of the network and of the partners should be assembled in order to be in a position to design this Collaborative Information System?

Morley, in [Mo02], notes that while numerous authors have focused on the crucial role of the concept of information in the IS field, nowadays standard reference approaches are oriented toward the concept of process. [AD02] points out that an inter-organizational information system has the specific function of supporting processes that cross the organizations' boundaries.
Furthermore, Vernadat in [Ve99] defines a process as a *set of partially ordered steps, executed in order to achieve at least one objective*. Thus, from simple information management systems, information systems become systems in charge of driving informational activities (thanks in particular to workflow management tools). Following these ideas, we infer that defining and designing such a CIS can be based on modeling, as precisely as possible, the specific inter-organizational processes involved in a collaboration. Formalizing those collaborative processes (seen as knowledge suppliers) is therefore an important question, as is formalizing the CIS that supports the collaboration.

Modeling processes is a classical topic (cf. [BMM05]), and Touzi in [TBP05] proposes to use the BPMN language (Business Process Modeling Notation, from BPMI¹) to describe a collaboration's specific processes. Furthermore, Touzi suggests using UML (Unified Modeling Language, established by the OMG) to model the CIS.

Figure 6: main objective

One can summarize the general principle of our approach as follows: we aim at building a UML model (of the adequate collaborative information system) by using the global knowledge (about the collaboration concerned) contained in BPMN models. However, it is relevant and essential to ask whether such an equation is homogeneous:

- what surface of the knowledge space of the collaborative network does a BPMN diagram cover?
- what are the points of view covered by the UML model of an IS?

---
¹ Business Process Management Initiative, an international consortium working in the process field.

The CIMOSA method is dedicated to enterprise modeling. CIMOSA describes four points of view of the enterprise (seen as a system), which give elements to answer the first question: the functional view (scenarios and processes), the informational view (information and data), the organizational view (hierarchical structures and organizational charts) and the resources view (competences and availabilities). As process models, BPMN diagrams are mainly centered on the functional view. Nevertheless, the BPMN formalism is one of the languages identified in [Sa04] as able to completely describe a process and also to link it with other views of the enterprise: a BPMN diagram includes connections with information (by dealing with exchanges of messages) and with resources (by allowing parallelism and synchronization).

Concerning the second question, UML is a language dedicated to several complementary points of view (essentially of software systems, and thus significant for IS modeling). Considering [BJR04] and [Ro04], we propose to focus on four dimensions: the architectural dimension (structure of the physical components: component and deployment diagrams), the behavioral dimension (dynamic description: sequence, activity, collaboration and state-transition diagrams), the functional dimension (tree of the functions available in the system: use-case diagram) and the structural dimension (logical structure: class and object diagrams).

In our context, the problem of generating a UML model from BPMN diagrams may be illustrated by figure 7 (where arrows between points of view and dimensions mean "provides the knowledge for").

Figure 7: covered areas and relations between enterprise modeling and IS modeling

One can notice that a collaborative process model in BPMN allows the definition of the behavioral and functional views of the IS model (arrows A and B).
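As a purely illustrative sketch (the actual translation rules are those of [TBP06b] and are not reproduced here), one can picture such a translator as a table of mappings from BPMN constructs to UML target elements, applied element by element; the element kinds and targets below are assumptions for illustration.

```python
# Illustrative BPMN-to-UML mapping sketch; the real rules are defined in
# [TBP06b]. Element kinds and UML targets are assumed for illustration only.

BPMN_TO_UML = {
    "pool": "component",                 # a partner becomes a structural component
    "task": "operation",                 # an activity becomes a service operation
    "message_flow": "sequence_message",  # exchanges feed behavioral diagrams
    "gateway": "decision_node",          # routing logic feeds activity diagrams
}

def translate(bpmn_elements):
    """Map each (kind, name) BPMN element to a (uml_kind, name) pair,
    collecting the elements no rule covers (they need extra knowledge,
    cf. stage 3 of the method in section 3.3)."""
    uml_model, uncovered = [], []
    for kind, name in bpmn_elements:
        if kind in BPMN_TO_UML:
            uml_model.append((BPMN_TO_UML[kind], name))
        else:
            uncovered.append((kind, name))
    return uml_model, uncovered

model, todo = translate([("pool", "PartnerA"), ("task", "SendQuote"),
                         ("data_store", "SharedCatalog")])
# 'data_store' is left uncovered: structural/architectural knowledge must be
# added by hand or from enterprise models, as the text argues next.
```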
The structural dimension (links C, E and F) and the architectural view (link D), however, are only partially covered. It is clearly difficult to produce the model of an IS architecture (whether a logical one, based on the structural view, or a physical one, based on the architectural view) starting from a process model without providing additional knowledge (kind of architecture, nature of components). Furthermore, even if BPMN spills over into the informational, organizational and resources dimensions, it is not sufficient to cover the whole partnership and its attributes: it is necessary to complete that knowledge with information from models of the partners and of the collaboration itself.

#### 3.3 Design method proposed (compliant with MDA practice)

The previous observations showed the necessity of completing the information extracted from BPMN process models with additional knowledge describing the logical and technological structures of information systems. We propose the following approach in four stages:

- **Stage 1:** the translator extracts from the BPMN diagrams the knowledge describing the enterprise network considered.
- **Stage 2:** that knowledge is injected into the logical architecture specifically chosen for the CIS (depending on the conventions built into the translator).
- **Stage 3:** the obtained result is enriched with complementary knowledge from the enterprise modeling field (specific models of the partners and of the network itself). This stage implies human intervention. [BMM05] proposes to use the concept of agent for such an enrichment task.
- **Stage 4:** finally, the obtained logical model is projected onto the chosen technological architecture in order to provide an exploitable UML model.

These stages are formally identified here in order to clarify the method. In fact, they are not so clearly distinguished in a real implementation: stages 1, 2 and 3 actually form one single global phase dedicated to logical modeling (for instance, it might be an iterative cycle involving the three stages almost simultaneously).

This approach (cf. [TBP06a]) seems to be compliant with MDA (Model Driven Architecture) practice. Indeed, according to [O03], the following connections can be made:

- **Platform Independent Model (PIM):** this part covers stages 1, 2 and 3 described above, that is to say the building of BPMN models of collaborative processes and their translation into UML diagrams modeling the CIS,
- **Platform Model (PM):** this element describes the generic technological architecture chosen for the CIS (for instance based on a UML profile),
- **Platform Specific Model (PSM):** this part refers to the results of stage 4.

Figure 9: the MDA (Model Driven Architecture) approach and CIS design

The proposed CIS design method is finally coherent with the results presented in [EI05], which underline the need for enterprise integration by means of conceptual models (logical point of view) and then technical models (physical point of view), both derived from business analysis (collaborative processes are a good example of such an analysis).

### 4 Perspectives on collaborative IS design

This section presents some of our current research works and their connections with the previous considerations. The third part referred extensively to research work concerning IS design from the "frozen knowledge" encapsulated in collaborative process models (that is to say, the static part of the collaboration concept as presented in part 2). We now present some PhD research work in progress.
Such works are particularly located on the dynamic dimension of the concept of collaboration (according to figure 3 in the second section).

#### 4.1 Collaborative processes cartography

We are currently trying to define a system of reference for collaborative processes. This ambition comes from several statements made while studying the translation of BPMN models into UML models:

1. the BPMN language is not yet universally used or recognized,
2. the CIS design method proposed (cf. section 3) needs a partially "automated translation", which requires the incoming BPMN models to respect some conformity conventions: current results on translation rules (cf. [TBP06b]) show how crucial it is to use collaborative process models following specific standards in order to manipulate and translate them easily,
3. the decision to collaborate taken by a group of partners does not systematically imply the precise definition of the collective behavior of that partnership. Building process models, and all the more so collaborative process models, is a delicate and demanding activity.

Given these observations, we believe that the building of collaborative process models takes on a significant weight. It seems reasonable to define a formal design phase for collaborative BPMN diagrams (in order to obtain trustworthy and adequate models). The approach we propose is based on two steps:

- Building a cartography of collaborative processes: the field of these particular processes is not a well-defined universe. We propose to define several pertinent characterization criteria to locate a collaboration and its associated processes in the space of collaborative behavior. The goal is to obtain a system of reference allowing the compartmentalization of collective processes, and therefore the use of generic or dedicated models (to build specific process diagrams).
- Proposing a method to build collaborative process models. It will be based on the cartography built above to assist the modeling phase of BPMN diagram drawing. The models thus obtained will then be compliant with the requirements of the translator (as they will be built according to a specific method whose purpose is to meet the specific conventions of the translator).

Starting from the results of [KD96], the first characterization criteria on which we base the cartography of this unexplored field are the following:

- **Nature of the collaboration:** [KD96] proposes four groups of collaboration types (seen as a base of constructs): enterprise/client, enterprise/supplier, enterprise/service provider, enterprise/competitor. These relations help to locate one particular collaboration case and its associated processes on a first axis.
- **Nature of the network:** [KD96] and [AD02] propose different types of network, from the centralized network (where a principal collaborates with several more or less significant partners) to the chaotic network (where each partner may have a specific relation with any other), including supply chain networks and other well-known standard models.
- **Dynamic aspect of the collaboration:** this characteristic was exposed in section 2. It concerns the temporal aspects of collaboration. It helps to define whether a network is based on sporadic, regular, cyclic or instituted relations.

This list of criteria outlines the first ideas on which we base our current activities on collaboration characterization. This study will assist the modeling of collaborative processes as exploitable BPMN diagrams; a small sketch of such a reference system is given below.
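As a sketch of what such a system of reference could look like (our illustration, with hypothetical axis values restating the three criteria above), each collaboration case can be located by a tuple of the three characterization criteria.

```python
from dataclasses import dataclass

# Hypothetical axis values restating the criteria from [KD96]/[AD02];
# the real cartography is still under construction in this research work.
COLLABORATION_NATURE = {"client", "supplier", "service_provider", "competitor"}
NETWORK_NATURE = {"centralized", "supply_chain", "chaotic"}
DYNAMICS = {"sporadic", "regular", "cyclic", "instituted"}

@dataclass
class CartographyPoint:
    """Locates one collaboration (and its processes) in the reference space."""
    collaboration_nature: str
    network_nature: str
    dynamics: str

    def __post_init__(self):
        assert self.collaboration_nature in COLLABORATION_NATURE
        assert self.network_nature in NETWORK_NATURE
        assert self.dynamics in DYNAMICS

# Example: a centralized supplier network with regular relations; a generic
# BPMN process model could then be selected for this region of the space.
point = CartographyPoint("supplier", "centralized", "regular")
```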
#### 4.2 Robustness of processes and flexibility of information systems

The notion of process cartography presented above brings two additional perspectives. The first one deals with the adaptability and flexibility of the collaborative information system. The second one is linked to risk management in collaborative process models.

First of all, being able to navigate the field of collaborative processes gives us some flexibility as far as the proposals of this article are concerned. Indeed, processes (and collaborative processes) may evolve dynamically, and the collaboration may also change over time. The CIS supporting the partnership should follow these changes, and this is a critical issue for this research work. Thus, being able to instantaneously locate the collaboration (and the processes involved) in a system of reference is a significant asset for CIS evolution. One can imagine modifying the collaborative process models in an almost continuous way (and, at the same time, the CIS supporting these processes). Such considerations lead us to think that we will be able to follow the dynamic changes of a collaboration.

Secondly, using generic models of collaborative processes (associated with specific places in the collaboration system of reference) brings the possibility of categorizing activities. Tagging activities can help identify the risks associated with one type of activity, and thereby with a complete process. Indeed, we can assume that one particular sort of activity will carry its own set of generic risks (usually linked to that kind of task). For instance, an activity tagged *transport* will systematically be exposed to the same kinds of risks (different from those associated with an activity tagged *design*). Such considerations should help risk identification, analysis and treatment. We believe these observations should lead to further research work in the field of interoperability management, and especially in the field of supporting collaboration in a robust and dynamic way.

### Conclusion

This article is based on three parts. The first one explores the notion of collaboration and argues that it particularly implies information system collaboration. This observation brings us to discuss the concept of collaboration itself and its two identified dimensions (static and dynamic). Information system interoperability is also discussed by introducing the notion of the collaborative information system (as a mediator between partners' IS), which is a vector of data transmission, application management and process orchestration.

The second part presents a proposal for a CIS design method. This proposal rests on model translation: from collaborative process BPMN diagrams to a CIS UML model. It includes several stages (such as knowledge extraction and logical architecture filling). The partial model thus obtained is a trustworthy and advanced base to be enriched with additional information on the partners and the network itself.

The third part exposes the perspectives and lines of research that emerged from the first results. The topics mentioned in that section concern the flexibility and adaptability of the collaborative information system (thanks to the localization of a collaboration in a collaborative process cartography) but also the robustness of collaborative processes (through a risk management approach).
This article presents a view of the concept of enterprise collaboration based on the characterization of the partnership which, from our viewpoint, is the most relevant one: the modeling of the collaborative processes of the network.

References
{"Source-Url": "https://imt-mines-albi.hal.science/hal-03998310/document", "len_cl100k_base": 5760, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 32555, "total-output-tokens": 6533, "length": "2e12", "weborganizer": {"__label__adult": 0.0004668235778808594, "__label__art_design": 0.003032684326171875, "__label__crime_law": 0.0010747909545898438, "__label__education_jobs": 0.005924224853515625, "__label__entertainment": 0.00018489360809326172, "__label__fashion_beauty": 0.0002377033233642578, "__label__finance_business": 0.018829345703125, "__label__food_dining": 0.0006017684936523438, "__label__games": 0.001064300537109375, "__label__hardware": 0.0017404556274414062, "__label__health": 0.0007023811340332031, "__label__history": 0.0006618499755859375, "__label__home_hobbies": 0.0002536773681640625, "__label__industrial": 0.0034542083740234375, "__label__literature": 0.0007958412170410156, "__label__politics": 0.0007147789001464844, "__label__religion": 0.0005440711975097656, "__label__science_tech": 0.2685546875, "__label__social_life": 0.00022661685943603516, "__label__software": 0.09051513671875, "__label__software_dev": 0.5986328125, "__label__sports_fitness": 0.0002560615539550781, "__label__transportation": 0.0013256072998046875, "__label__travel": 0.0003199577331542969}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31630, 0.00919]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31630, 0.29377]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31630, 0.93159]], "google_gemma-3-12b-it_contains_pii": [[0, 1595, false], [1595, 3960, null], [3960, 5353, null], [5353, 6759, null], [6759, 8833, null], [8833, 12042, null], [12042, 14853, null], [14853, 16271, null], [16271, 18817, null], [18817, 21371, null], [21371, 22696, null], [22696, 23808, null], [23808, 26475, null], [26475, 28914, null], [28914, 31630, null], [31630, 31630, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1595, true], [1595, 3960, null], [3960, 5353, null], [5353, 6759, null], [6759, 8833, null], [8833, 12042, null], [12042, 14853, null], [14853, 16271, null], [16271, 18817, null], [18817, 21371, null], [21371, 22696, null], [22696, 23808, null], [23808, 26475, null], [26475, 28914, null], [28914, 31630, null], [31630, 31630, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31630, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31630, null]], "pdf_page_numbers": [[0, 1595, 1], [1595, 3960, 2], [3960, 5353, 3], [5353, 6759, 4], [6759, 8833, 5], [8833, 12042, 6], [12042, 14853, 7], [14853, 16271, 8], [16271, 18817, 9], [18817, 21371, 10], [21371, 22696, 11], 
[22696, 23808, 12], [23808, 26475, 13], [26475, 28914, 14], [28914, 31630, 15], [31630, 31630, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31630, 0.0]]}
INSTRUCTIONS

- You have 2 hours to complete the exam.
- The exam is open book, open notes, closed computer, closed calculator. The official CS 61A midterm 1 and 2 study guides will be provided.
- Mark your answers on the exam itself. We will not grade answers written on scratch paper.

<table> <thead> <tr> <th>Last name</th> <th></th> </tr> </thead> <tbody> <tr> <td>First name</td> <td></td> </tr> <tr> <td>Student ID number</td> <td></td> </tr> <tr> <td>BearFacts email (@berkeley.edu)</td> <td></td> </tr> <tr> <td>Room in which you are taking this exam</td> <td></td> </tr> <tr> <td>TA</td> <td></td> </tr> <tr> <td>Name of the person to your left</td> <td></td> </tr> <tr> <td>Name of the person to your right</td> <td></td> </tr> </tbody> </table>

I pledge my honor that during this examination I have neither given nor received assistance. (please sign)

Reference. Some questions make use of the following class definitions from labs and homework:

```python
class Link:
    empty = ()

    def __init__(self, first, rest=empty):
        assert rest is Link.empty or isinstance(rest, Link)
        self.first = first
        self.rest = rest

    def __repr__(self):
        if self.rest is not Link.empty:
            rest_str = ', ' + repr(self.rest)
        else:
            rest_str = ''
        return 'Link({0}{1})'.format(repr(self.first), rest_str)

    def __len__(self):
        return 1 + len(self.rest)

    def __getitem__(self, i):
        if i == 0:
            return self.first
        else:
            return self.rest[i-1]

    def __str__(self):
        string = '<'
        while self.rest is not Link.empty:
            string += str(self.first) + ', '
            self = self.rest
        return string + str(self.first) + '>'

class Tree:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def __repr__(self):
        if self.children:
            children_str = ', ' + repr(self.children)
        else:
            children_str = ''
        return 'Tree({0}{1})'.format(self.label, children_str)

    def is_leaf(self):
        return not self.children
```

1. (12 points) Pointers

For each of the following code fragments, add arrows and values to the object skeletons to the right to show the final state of the program. Single boxes are variables that contain pointers. Double boxes are Links. Not all boxes need be used.

(a) (3 pt)

```python
L = Link(1, Link(2))
P = L
Q = Link(L, Link(P))
P.rest.rest = Q
```

(b) (3 pt)

```python
L = Link.empty
for i in range(3):
    L = Link(i, L)
```

(c) (3 pt) For the next two problems, show the result of executing the code on the left on the initial conditions displayed on the right. We've done the first statement for you in each case, so that the diagrams on the right show the state at the point marked # START. Use the empty object skeletons only for newly created `Link` objects. If any pointer is modified, neatly cross out the original pointer and draw in the replacement. Show only the final state, not any intermediate states.

```python
P = Link(0, Link(1, Link(2)))  # START

def crack1(L):
    if L is Link.empty:
        return (Link.empty, Link.empty)
    L1, L2 = crack1(L.rest)
    return (Link(L.first, L2), L1)

Q, R = crack1(P)
```

(d) (3 pt)

```python
P = Link(0, Link(1, Link(2)))  # START

def crack2(L):
    if L is Link.empty:
        return (Link.empty, Link.empty)
    L1, L2 = crack2(L.rest)
    L.rest = L2
    return (L, L1)

Q, R = crack2(P)
```

2. (6 points) Complexity

As indicated in lecture, an assertion such as \( \Theta(f(n)) \subseteq \Theta(g(n)) \) means "any function that is in \( \Theta(f(n)) \) is also in \( \Theta(g(n)) \)."

(a) (1.5 pt) Circle each of the following that is true.

A. \( \Theta(f(n)) \subseteq O(f(n)) \)
B. \( \Theta(2x^2 + 1000x) \subseteq \Theta(x^2) \)
C. \( \Theta(x^2) \neq \Theta(2x^2 + 1000x) \)
D.
\( O(1/n) \subseteq O(1) \)
E. \( \Theta(1/n) \subseteq \Theta(1) \)

(b) (1.5 pt) Assume that \( M \) is an \( N \times N \) array (an \( N \)-long Python list of \( N \)-long lists). Consider the following program:

```python
def search(M, x):
    N = len(M)
    Li, Uj = 0, N-1
    while Li < N and Uj >= 0:
        if M[Li][Uj] < x:
            Li += 1
        elif M[Li][Uj] > x:
            Uj -= 1
        else:
            return True
    return False
```

Circle the order of growth that best describes the worst-case execution time of a call to `search` as a function of \( N \).

A. \( \Theta(N) \)  B. \( \Theta(N^2) \)  C. \( \Theta(\log N) \)  D. \( \Theta(2N^2) \)  E. \( \Theta(2^N) \)

(c) (1.5 pt) Consider the following implementation of `count`, which takes in a linked list of numbers `lst` and an unordered Python list of numbers `nums`, and returns a count of the number of values in `lst` that appear in `nums`:

```python
def count(lst, nums):
    """The number of elements in linked list LST that appear
    in the unordered Python list NUMS.
    >>> L = Link(2, Link(4, Link(2, Link(3, Link(1)))))
    >>> count(L, [2, 1, 5])
    3
    """
    curr = lst
    count = 0
    while curr != Link.empty:
        if curr.first in nums:
            count += 1
        curr = curr.rest
    return count
```

Circle the order of growth that best describes the worst-case execution time of `count`, as a function of \( n \), the length of `nums`, and \( m \), the length of `lst`. Since `nums` is a Python list, the `in` operator uses simple linear search.

A. $\Theta(n)$  B. $\Theta(m)$  C. $\Theta(n^2)$  D. $\Theta(n + m)$  E. $\Theta(nm)$  F. $\Theta(mn^2)$

(d) (1.5 pt) Consider the following function for computing powers of a polynomial:

```python
def polypow(P, k):
    """P ** k, where P is a polynomial and K is a non-negative integer."""
    result = Poly(1)
    while k != 0:
        if k % 2 == 1:
            result = result.mult(P)
        P = P.mult(P)
        k = k // 2
    return result
```

Circle the order of growth that best describes the worst-case execution time of `polypow`, as a function of `k`, where execution time is measured in the number of times that the `.mult` method is called.

A. $\Theta(k)$  B. $\Theta(k^2)$  C. $\Theta(\sqrt{k})$  D. $\Theta(\log k)$  E. $\Theta(2^k)$

3. (8 points) Seeing Double

Fill in the functions below to produce linked lists in which each item of the original list is repeated immediately after that item. Your solutions should be iterative, not recursive.

(a) (4 pt) The function `double1` is non-destructive, and produces a new list without disturbing the old.

```python
def double1(L):
    """Returns a list in which each item in L appears twice in sequence.
    It is non-destructive.
    >>> Q = Link(3, Link(4, Link(1)))
    >>> double1(Q)
    Link(3, Link(3, Link(4, Link(4, Link(1, Link(1))))))
    >>> Q
    Link(3, Link(4, Link(1)))
    >>> double1(Link.empty)
    ()
    """
    result = _______________________
    last = None
    while L is not Link.empty:
        if last is None:
            _____________________________________________
            _____________________________________________
        else:
            _____________________________________________
            _____________________________________________
        _____________________________________________
        _____________________________________________
        _____________________________________________
    return result
```

(b) (4 pt) The function `double2` is destructive, and reuses `Link` objects in the original list wherever possible.

```python
def double2(L):
    """Destructively modifies L to insert duplicates of each item
    immediately following the item, returning the result.
    >>> Q = Link(3, Link(4, Link(1)))
    >>> double2(Q)
    Link(3, Link(3, Link(4, Link(4, Link(1, Link(1))))))
    >>> Q
    Link(3, Link(3, Link(4, Link(4, Link(1, Link(1))))))
    """
    result = ____________________________
    while L is not Link.empty:
        _____________________________________________
        _____________________________________________
        _____________________________________________
        _____________________________________________
        _____________________________________________
    return result
```

4. (1 point) Extra

Last September, twin LIGO detectors observed gravitational waves that emanated from the merger of two black holes. In the process of this merger, three solar masses (roughly \(6 \times 10^{30}\) kg) were converted into gravitational energy. How many planets the size of earth (roughly \(6 \times 10^{24}\) kg) could this much energy accelerate to 1% of lightspeed (about 3000 km/sec)?

5. (8 points) Heaps of Trouble

A (min-)heap is a tree with the special property (the heap property) that every node has a label that is less than the labels of all its child nodes. This means that the minimum element of the heap is at the root, so it can be found in constant time. For example:

```
      2
     / \
    4   30
   / \    \
  90  9    5
```

Suppose we have a heap containing at least two values. To remove and return its smallest element, while maintaining the heap property, we use the following function:

```python
def remove_smallest(H):
    """Destructively remove and return the smallest value from heap H,
    restoring the heap property. Assumes H has at least two elements."""
    result = H.label
    H.label = remove_leaf(H)   # Step 1
    reheapify(H)               # Step 2
    return result
```

The function `remove_leaf` removes one of the leaves from the heap, returning its label. The diagram on the left below shows the state of the heap above after executing Step 1 of `remove_smallest`. In general (as shown), this will cause the root to violate the heap property. To restore it, we use the function `reheapify`, which first swaps the root's label with that of its smallest child (giving the tree in the middle below). If, as a result, the heap property is still violated (as in the example), `reheapify` repeats the process down the tree until the value inserted at the top reaches a point where it is smaller than all its children, which will always be true if it reaches a leaf, as happens in the example below (shown on the right), but can also happen before that.

(a) (4 pt) Write the function `remove_leaf` to remove a leaf from a heap destructively and return its label. Any leaf will do, but to be specific, have it remove the leftmost leaf of the leftmost child of the leftmost child... of the root. Again, we assume that there are at least two values in the heap.

```python
def remove_leaf(H):
    """Destructively remove far leftmost leaf of H, returning its label."""
    child = H.children[0]
    if ________________________:
        v = child.label
        H.children = _____________________________
        return v
    else:
        return _____________________________
```

(b) (4 pt) Write the function `reheapify` to restore the heap property of a heap destructively, assuming that initially it is violated (if at all) only at the root.

```python
def reheapify(H):
    """Destructively restore the heap property of H, assuming it is
    violated only at H itself, if at all."""
    if ________________________:
        return
    else:
        s = H.children[0]
        for c in H.children:
            if ________________________:
                s = c
        if ________________________:
            s.label, H.label = _____________________________
```
6. (8 points) OOPs

Given the class definitions on the left, fill in the blanks to show what the Python interpreter would print. Print "ERROR" for cases that would cause an exception. Put "<None>" for cases where the Python interpreter would print nothing.

```python
class Person:
    name = "Outis"

    def get_name(self):
        return self.name

    def response(self, question):
        v = self.cogitate(question)
        if v is None:
            return "I do not know"
        else:
            return v

    def cogitate(self, question):
        return None

    def set_name(self, new_name):
        self.name = new_name

    def __str__(self):
        return self.name

class Learner(Person):
    def __init__(self):
        self.facts = {}

    def learn(self, question, answer):
        self.facts[question] = answer
        return 'Got it'

    def cogitate(self, question):
        if question in self.facts:
            return self.facts[question]

class Beginner(Learner):
    def __init__(self, name):
        Learner.__init__(self)
        self.set_name(name)

    def response(self, question):
        r = Person.response(self, question)
        return "I think " + r
```

```python
>>> odysseus = Learner()
>>> odysseus.learn('god', 'Athena')
>>> hipp = Beginner('Hippothales')
>>> hipp.learn('favorite person', 'Lysis')
>>> odysseus.get_name()
>>> hipp.get_name()
>>> Person.name = "Nemo"
>>> hipp.get_name()
>>> odysseus.get_name()
>>> odysseus.set_name(odysseus.get_name())
>>> Person.name = "Nobody"
>>> odysseus.get_name()
>>> someone = Person()
>>> someone.learn('Earth mass', '5.972e24 kg')
>>> someone.response('Earth mass')
>>> hipp.response('favorite person')
>>> odysseus.response('god')
```

7. (8 points) Evicted!

An LRU cache (stands for "least recently used") is a kind of dictionary that can only hold a fixed, finite number of keys (its capacity) and corresponding values. When addition of a new key would exceed that capacity, the least recently accessed key in the cache is removed ("evicted") and replaced with the new value. Such caches are used to speed up access to some relatively slow, but much larger dictionaries. For example, most computers have a large main memory and various caches for saving and retrieving recently accessed memory values; the latter can be 200 times faster than the former.

(a) (2 pt) Consider the following "slow" dictionary implementation:

```python
class SlowData:
    """Simulates a basic read-only memory store of KEY => VALUE mappings.
    >>> slow_data = SlowData(((0, 'a'), (1, 'b'), (2, 'c')))
    >>> slow_data[1]
    'b'
    >>> slow_data[2]
    'c'
    """
    def __init__(self, data):
        self._data = data  # A sequence of (KEY, VALUE) tuples

    def __getitem__(self, key):
        """Get the value associated with KEY, or None if there is none."""
        for curr_key, curr_value in self._data:
            if key == curr_key:
                return curr_value
        return None
```

If `mem` is a SlowData containing N tuples, what is the worst-case execution time for the following code fragment?

```python
result = 0
for i in range(N):
    result += mem[i]
```

Circle the correct answer below.

A. \( \Theta(N) \)  B. \( \Theta(N \log N) \)  C. \( \Theta(N^2) \)  D. \( \Theta(N^3) \)

(b) (4 pt) An LRUCache object is intended to provide access to values from a SlowData in such a way that the results of some recent accesses to the SlowData object are saved and subsequently accessed quickly. To do this, the cache keeps a list of key/value tuples whose size has a fixed upper limit. If a key that is in the cache is accessed, its corresponding value is fetched from this list without consulting the SlowData object. If a key is not in the cache, it is fetched from the SlowData object.
Each time a value is referenced, it is placed at or moved to the end of the cache list, and if that makes the list too long (longer than the capacity), the first item in the list is removed (so that it will have to be retrieved from the SlowData object if accessed again). Fill in the code below to have this behavior. (A convenient way to remove the item at index k from a list L is `del L[k]`.)

```python
class LRUCache:
    def __init__(self, capacity, slow_data):
        self._capacity = capacity
        self._slow_data = slow_data
        self._cache = []

    def __getitem__(self, key):
        for i in range(len(self._cache)):
            pair = self._cache[i]
            if ______________________:
                ______________________
                ______________________
                ______________________
                return pair[1]
        v = self._slow_data[key]
        self._cache____________________________
        if len(self._cache) > self._capacity:
            del ________________________________
        return v
```

(c) (1 pt) If `mem` is a SlowData containing N tuples, what is the worst-case execution time for the following code fragment?

```python
cached_mem = LRUCache(4, mem)
result = 0
for i in range(N):
    result += cached_mem[i]
```

Circle the correct answer below.

A. \( \Theta(N) \)  B. \( \Theta(N \log N) \)  C. \( \Theta(N^2) \)  D. \( \Theta(N^3) \)

(d) (1 pt) If `cached_mem` is as above, what is the worst-case execution time for the following code fragment?

```python
result = 0
for i in range(N):
    result += cached_mem[i % 4]
```

A. \( \Theta(N) \)  B. \( \Theta(N \log N) \)  C. \( \Theta(N^2) \)  D. \( \Theta(N^3) \)
{"Source-Url": "http://inst.eecs.berkeley.edu/~cs61a/sp16/assets/pdfs/61a-sp16-mt2.pdf", "len_cl100k_base": 4378, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 24342, "total-output-tokens": 5129, "length": "2e12", "weborganizer": {"__label__adult": 0.0006737709045410156, "__label__art_design": 0.0007576942443847656, "__label__crime_law": 0.0006422996520996094, "__label__education_jobs": 0.09051513671875, "__label__entertainment": 0.00017440319061279297, "__label__fashion_beauty": 0.0003693103790283203, "__label__finance_business": 0.0004529953002929687, "__label__food_dining": 0.0012760162353515625, "__label__games": 0.0016002655029296875, "__label__hardware": 0.002025604248046875, "__label__health": 0.0008563995361328125, "__label__history": 0.0007581710815429688, "__label__home_hobbies": 0.0005631446838378906, "__label__industrial": 0.0012102127075195312, "__label__literature": 0.0008449554443359375, "__label__politics": 0.0004732608795166016, "__label__religion": 0.001178741455078125, "__label__science_tech": 0.03253173828125, "__label__social_life": 0.0005898475646972656, "__label__software": 0.01081085205078125, "__label__software_dev": 0.84912109375, "__label__sports_fitness": 0.0009312629699707032, "__label__transportation": 0.0011873245239257812, "__label__travel": 0.0005483627319335938}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16584, 0.01924]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16584, 0.81654]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16584, 0.71925]], "google_gemma-3-12b-it_contains_pii": [[0, 743, false], [743, 2022, null], [2022, 2449, null], [2449, 3377, null], [3377, 4471, null], [4471, 6082, null], [6082, 8096, null], [8096, 10093, null], [10093, 11294, null], [11294, 12950, null], [12950, 14496, null], [14496, 16584, null]], "google_gemma-3-12b-it_is_public_document": [[0, 743, false], [743, 2022, null], [2022, 2449, null], [2449, 3377, null], [3377, 4471, null], [4471, 6082, null], [6082, 8096, null], [8096, 10093, null], [10093, 11294, null], [11294, 12950, null], [12950, 14496, null], [14496, 16584, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16584, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16584, null]], "pdf_page_numbers": [[0, 743, 1], [743, 2022, 2], [2022, 2449, 3], [2449, 3377, 4], [3377, 4471, 5], [4471, 6082, 6], [6082, 8096, 7], [8096, 10093, 8], [10093, 11294, 9], [11294, 12950, 10], [12950, 14496, 11], [14496, 16584, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16584, 0.02452]]}
Test Case Generation for Flexible Real-Time Control Systems

Robert Nilsson
School of Humanities and Informatics, University of Skövde
Box 408, SE-54128 Skövde, Sweden
robert.nilsson@his.se

Dan Henriksson
Department of Automatic Control, Lund University
Box 118, SE-22100 Lund, Sweden
dan.henriksson@control.lth.se

Published in: Emerging Technologies and Factory Automation, 2005. ETFA 2005. 10th IEEE Conference on, 2005.

Abstract

Temporal correctness is crucial for the dependability of real-time control systems. A problem with testing such systems is the dependency on the execution orders of tasks. Mutation-based testing criteria have been proposed to determine which execution orders need to be exercised to verify that real-time systems are timely. For flexible control systems, timeliness in itself may only be relevant for a subset of tasks, whereas maintained control performance in the presence of worst-case jitter and disturbances is essential. This paper presents an extension to the co-simulator tool TrueTime to support mutation-based testing of control performance and timeliness. Further, an approach for automatic generation of test cases using genetic algorithms is presented. A conclusion is that testing criteria for timeliness can be used to increase confidence in the dependability of flexible control systems.

1. Introduction

Current real-time control systems must be both flexible and dependable. At the same time, there is a desire to increase the number of services that real-time systems offer while using few, off-the-shelf hardware components. This increases system complexity and introduces sources of temporal non-determinism. Thus we need methods to detect violations of timing constraints and poor control performance on computer architectures where we cannot rely on accurate off-line assumptions.

Timeliness is the ability of software to meet timing constraints. For example, a timing constraint can be that it should never take more than 100 ms from the moment an alarm is activated until a robot arm enters a safe state. If system timeliness is violated, a timeliness failure has occurred. Response times of concurrent tasks depend on the order in which the tasks execute. This is particularly evident in event-triggered and dynamically scheduled systems, because sporadic interrupts can continuously influence the execution order and schedule. Hardware caches also affect the execution times of tasks, causing response times to become non-deterministic with respect to the inputs, which complicates verification and accurate estimation.

Timeliness of embedded real-time systems is traditionally analyzed and maintained using scheduling analysis, or regulated online through admission control and contingency schemes [19]. However, such techniques make assumptions about the tasks' execution behavior and request patterns. Further, doing full scheduling analysis with non-trivial system models is complicated. Thus, analysis must be complemented with timeliness testing.

Many real-time systems have tasks that implement control applications. Such applications interact with physical processes through sensors and actuators to achieve a control goal. For example, a painting robot may have a control application that periodically samples joint angles and sets different motor torques so that the robot movement becomes smooth and aligned with the painted object.
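As a toy illustration of what a timeliness failure looks like at run time (our sketch, not part of the paper's toolchain), one can monitor task response times against a constraint such as the 100 ms alarm-to-safe-state bound above. The response-time model below (a base handling time plus random interference) is purely an assumption for illustration.

```python
import random

# Toy monitor for a timing constraint: "never more than 100 ms between
# alarm activation and the robot arm reaching a safe state".
CONSTRAINT_S = 0.100

def simulated_response_time():
    base = 0.040                           # uninterrupted handling time (assumed)
    interference = random.expovariate(50)  # delays from preempting tasks (assumed)
    return base + interference

failures = []
for trial in range(10000):
    rt = simulated_response_time()
    if rt > CONSTRAINT_S:                  # a timeliness failure has occurred
        failures.append((trial, rt))
print(f"{len(failures)} timeliness failures out of 10000 activations")
```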
We use the term flexible control system to denote a real-time system that is event-triggered and dynamically scheduled and has a mix of reactive hard and soft tasks. The hard tasks must always meet their timing constraints, whereas soft tasks are more tolerant to delays and irregularities; typically, some deadlines of soft tasks can be missed before the system fails.

Most control applications are based on feedback principles, which means that they are inherently robust against occasional timeliness failures. An occasional deadline miss for a periodic controller task is not fatal for system stability, but can rather be seen as a disturbance acting on the control system. Hence, controllers can be implemented with soft tasks. Soft tasks are also referred to as adaptive tasks [5], in that missing single deadlines does not jeopardize correct system behavior, but only leads to a performance degradation. One example is EDF scheduling during overload, which will effectively lead to a re-scaling of the sampling periods of all tasks. In this case all tasks will actually miss all their deadlines; however, the performance of the control loops may still be acceptable with the slightly longer sampling intervals.

Instead, control algorithms contain built-in timing constraints that are more subtle than response time deadlines. The delay in each sample between the reading of the inputs and the generation of the outputs is known as input-output latency. Excessive input-output latency will compromise the performance of the control system, and may even cause instability. Further, depending on the process under control and the controller design, there will be a maximum variation (jitter) in the sampling instants and the input-output latency that can be tolerated to guarantee system stability [6]. If these constraints are violated, the control application may fail its control goal or even become unstable. We call this a control failure. Moreover, since faults in the estimation of temporal properties may result in unanticipated behavior, it is relevant to test the control performance of the soft controller tasks.

In this paper we present an extension to the real-time co-simulation tool TrueTime to support test case generation. We also evaluate the capability of revealing failures in flexible control systems in a proof-of-concept test case generation experiment. Our results indicate that a test case generation method for testing of timeliness can also be used for generating test cases that reveal control failures.

2. Automated test case generation

When testing software, a test criterion is typically set up to define the test requirements that must be satisfied. Examples of test criteria include 'execute all statements' and 'cover all transitions' in a state machine. The mutation-based testing technique presented in this paper is mainly inspired by a specification-based method for automatic test case generation presented by Ammann, Black, and Majurski [2]. The main idea behind the technique is to systematically "guess" what faults a real-time design contains and then evaluate what the worst effect of such faults could be. Each hypothesized fault is represented as a copy of the system specification containing that fault; such a specification is called a mutant*. As a part of test case generation, the mutant models are analyzed and classified as benign or malignant.
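The overall mutant workflow can be pictured with a short skeleton. The Python below is an illustrative sketch with invented names (the paper itself operates on TAT models, not Python objects): each operator returns mutated copies of the specification, and an analysis step classifies them.

```python
# Illustrative skeleton (invented names): each hypothesized fault yields
# a mutated copy of the specification; mutants are then analyzed and
# classified as benign or malignant.

def generate_mutants(spec, operators):
    """spec: a specification object; operators: functions that each
    return a list of mutated copies of the specification."""
    mutants = []
    for op in operators:
        mutants.extend(op(spec))
    return mutants

def classify(mutants, analyze):
    """analyze(mutant) returns a failure trace, or None if no failure
    was found; mutants with a failure trace are the malignant ones."""
    malignant = [(m, trace) for m in mutants
                 if (trace := analyze(m)) is not None]
    return malignant  # benign mutants are simply discarded
```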
Mutants containing faults with bad consequences are classified as malignant, and specialized test cases are constructed that aim to reveal those faults if they exist in the final implementation. For a more detailed overview of mutation-based test case generation, consider Figure 1. The inputs to mutation-based testing are a specification of a real-time system and a test criterion. The test criterion specifies the mutation operators to use when creating mutants, and thus determines the kind of test cases that are produced. An advantage of using mutation-based testing criteria is that the testing effort can be estimated and quantified by the number of malignant mutants.

*Note: These mutants are not related to the mutations and crossovers performed when using genetic algorithms.

A mutant generator applies the mutation operators and sends the mutated specifications to an execution order analyzer that determines if and how a mutation can lead to a timeliness or control failure (see Figure 1). If the analysis reveals a missed hard deadline or an unstable controller, the mutant is marked as killed; otherwise, the mutation is considered benign and discarded. For pure timeliness testing, this execution order analysis can be done using model checking [14]. Traces from the killed mutants are sent to a test case generation filter that converts the traces to input sequences and their corresponding expected and critical execution orders. These are used as test cases for the target system. Test execution is focused on running the generated input sequences and trying to detect the derived critical execution orders using, for example, prefix-based or non-deterministic testing techniques [10, 13, 17].

2.1. System model

Real-time application behavior is typically modelled by a set of periodic and sporadic tasks that compete for system resources. Periodic tasks are requested with fixed inter-arrival times, so the times when the task will be requested are known. Sporadic tasks can be requested as a response to some event at any time. However, to simplify analysis, sporadic tasks are specified with a minimum inter-arrival time. If no minimum inter-arrival time can be determined, the task is aperiodic. Each real-time task has a specified deadline and sometimes an offset, which denotes the time before the first instance of that task type is requested.

In this paper we use a subset of Timed Automata with Tasks (TAT) [15, 7] to specify the assumptions about the system under test. Timed Automata (TA) [1] have been used to model different aspects of real-time systems. A timed automaton is a finite state machine extended with a collection of real-valued clocks. Each transition can have a guard, an action and a number of clock resets. A guard is a condition on clocks and variables, e.g., a time constraint. An action can perform operations such as assigning values to variables. The clocks increase uniformly from zero until they are individually reset in a transition. When a clock is reset, it is instantaneously set to zero and then starts to increase at the same rate as the other clocks. Within TAT, TA is used for specifying the activation patterns of tasks, i.e., the points in time when a task is requested for execution. In this paper we focus on sporadic and periodic tasks with generic automata templates. TAT extends the TA notation with a set of real-time tasks \( P \). The set \( P \) represents tasks that perform computations in response to requests.
Elements in \( P \) express information about tasks as quadruples \((c, d, SEM, PREC)\). \( c \) is the required execution time and \( d \) is the relative deadline. These values are used to create a new task instance as it is released by a TAT automaton action. Shared resources are modelled by a set of system-wide semaphores \( R \). \( SEM \) is a set of tuples of the form \((s, t_1, t_2)\), where \( t_1 \) and \( t_2 \) are the relative lock and unlock times of semaphore \( s \in R \) when an instance of the task is executed. Precedence constraints are relations between pairs of tasks \( A \) and \( B \) stating that an instance of task \( A \) must have completed before the execution of two consecutive instances of task \( B \). For example, such constraints can model a blocking producer-consumer relation between tasks \( A \) and \( B \). Hence, \( PREC \) is a subset of \( P \) that specifies which tasks must precede a task of this type. A specification of the execution of a task, including the points in time when resources are locked and unlocked, is called an execution pattern in this paper. Figure 2 shows the execution pattern of task \( A \) in Table 1.

In TAT, task execution times are fixed. This may appear unrealistic if the input data to a task is allowed to vary. However, to divide the testing problem, this test case generation step assumes that each task is associated with a particular (typical or worst-case) equivalence class of input data, so that the only variance in execution times comes from non-deterministic components and the target platform. Several complementary methods exist for deriving such classes of input data for real-time tasks [16, 12]. Further, when a malignant mutant is found, tasks can be run in a critical execution order to see if a failure can be reproduced in the real system using other input data.

### 2.2. Mutation operators

A mutation-based test criterion is defined by a set of mutation operators. Mutation operators have previously been presented for testing of timeliness and formally defined for TAT specification models [14]. In this paper, we summarize the relevant operators informally and discuss the faults generated by the operators from a flexible control system perspective. In many of the operators, some property of the execution pattern is modified slightly, so \( \Delta \) is used to denote the size of the change.

**Execution time operators:** Execution time mutation operators increase or decrease the assumed execution time of a task by a constant \( \Delta \). These mutants represent an overly optimistic estimation of the worst-case (longest) execution time of a task or an overly pessimistic estimation of the best-case (shortest) execution time. Estimating execution times is generally very hard [16]. The execution time of a task running concurrently with other tasks may also differ slightly from that of the task running uninterrupted, if there are caches and pipelines in the target system.

**Lock time operators:** Lock time mutation operators increase or decrease the time at which a particular resource is locked, relative to the start of the task. In one mutant the lock time is increased by \( \Delta \), and in the other mutant the lock time is decreased by \( \Delta \). An increase in the time a resource is locked increases the maximum blocking time for a higher priority task. Further, if a resource is held for less time than expected, the system can allow execution orders that may result in timeliness or control violations.
This mutation operator requires test cases that can distinguish an implementation where a resource is locked too early from one where it is not.

**Unlock time operators:** Unlock time mutation operators change the time at which a resource is unlocked. For each task and each resource that the task uses, two mutants can be created: one increases the unlock time and one decreases the unlock time of that particular resource. This mutation operator requires test cases that can distinguish an implementation where a resource is held too long from one where it is not.

**Inter-arrival time operators:** This operator decreases or increases the inter-arrival time between requests for a task execution by a constant time \( \Delta \). A decrease reflects a change in the system environment that causes requests to be more frequent than expected. The resulting test cases will stress the system to reveal its sensitivity to higher request frequencies. For periodic tasks (e.g., controllers), a decrease in invocation frequency may also result in failures.

**Pattern offset operators:** Recurring requests can have patterns that are assumed to have fixed offsets relative to each other; for example, periodic tasks with harmonic activation patterns. This operator changes the offset between such patterns by increasing or decreasing the offset by \( \Delta \) time units.

3. Flextime: A test generation extension

Flextime is an add-on tool for the real-time control systems simulator TrueTime [8]. The purpose of the Flextime add-on is primarily to support automated analysis and mutation-based test case generation. For this purpose TrueTime must be adapted to (i) do efficient simulation of TAT system models, (ii) support structured parametrization of simulations, and (iii) simplify extensions that are consistent with TAT specifications. When Flextime is used for mutation-based test case generation, TAT models should be mapped to simulation entities. The following subsections describe the TrueTime tool and how TAT task sets and activation patterns are mapped to simulations by the Flextime extension.

3.1. TrueTime

TrueTime is a real-time kernel simulator based on MATLAB/Simulink. The main feature of the simulator is that it offers the possibility of co-simulating task execution in a real-time kernel and the continuous-time dynamics of the controlled plants. The simulator is mainly used for integrated design of controllers and schedulers, and can be used to analyze the effects of timing non-determinism on the performance of the control systems.

The TrueTime kernel is flexible and highly configurable. Both periodic and aperiodic tasks are supported, and the attributes of the tasks may be changed dynamically during simulation. The scheduling algorithm used by the kernel is configurable by the user. Synchronization between tasks is supported by events, and shared resources can be protected with mutual-exclusion monitors. Each task in TrueTime is implemented in a separate code function that defines its execution behavior. The code function includes everything from interaction with resources, I/O ports and networks to the specification of execution times of different segments. The TrueTime code functions may be written either as C++ functions or as MATLAB m-files.

3.2. Task sets and execution patterns

For the purpose of automated analysis and mutation-based test case generation, we find it useful to separate application functionality from execution behavior.
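This separation, which the next paragraphs describe in terms of Flextime's C++ classes, can be previewed with a simplified sketch. The Python below is only an analogue; the item names follow Section 3.2, but the representation itself is invented for illustration.

```python
from dataclasses import dataclass

# Simplified Python analogue of Flextime's execution items (the real
# tool implements these as C++ classes; names follow Section 3.2).

@dataclass
class executeWork:   # simulate computation for `duration` time units
    duration: float

@dataclass
class takeRes:       # lock the resource with this identifier
    resource: str

@dataclass
class releaseRes:    # unlock the resource with this identifier
    resource: str

# One execution pattern: compute, hold resource R1 for a while, finish
# (the durations echo task A of Figure 6).
pattern = [executeWork(0.001), takeRes("R1"), executeWork(0.005),
           releaseRes("R1"), executeWork(0.002)]
```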
In the Flextime extension, execution times, resource requirements, and precedence constraints are therefore specified separately from the code functions. This specification style makes it possible to specify execution patterns of large task sets without having to generate a specific code function for each task type. The role of code functions in Flextime is specialized to performing control-related calculations and interacting with external Simulink blocks.

Figure 3 shows a subset of the class diagram of Flextime. The class ftTask is an abstract class that maps down to the TrueTime tasks. This means that when objects of any of the sub-classes of this class are created, a TrueTime task is also created and initialized. The abstract ftTask class contains basic information about tasks, such as periods, deadlines and offsets. Moreover, the ftTask class extends TrueTime tasks with a list of execution items that define the execution pattern for each instance of the task. The sub-classes of ftTask and ftResource are primarily used for supporting different concurrency control protocols, but other types of execution environment extensions are also supported. For example, one pair of sub-classes can be used to simulate tasks and resources under the immediate priority ceiling protocol [18], whereas another pair may be used for simulation of tasks under EDF scheduling and the stack resource protocol [4]. The reason why sub-classes are needed for both types of entities is that such protocols often require specific data to be kept with the task and resource representatives.

When an ftTask begins its execution, a virtual do_seg method is called sequentially on each item in the execution item list. Execution items of type takeRes and releaseRes specify that a particular resource is to be locked or unlocked. The do_seg function in these execution items simply invokes a corresponding virtual take and release function in the ftTask class with the resource identifier as a parameter. In this way, the logic associated with acquiring and releasing resources can be implemented in the protocol-specific sub-classes of ftTask, and the execution item classes remain protocol-independent. Execution items of type executeWork are generic and specify that execution of code should be simulated for some duration, and optionally, that a segment of a Flextime code function should be executed.

3.3. Activation patterns

The activation patterns from environment automata triggering periodic tasks are deterministic and can simply be included in the static configuration of the simulator. The activation patterns for sporadic tasks should be varied for each iteration of the simulation to find execution orders that can lead to timeliness or control failures. Consequently, an input to the simulation of a particular system (corresponding to a TAT model) is the set of activation patterns for the sporadic tasks. The relevant output from the simulation is an execution order trace where the sporadic requests have been injected according to the activation pattern. A "positive" output from a mutation testing perspective is an execution order trace that contains violated time constraints or a simulated control failure. By treating test case generation as an optimization-based search problem, different heuristic methods can be applied in order to find a feasible activation pattern for revealing failures. In Section 5 we present an experiment where genetic algorithms are used for this purpose.
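Under one plausible reading of this setup (formalized by the template in Figure 4, discussed next), each delay value adds to the minimum spacing between requests. A hedged sketch of decoding one row of such an activation-pattern matrix into request instants:

```python
import random

def request_times(ofs, miat, delays):
    """Request instants of one sporadic task, assuming each delay value
    adds to the minimum spacing: the first request at OFS + delays[0],
    each later request MIAT + delay after the previous one. (This is
    one plausible reading of the template in Figure 4.)"""
    times = [ofs + delays[0]]
    for d in delays[1:]:
        times.append(times[-1] + miat + d)
    return times

# One candidate activation-pattern matrix: s sporadic tasks, m+1 delay
# values each. Objects like this are what the search will vary.
s, m = 4, 10
t = [[random.uniform(0.0, 5.0) for _ in range(m + 1)] for _ in range(s)]
print(request_times(ofs=10.0, miat=30.0, delays=t[0]))
```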
Genetic algorithms have previously been used on various non-continuous search problems [11], and for testing other aspects of control systems [20].

Figure 4 contains an annotated TAT automaton describing the activation patterns of sporadic tasks. The template has two parameters that are common for a particular mutant. The constant OFS denotes the assumed minimum offset, i.e., the minimum delay before any instance of this task can be requested. The constant MIAT denotes the assumed minimum inter-arrival time between instances of this task. An array of delay values \( t(0..m) \) defines the variable part of the intervals between requests of this task. The constant \( m \) is the maximum number of arrivals that can occur in the simulated interval. By combining the arrays for all sporadic tasks we get a matrix \( t(1..s, 0..m) \) of real values, where each row corresponds to the activation pattern of one sporadic task. Flextime supports importing activation pattern matrices of this type from the global MATLAB workspace. Moreover, relevant information from the simulation run is logged and exported to MATLAB, where it can be analyzed, filtered and converted to test cases.

4. Using Flextime for test generation

Figure 5 gives an overview of how Flextime is used together with other tools to perform automated test case generation. As seen in the figure, a task-set specification must be supplied as input to Flextime simulations and mutation operators. Task-set parameters and execution item lists can be initialized in two different ways in the Flextime tool. One way is to define the execution item lists and task-set parameters statically in the TrueTime initialization code. Figure 6 shows the C++ syntax required for initializing the task set of Table 1. If no specific C++ initialization file is given, Flextime assumes that the system characteristics for a simulation are given through two matrices, TS and XP, in the global MATLAB workspace. As seen in Table 2, the TS matrix contains one row for each task, specifying its type, priority, period, offset, and deadline. Depending on the type of the task, some fields are interpreted differently. For example, if the type field specifies a hard sporadic task, then the period and offset fields are interpreted as the values for the MIAT and OFS parameters of the template in Figure 4. The rows in an XP matrix contain the execution patterns for the tasks with the same row number in the TS matrix. All positive values are translated to simulated execution time. All negative values are assumed to be integers and are used for locking and unlocking the shared resource with the specified index. The XP matrix for the task set in Table 1 is given in Table 3. The first occurrence of '-1' in Table 3 means that the resource with index '1' should be locked, whereas the second occurrence means that it should be unlocked. The matrix representation of task sets has the advantage that different mutation operators can easily be applied to create new mutants. If new task types, concurrency control mechanisms, or scheduling protocols are used, the C++ initialization file must be customized accordingly.

Returning to Figure 5, the task-set specification is used as input to the mutation operators, which automatically create mutants containing hypothesized faults. For the purpose of the following experiments, these operators have been implemented for the matrix representation.
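With the matrix representation, a mutation operator reduces to a small, systematic matrix edit. The sketch below illustrates the execution-time operator in Python; the actual implementation works on the MATLAB TS/XP matrices, and adjusting the last execution segment is just one simple way to change a task's total execution time by Δ.

```python
import copy

def execution_time_mutants(xp, task_row, delta):
    """Return the +delta and -delta execution-time mutants for one task.
    Positive XP entries are simulated execution segments; negative
    entries encode resource lock/unlock and must not be touched."""
    last_seg = max(j for j, v in enumerate(xp[task_row]) if v > 0)
    mutants = []
    for change in (+delta, -delta):
        m = copy.deepcopy(xp)
        # clamp at zero so the entry is never misread as a resource op
        m[task_row][last_seg] = max(0, m[task_row][last_seg] + change)
        mutants.append(m)
    return mutants

xp = [[1, -1, 5, -1, 2],     # execute, lock R1, execute, unlock, execute
      [2, -1, 6, -1, 1]]
for mutant in execution_time_mutants(xp, task_row=0, delta=2):
    print(mutant)
```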
4.1. Applying genetic algorithms

For each created mutant, a search is performed to find an activation pattern that forces the mutant to miss a hard deadline or causes a control failure. A genetic algorithm drives the simulation of a particular mutant by providing a population of initially random activation pattern matrices as input. For each activation pattern matrix, the execution order and control performance traces from the simulation are used to calculate a fitness value. The fitness value reflects the ability of the activation pattern to expose bad behavior of the mutant. Based on their fitness values, the best activation patterns are copied and changed according to stochastic heuristics. The newly created activation patterns replace some of the least fit activation patterns in the set, and the evaluation is iterated in a new generation. In this way, the different execution orders of a mutant are searched for missed deadlines and bad control performance.

Consequently, an application-specific fitness function must be provided to use genetic algorithms. For testing of flexible controllers, both potential timeliness violations and poor control quality must enter the fitness value to drive the heuristic search. The slack of a real-time task is the time between the actual response time of a task instance and its absolute deadline. For the timeliness factor, the minimum slack among hard tasks provides an intuitive fitness value. All slack times can be recorded during the simulation. For evaluation of control performance it is common to use weighted quadratic cost functions. For a scalar system, with one output $y$ and one input $u$, the cost can be written

$$J = \int_{0}^{T_{sim}} \left( y^2(t) + \rho u^2(t) \right) dt \tag{1}$$

where the weight factor, $\rho$, expresses the relation between the two counteracting objectives of control design, i.e., to keep the regulated output close to zero and to keep the control effort small. The controllers for the inverted pendulums described in the next section are explicitly designed to minimize a cost function of the type given by Equation (1); this is called LQ control [3]. The higher the cost during a simulation run, the worse the control performance. Therefore, in this context, we assume that $1/J$ is proportional to the control performance. The fitness of a simulation trace is defined as

$$F = \sum_{k} \frac{1}{J_k} - S_{min} \cdot w, \tag{2}$$

where $S_{min}$ denotes the least slack observed for any hard real-time task. The variable $J_k$ denotes the value of $J$ for flexible controller $k$ at the end of the simulation. A weight variable $w$ is used for adjusting the minimum slack so that the timeliness factor is of the same magnitude as the control quality factor.

Apart from calculating the general fitness that drives the genetic algorithm heuristics towards evaluating more optimal solutions, the fitness function can also be used to detect failures and halt the search. Relevant failure conditions are that (i) a hard critical deadline is missed, (ii) the control system becomes unstable (the cost, $J$, exceeds some threshold value), or (iii) a control constraint is violated, for example, the motion of a robot arm becomes too irregular. Failure conditions (i) and (ii) can easily be detected by checking the minimum slack of hard tasks and the value of the cost function for the controller tasks. Failure condition (iii) is application-specific and might require specific values to be traced during simulation.
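Equations (1) and (2) translate almost directly into code. In the sketch below, the simulation outputs (final controller costs and the minimum observed slack) are assumed to be given, and the instability threshold is an invented, application-specific value.

```python
def fitness(costs, min_slack, w=1.0):
    """Equation (2): F = sum_k 1/J_k - S_min * w, where costs[k] is the
    final LQ cost J_k of flexible controller k and min_slack is the
    least slack observed for any hard real-time task."""
    return sum(1.0 / j for j in costs) - min_slack * w

def detect_failure(costs, min_slack, cost_threshold=1e6):
    """Failure conditions (i) and (ii) of Section 4.1. The threshold
    standing in for 'the cost exceeds some threshold' is illustrative."""
    missed_hard_deadline = min_slack < 0               # condition (i)
    unstable = any(j > cost_threshold for j in costs)  # condition (ii)
    return missed_hard_deadline or unstable
```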
```cpp
// Shared resource, identified by a string ID
IPC_Resource R1("State_Sem");

//           TYPE, StringID,        Priority, MIAT,  OFS,   Deadline
FP_IPC_Task A(SPOR, "Safety_check", 1,        0.040, 0.0,   0.020);
A << 0.001 << +R1 << 0.005 << -R1 << 0.002 << FINISHED;

FP_IPC_Task B(PER, "Pendulum",      2,        0.040, 0.020, 0.040);
B << 0.002 << +R1 << 0.006 << -R1 << 0.001 << FINISHED;
```

Figure 6. C++ syntax for initializing the task set

5. Proof-of-concept experiment

The purpose of this experiment is to investigate whether a mutation-based testing technique can generate test cases for revealing timeliness and control failures in flexible control systems. Hence, the experiment should evaluate whether the mutation operators can create malignant mutants and how effective our genetic-algorithm-based tool set is in finding such malignant mutants.

For this experiment we simulate a real-time system with fixed priorities and shared resources under the immediate priority ceiling protocol [18]. The task set consists of three soft periodic tasks that implement flexible controllers for balancing three inverted pendulums. The linearized equations of the pendulums are given as

$$\ddot{\theta} = \omega_0^2 \theta + \omega_0^2 u$$

where $\theta$ denotes the pendulum angle, $u$ is the control signal, and $\omega_0$ is the natural frequency of the pendulum. The controllers were designed using LQ theory with the objective of minimizing the cost function

$$J = \int \left( \dot{\theta}^2(t) + 0.002 u^2(t) \right) dt$$

Further, the system has four sporadic real-time tasks with hard deadlines, assumed to implement logic for responding to frequent but irregular events, for example, external interrupts or network messages. The system also has two resources that must be shared with mutual exclusion between tasks. Examples of such resources are data structures containing shared state variables and non-reentrant library functions. Table 4 lists the exact properties of the simulated task set. The first column ('ID') contains task identifiers. Columns two to five contain the TAT task-set description tuple as described in Section 2.1. The column 'IAT' contains the assumed inter-arrival times of tasks; periodic tasks are released with fixed inter-arrival times, and the minimum inter-arrival times of the sporadic tasks are defined using the 'MIAT' parameter in Figure 4. Column 'OFS' contains the corresponding parameter value for the sporadic task template; for periodic tasks, this column contains the offset.

Three continuous-time blocks modeling the inverted pendulums were included in the simulation and connected in a feedback loop to the TrueTime block with the flexible control system. Each pendulum has a slightly different natural frequency, ω₀, and the goal of the control application is to balance the pendulums in an upright position. The pendulums have an initial angle of 0.1 radians from the upright position when the simulation starts. An application-specific control failure is assumed to occur when the angle of a pendulum becomes greater than or equal to π/8 (≈ 0.39) radians.

A set of mutants was generated by applying the mutation operators described in Section 2.2 to the extended task set in Table 4, using a Δ of two time units for the first three mutation operator types and four time units for the last two. The total number of mutants generated for each operator type is listed in column "T" of Table 5. The genetic algorithm toolbox [9], developed at North Carolina State University, was used to construct a genetic algorithm that could interact with the Flextime tool.
For the genetic algorithm setup, we used a population size of 25 activation pattern matrices. A mix of generic cross-over functions supplied with the genetic algorithm toolbox and heuristic cross-over functions customized for revealing timeliness failures was used for stochastically changing the activation patterns. The fitness function defined in Section 4.1 was used when analyzing the simulation traces.

First, the unmodified system was simulated for 200 generations to gain confidence in the assumed correct specification. This was repeated five times with different random seeds to protect against stochastic variance. No failures were detected in the original model. Second, each mutant was simulated for 100 generations or until a timeliness or control failure was detected. When a mutant was killed, the same activation pattern was applied to the assumed correct model. The motivation for this extra step is to further increase the confidence in the correctness of the specification model. The experiment was repeated five times to assess the reliability of the approach.

Table 5 summarizes the results for each mutation operator and failure type. The number of mutants that were classified as malignant in any of the experiments is listed in the columns marked "K". The columns marked "A" list the average number of malignant mutants that were killed per experiment. The average number of generations needed to kill malignant mutants is listed in the columns marked "G".

As seen in Table 5, our mutation-based approach using the Flextime tool automatically generates test cases for revealing both timeliness and control failures. Further, the malignant mutants that cause timeliness failures were killed in all of the experiments. This result indicates that the genetic algorithm is effective in revealing critical execution orders in flexible control systems of this size. The low average number of generations needed to reveal these failures suggests that many execution orders lead to failures in the malignant mutant specifications.

The relatively low average of killed mutants causing control failures indicates that finding a critical scenario with respect to control is more difficult. A possible explanation is that the optimization problem contains local optima with respect to the control performance fitness. A possible way to increase the reliability is to redo the search multiple times using a fresh initial population. Since the approach for searching the mutant specifications is fully automated, the additional cost of searching multiple times may be acceptable. Lastly, for this system we actually observe a relatively large number of malignant mutants that lead to control failures. This result suggests that mutation operators for testing of timeliness are indeed useful for testing control performance.

### 6. Conclusions

This paper has presented an extension to the real-time co-simulator TrueTime that prepares it for interacting with heuristic search algorithms for the generation of test cases. The extension tool maps configurable TAT task-set specifications to TrueTime task entities. This makes it possible to use existing mutation-based testing criteria while exploiting TrueTime's ability to interact with Simulink. Further, the paper presents a mutation-based method for generating test cases for testing of timeliness and control performance of flexible real-time systems.
A proof-of-concept case study shows that mutation operators for testing of timeliness can also be used to produce mutants that cause control failures in flexible real-time control systems. Apart from producing test cases, the test case generation process provides a limited form of automated analysis that may increase confidence in control system models' robustness against variations in activation patterns and deviations from assumptions.

A limitation of the presented approach is that the implementation currently assumes that a specific TAT automaton template generates the activation patterns of aperiodic tasks. The mapping function can be generalized to support a larger class of TAT automata templates, and thus allow better modeling of inherent causal dependencies between aperiodic events occurring in the environment. Future work includes investigating the scalability of the approach when generating test cases for larger and more complex control systems. In this context it is also relevant to investigate heuristics that increase the genetic algorithm's ability to reveal control failures.

### Table 4. Case study task set

<table>
<thead>
<tr> <th>ID</th> <th>c</th> <th>d</th> <th>SEM</th> <th>PREC</th> <th>IAT</th> <th>OFS</th> </tr>
</thead>
<tbody>
<tr> <td>A</td> <td>3</td> <td>7</td> <td>{(R2,0,2)}</td> <td>{}</td> <td>≥ 30</td> <td>10</td> </tr>
<tr> <td>B</td> <td>5</td> <td>15</td> <td>{(R1,0,3)(R2,2,5)}</td> <td>{}</td> <td>≥ 40</td> <td>20</td> </tr>
<tr> <td>C</td> <td>4</td> <td>20</td> <td>{(R1,0,3)}</td> <td>{}</td> <td>≥ 40</td> <td>0</td> </tr>
<tr> <td>D</td> <td>5</td> <td>26</td> <td>{(R2,2,5)}</td> <td>{}</td> <td>≥ 50</td> <td>28</td> </tr>
<tr> <td>E</td> <td>5</td> <td>20</td> <td>{(R1,4,5)}</td> <td>{}</td> <td>20</td> <td>1</td> </tr>
<tr> <td>F</td> <td>5</td> <td>29</td> <td>{(R2,0,4)}</td> <td>{}</td> <td>29</td> <td>1</td> </tr>
<tr> <td>G</td> <td>5</td> <td>35</td> <td>{(R1,0,3)}</td> <td>{}</td> <td>35</td> <td>1</td> </tr>
</tbody>
</table>

### Table 5. Mutants killed in case study

<table>
<thead>
<tr> <th>Mutation operator</th> <th>Timeliness</th> <th>Control</th> </tr>
</thead>
<tbody>
<tr> <td>Execution time</td> <td></td> <td></td> </tr>
<tr> <td>Lock time</td> <td>13</td> <td>2</td> </tr>
<tr> <td>Unlock time</td> <td>15</td> <td>1</td> </tr>
<tr> <td>Inter-arrival</td> <td>14</td> <td>0</td> </tr>
<tr> <td>Pattern offset</td> <td>13</td> <td>0</td> </tr>
<tr> <td>Total</td> <td>69</td> <td>4</td> </tr>
</tbody>
</table>

References
{"Source-Url": "http://lup.lub.lu.se/search/ws/files/5909933/625577.pdf", "len_cl100k_base": 7700, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 43649, "total-output-tokens": 9419, "length": "2e12", "weborganizer": {"__label__adult": 0.0004177093505859375, "__label__art_design": 0.0005769729614257812, "__label__crime_law": 0.00047969818115234375, "__label__education_jobs": 0.0012006759643554688, "__label__entertainment": 0.00010693073272705078, "__label__fashion_beauty": 0.00022161006927490232, "__label__finance_business": 0.0004277229309082031, "__label__food_dining": 0.0004405975341796875, "__label__games": 0.0009007453918457032, "__label__hardware": 0.0038242340087890625, "__label__health": 0.0007023811340332031, "__label__history": 0.00048232078552246094, "__label__home_hobbies": 0.00018584728240966797, "__label__industrial": 0.001944541931152344, "__label__literature": 0.0002963542938232422, "__label__politics": 0.0003848075866699219, "__label__religion": 0.0005440711975097656, "__label__science_tech": 0.400634765625, "__label__social_life": 0.00010162591934204102, "__label__software": 0.01216888427734375, "__label__software_dev": 0.57177734375, "__label__sports_fitness": 0.0004086494445800781, "__label__transportation": 0.0017271041870117188, "__label__travel": 0.00025200843811035156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 41217, 0.03897]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 41217, 0.67142]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 41217, 0.89174]], "google_gemma-3-12b-it_contains_pii": [[0, 532, false], [532, 532, null], [532, 5191, null], [5191, 10454, null], [10454, 15642, null], [15642, 20970, null], [20970, 25455, null], [25455, 30682, null], [30682, 36582, null], [36582, 41217, null]], "google_gemma-3-12b-it_is_public_document": [[0, 532, true], [532, 532, null], [532, 5191, null], [5191, 10454, null], [10454, 15642, null], [15642, 20970, null], [20970, 25455, null], [25455, 30682, null], [30682, 36582, null], [36582, 41217, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 41217, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 41217, null]], "pdf_page_numbers": [[0, 532, 1], [532, 532, 2], [532, 5191, 3], [5191, 10454, 4], [10454, 15642, 5], [15642, 20970, 6], [20970, 25455, 7], [25455, 30682, 8], [30682, 36582, 9], [36582, 41217, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 41217, 0.10778]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
51e118f58d18a5d6252e60121a87a3b8b7fd8be9
Lecture 8: Hashing

Announcements
• Midterm approaching: Thu, Feb 15 (6pm – 9pm)
• Midterm covers up to (and incl.) lecture 7 (up to and including homework 4). This week's lectures are not included.

Today: hashing

n = 9 buckets
```
1 → NIL
2 → 22 → NIL
3 → 13 → 43 → NIL
...
9 → 9 → NIL
```

• **Hash tables** are another sort of data structure that allows fast **INSERT/DELETE/SEARCH**.
  - like self-balancing binary trees
  - The difference is we can get better performance in expectation by using randomness.
• **Hash families** are the magic behind hash tables.
• **Universal hash families** are even more magical.

Goal
• We want to store nodes with keys in a data structure that supports fast **INSERT/DELETE/SEARCH**.

Last time
• Self-balancing trees:
• $O(\log(n))$ deterministic **INSERT/DELETE/SEARCH**

Today:
• Hash tables:
• $O(1)$ expected time **INSERT/DELETE/SEARCH**
• Worse worst-case performance, but often great in practice. #evensweeterinpractice

e.g., Python's dict, Java's HashSet/HashMap, C++'s unordered_map. Hash tables are used for databases, caching, object representation, ...

One way to get $O(1)$ time
- Say all keys are in the set \{1,2,3,4,5,6,7,8,9\}.

**INSERT:** 9, 6, 3, 5
**DELETE:** 6
**SEARCH:** 3, 2

Are we delegating to hardware/memory? What are the assumptions behind our model of computation? This is called "direct addressing".

That should look familiar
• Kind of like CountingSort from Lecture 6.
• Same problem: if the keys may come from a "universe" $U = \{1,2, \ldots, 10000000000\}$, it takes a lot of space.

Solution? Put things in buckets based on one digit.

**INSERT:** 21, 345, 13, 101, 50, 234, 1

```
bucket 0: 50
bucket 1: 21, 101, 1
bucket 3: 13
bucket 4: 234
bucket 5: 345
```

Now **SEARCH** 21: it's in this bucket somewhere... go through until we find it.

Problem

**INSERT:** 22, 34, 52, 12, 102, 12, 342, 22

Now **SEARCH** 22 .... this hasn't made our lives easier...

Hash tables
• That was an example of a hash table.
• not a very good one, though.
• We will be more clever (and less deterministic) about our bucketing.
• This will result in fast (expected time) INSERT/DELETE/SEARCH.

But first! Terminology.
- **U** is a *universe* of size **M**.
- **M** is really big.
- But only a few (at most **n**) elements of **U** are ever going to show up.
- **M** is waaaaayyyyyyyyy bigger than **n**.
- But we don't know which ones will show up in advance.

All of the keys in the universe live in this blob.

Example: **U** is the set of all strings of at most 280 ASCII characters (\(128^{280}\) of them). The only ones which I care about are those which appear as trending hashtags on twitter. #hashinghashtags *There are way fewer than \(128^{280}\) of these.*

**Hash Functions**
- A *hash function* $h: U \rightarrow \{1, \ldots, n\}$ is a function that maps elements of $U$ to buckets 1, ..., $n$.

All of the keys in the universe live in this blob. Universe $U$

Example: $h(x) =$ least significant digit of $x$.
- $h(13) = 3$
- $h(22) = 2$

For this lecture, we are assuming that the number of things that show up is the same as the number of buckets; both are $n$. This doesn't have to be the case, although we do want: $\#\text{buckets} = O(\#\text{things which show up})$

Hash Tables (with chaining)
- Array of n buckets.
- Each bucket stores a linked list.
- We can insert into a linked list in time $O(1)$.
- To find something in the linked list takes time $O(\text{length(list)})$.
- A hash function $h: U \to \{1, \ldots, n\}$.
- For example, $h(x) =$ least significant digit of $x$.

**INSERT:** 13, 22, 43, 9
**SEARCH 43:** Scan through all the elements in bucket $h(43) = 3$. This is a "chain".
**DELETE 43:** Search for 43 and remove it.

Aside: Hash tables with open addressing
• The previous slide is about hash tables with chaining.
• There's also something called "open addressing".
• You don't need to know about it for this class.

Hash Tables (with chaining)
- Array of $n$ buckets.
- Each bucket stores a linked list.
- We can insert into a linked list in time $O(1)$.
- To find something in the linked list takes time $O(\text{length(list)})$.
- A hash function $h: U \rightarrow \{1, \ldots, n\}$.
- For example, $h(x) =$ least significant digit of $x$. For demonstration purposes only! This is a terrible hash function! Don't use this!

**INSERT:** 13, 22, 43, 9
**SEARCH 43:** Scan through all the elements in bucket $h(43) = 3$.
**DELETE 43:** Search for 43 and remove it.

What we want from a hash table
1. We want there to be not many buckets (say, n).
   - This means we don't use too much space.
2. We want the items to be pretty spread out in the buckets.
   - This means it will be fast to SEARCH/INSERT/DELETE.

$n=9$ buckets vs. $n=9$ buckets

Worst-case analysis
- Goal: Design a function $h: U \rightarrow \{1, \ldots, n\}$ so that:
  - No matter what $n$ items of $U$ a bad guy chooses, the buckets will be balanced.
  - Here, balanced means $O(1)$ entries per bucket.
  - If we had this, then we'd achieve our dream of $O(1)$ INSERT/DELETE/SEARCH.

Can you come up with such a function? Think-Share Terrapins

This is impossible! No deterministic hash function can defeat worst-case input! We really can't beat the bad guy here.
- The universe U has M items.
- They get hashed into n buckets.
- At least one bucket has at least M/n items hashed to it.
- M is waayyyy bigger than n, so M/n is bigger than n.
- **The bad guy chooses n of the items that landed in this very full bucket.**

Solution: Randomness

The game
1. An adversary chooses any \( n \) items \( u_1, u_2, \ldots, u_n \in U \), and any sequence of INSERT/DELETE/SEARCH operations on those items.
2. You, the algorithm, choose a **random** hash function \( h: U \to \{1, \ldots, n\} \).
3. **HASH IT OUT** #hashpuns

Items: 13, 22, 43, 92, 7
INSERT 13, INSERT 22, INSERT 43, INSERT 92, INSERT 7, SEARCH 43, DELETE 92, SEARCH 7, INSERT 92

Example of a random hash function
- Say that $h : U \to \{1, \ldots, n\}$ is a uniformly random function.
- That means that $h(1)$ is a uniformly random number between 1 and $n$.
- $h(2)$ is also a uniformly random number between 1 and $n$, independent of $h(1)$.
- $h(3)$ is also a uniformly random number between 1 and $n$, independent of $h(1), h(2)$.
- ...
- $h(M)$ is also a uniformly random number between 1 and $n$, independent of $h(1), h(2), \ldots, h(M-1)$.

Randomness helps
Intuitively: The bad guy can't foil a hash function that he doesn't yet know.
Why not? What if there's some strategy that foils a random function with high probability? We'll need to do some analysis...

What do we want? It's **bad** if lots of items land in $u_i$'s bucket. So we want **not that**.
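This intuition is easy to check empirically. The sketch below (illustrative Python; the adversarial key pattern is invented) shows an adversary defeating the fixed last-digit hash, while a uniformly random function, simulated with a lookup table, spreads the very same keys out.

```python
import random

n = 100
adversarial_keys = [10 * i + 7 for i in range(n)]   # all end in digit 7

# Deterministic "last digit" hash: the adversary wins, since every
# key collides in bucket 7.
fixed_buckets = [0] * 10
for k in adversarial_keys:
    fixed_buckets[k % 10] += 1
print(max(fixed_buckets))   # 100: one bucket gets every key

# Uniformly random h (simulated with a lookup table): the same keys
# spread out, no matter how they were chosen.
h = {k: random.randrange(n) for k in adversarial_keys}
random_buckets = [0] * n
for k in adversarial_keys:
    random_buckets[h[k]] += 1
print(max(random_buckets))  # small with high probability (typically 3-5)
```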
More precisely
• We want: For all ways a bad guy could choose \(u_1, u_2, \ldots, u_n\) to put into the hash table, and for all \(i \in \{1, \ldots, n\}\),
\[E[ \text{number of items in } u_i \text{'s bucket} ] \leq 2.\]
• If that were the case:
• For each INSERT/DELETE/SEARCH operation involving \(u_i\),
\[E[ \text{time of operation} ] = O(1)\]

Note that the expected size of \(u_i\)'s linked list is not the same as the expected maximum size of the linked lists. What is the latter?

So we want:
• For all $i=1, \ldots, n$, $E[\text{number of items in } u_i\text{'s bucket}] \leq 2.$

Aside
- For all $i=1, \ldots, n$,
$$E[\text{number of items in } u_i \text{'s bucket}] \leq 2$$
**VS**
- For all $i=1, \ldots, n$:
$$E[\text{number of items in bucket } i] \leq 2$$

Suppose that $h$ puts all of the items into one and the same uniformly random bucket. Then $E[\text{number of items in bucket } i] = 1$ for all $i$. But $E[\text{number of items in } 43\text{'s bucket}] = n$.

This distinction came up on your pre-lecture exercise!
- Solution to pre-lecture exercise:
- \( E[\text{number of items in bucket 1}] = \frac{n}{6} \)
- \( E[\text{number of items that land in the same bucket as item 1}] = n \)

So we want:
• For all $i=1, \ldots, n$,
$$E[\text{number of items in } u_i\text{'s bucket}] \leq 2.$$

Expected number of items in $u_i$'s bucket?
- $E[\text{number of items in } u_i\text{'s bucket}] = \sum_{j=1}^{n} P\{ h(u_i) = h(u_j) \}$
- $= 1 + \sum_{j \neq i} P\{ h(u_i) = h(u_j) \}$
- $= 1 + \sum_{j \neq i} \frac{1}{n}$ (since $h$ is uniformly random)
- $= 1 + \frac{n-1}{n} \leq 2$. That's what we wanted!

A uniformly random hash function leads to balanced buckets
• We just showed:
• For all ways a bad guy could choose $u_1, u_2, \ldots, u_n$ to put into the hash table, and for all $i \in \{1, \ldots, n\}$,
$$E[ \text{number of items in } u_i \text{'s bucket} ] \leq 2.$$
• Which implies:
• No matter what sequence of operations and items the bad guy chooses,
$$E[ \text{time of INSERT/DELETE/SEARCH} ] = O(1)$$
• So, our solution is: Pick a uniformly random hash function?

What's wrong with this plan?
• Hint: How would you implement (and store) a uniformly random function $h: U \rightarrow \{1, \ldots, n\}$?
• If $h$ is a uniformly random function:
• That means that $h(1)$ is a uniformly random number between 1 and $n$.
• $h(2)$ is also a uniformly random number between 1 and $n$, independent of $h(1)$.
• $h(3)$ is also a uniformly random number between 1 and $n$, independent of $h(1), h(2)$.
• ...
• $h(n)$ is also a uniformly random number between 1 and $n$, independent of $h(1), h(2), \ldots, h(n-1)$.

A uniformly random hash function is not a good idea.
- In order to store/evaluate a uniformly random hash function, we'd use a lookup table:

<table>
<thead>
<tr> <th>x</th> <th>h(x)</th> </tr>
</thead>
<tbody>
<tr> <td>AAAAAA</td> <td>1</td> </tr>
<tr> <td>AAAAAB</td> <td>5</td> </tr>
<tr> <td>AAAAAC</td> <td>3</td> </tr>
<tr> <td>AAAAAD</td> <td>3</td> </tr>
<tr> <td>...</td> <td></td> </tr>
<tr> <td>ZZZZZY</td> <td>7</td> </tr>
<tr> <td>ZZZZZZ</td> <td>3</td> </tr>
</tbody>
</table>

- Each value of h(x) takes \( \log(n) \) bits to store.
- Storing M such values requires \( M \log(n) \) bits.
- In contrast, direct addressing (initializing a bucket for every item in the universe) requires only M bits.

Another way to say this
- There are lots of hash functions: there are $n^M$ of them.
- Writing down a random one of them takes $\log(n^M)$ bits, which is $M \log(n)$.

All of the hash functions $h:U \rightarrow \{1,...,n\}$

Solution
• Pick from a smaller set of functions.
A cleverly chosen subset of functions. We call such a subset a hash family.

All of the hash functions $h: U \rightarrow \{1, \ldots, n\}$

We need only $\log |H|$ bits to store an element of $H$.

Outline
- **Hash tables** are another sort of data structure that allows fast INSERT/DELETE/SEARCH.
  - like self-balancing binary trees
  - The difference is we can get better performance in expectation by using randomness.
- **Hash families** are the magic behind hash tables.
- **Universal hash families** are even more magic.

Hash families
- A hash family is a collection of hash functions. "All of the hash functions" is an example of a hash family.

Example: a smaller hash family
- $H = \{ \text{function which returns the least sig. digit}, \text{function which returns the most sig. digit} \}$
- Pick $h$ in $H$ at random.
- Store just one bit to remember which we picked.

This is still a terrible idea! Don't use this example! For pedagogical purposes only!

The game
1. An adversary (who knows H) chooses any \( n \) items \( u_1, u_2, \ldots, u_n \in U \), and any sequence of INSERT/DELETE/SEARCH operations on those items.
2. You, the algorithm, choose a **random** hash function \( h: U \rightarrow \{0, \ldots, 9\} \). Choose it randomly from \( H \).
3. HASH IT OUT #hashpuns

\( h_0 = \text{Most\_significant\_digit} \)
\( h_1 = \text{Least\_significant\_digit} \)
\( H = \{h_0, h_1\} \)
I picked \( h_1 \)

**INSERT** 19, **INSERT** 22, **INSERT** 42, **INSERT** 92, **INSERT** 0, **SEARCH** 42, **DELETE** 92, **SEARCH** 0, **INSERT** 92

This is not a very good hash family
- $H = \{ \text{function which returns least sig. digit}, \text{function which returns most sig. digit} \}$
- On the previous slide, the adversary could have been a lot more adversarial...

The game
1. An adversary (who knows H) chooses any \( n \) items \( u_1, u_2, \ldots, u_n \in U \), and any sequence of INSERT/DELETE/SEARCH operations on those items.
2. You, the algorithm, choose a **random** hash function \( h: U \to \{0, \ldots, 9\} \). Choose it randomly from \( H \).
3. **HASH IT OUT** #hashpuns

\( h_0 = \text{Most\_significant\_digit} \)
\( h_1 = \text{Least\_significant\_digit} \)
\( H = \{h_0, h_1\} \)

Outline
• **Hash tables** are another sort of data structure that allows fast **INSERT/DELETE/SEARCH**.
• like self-balancing binary trees
• The difference is we can get better performance in expectation by using randomness.
• **Hash families** are the magic behind hash tables.
• **Universal hash families** are even more magic.

How to pick the hash family?
• Definitely not like in that example.
• Let's go back to that computation from earlier....

Expected number of items in $u_i$'s bucket?
- $E[\text{number of items in } u_i\text{'s bucket}] = \sum_{j=1}^{n} P\{ h(u_i) = h(u_j) \}$
- $= 1 + \sum_{j \neq i} P\{ h(u_i) = h(u_j) \}$
- $= 1 + \sum_{j \neq i} \frac{1}{n}$
- $= 1 + \frac{n-1}{n} \leq 2.$

All that we needed was that $P\{ h(u_i) = h(u_j) \} \leq \frac{1}{n}$.

Strategy
• Pick a small hash family $H$, so that when I choose $h$ randomly from $H$,
\[ \text{for all } u_i, u_j \in U \text{ with } u_i \neq u_j, \quad P_{h \in H} \{ h(u_i) = h(u_j) \} \leq \frac{1}{n} \]
• A hash family $H$ that satisfies this is called a **universal hash family**.

So the whole scheme will be
Choose $h$ randomly from a **universal hash family** $H$.
We can store $h$ using $\log|H|$ bits.
Probably these buckets will be pretty balanced.
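Anticipating the concrete construction on the next slides, the whole scheme fits in a few lines of Python: draw and store just two numbers, $a$ and $b$, and hash with $h_{a,b}(x) = ((ax + b) \bmod p) \bmod n$. The prime below is an arbitrary example choice (any prime $p \geq M$ works).

```python
import random

p = 2_147_483_647   # a prime >= M (the Mersenne prime 2^31 - 1)
n = 100             # number of buckets

def draw_h():
    """Draw h uniformly from H = {h_ab}: we store only (a, b),
    i.e. O(log M) bits instead of a full lookup table."""
    a = random.randrange(1, p)   # a in {1, ..., p-1}
    b = random.randrange(0, p)   # b in {0, ..., p-1}
    return lambda x: ((a * x + b) % p) % n

# Empirical sanity check of universality: for one fixed pair x != y,
# a collision should occur with probability at most 1/n.
x, y = 12345, 67890
trials = 100_000
collisions = sum(1 for _ in range(trials)
                 if (h := draw_h())(x) == h(y))
print(collisions / trials)   # roughly 1/n = 0.01 or below
```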
Universal hash family
H is a *universal hash family* if, when h is chosen uniformly at random from H,
\[ \text{for all } u_i, u_j \in U \text{ with } u_i \neq u_j, \quad P_{h \in H}\{ h(u_i) = h(u_j) \} \leq \frac{1}{n} \]

Example
• **H** = the set of all functions $h: U \rightarrow \{1, \ldots, n\}$
- We saw this earlier – it corresponds to picking a uniformly random hash function.
- Unfortunately, this H is really really large.

• Pick a small hash family H, so that when I choose h randomly from H,
$$\Pr_{h \in H} \{ h(u_i) = h(u_j) \} \leq \frac{1}{n}$$
for all $u_i, u_j \in U$ with $u_i \neq u_j$.

Non-example
• $h_0 = \text{Most\_significant\_digit}$
• $h_1 = \text{Least\_significant\_digit}$
• $H = \{h_0, h_1\}$
NOT a universal hash family:
$$P_{h \in H}\{h(101) = h(111)\} = 1 > \frac{1}{10}$$

A small universal hash family??
• Here's one:
• Pick a prime $p \geq M$.
• Define
\[ f_{a,b}(x) = ax + b \mod p \]
\[ h_{a,b}(x) = f_{a,b}(x) \mod n \]
• Define:
\[ H = \{ h_{a,b} : a \in \{1, \ldots, p - 1\}, b \in \{0, \ldots, p - 1\} \} \]
• Claim: $H$ is a universal hash family.

Say what?
- Example: \( M = p = 5, \ n = 3 \)
- To draw \( h \) from \( H \):
- Pick a random \( a \) in \{1,...,4\}, \( b \) in \{0,...,4\}
- As per the definition:
- \( f_{2,1}(x) = 2x + 1 \mod 5 \)
- \( h_{2,1}(x) = f_{2,1}(x) \mod 3 \)
- \( U = \{0, 1, 2, 3, 4\} \)
The $f_{a,b}$ step just scrambles stuff up; no collisions here! The mod-$n$ step is the one where two different elements might collide.

h takes $O(\log M)$ bits to store
- Just need to store two numbers:
- $a$ is in $\{1, \ldots, p-1\}$
- $b$ is in $\{0, \ldots, p-1\}$
- So about $2\log(p)$ bits.
- By our choice of $p$ (close to $M$), that's $O(\log(M))$ bits.
- Also, given $a$ and $b$, $h$ is fast to evaluate! It takes time $O(1)$ to compute $h(x)$.
- Compare: direct addressing was $M$ bits!
- Twitter example: $2\log(M) = 2 \times 280 \log(128) = 3920$ vs $M = 128^{280}$

Why does this work?
• This is actually a little complicated.
• See the lecture notes if you are curious.
• You are NOT RESPONSIBLE for the proof in this class.
• But you should know that a universal hash family of size $O(M^2)$ exists.
Try to prove that this is a universal hash family!

But let's check that it **does** work
- Check out the Python notebook for lecture 8 ($M = 200$, $n = 10$).

So the whole scheme will be
Choose $a$ and $b$ at random and form the function $h_{a,b}$.
We can store $h$ in space $O(\log(M))$ since we just need to store $a$ and $b$.
Probably these buckets will be pretty balanced.

Outline
• **Hash tables** are another sort of data structure that allows fast **INSERT/DELETE/SEARCH**.
• like self-balancing binary trees
• The difference is we can get better performance in expectation by using randomness.
• **Hash families** are the magic behind hash tables.
• **Universal hash families** are even more magic.

Want $O(1)$ **INSERT/DELETE/SEARCH**
- We are interested in putting nodes with keys into a data structure that supports fast **INSERT/DELETE/SEARCH**.

We studied this game
1. An adversary chooses any n items $u_1, u_2, \ldots, u_n \in U$, and any sequence of L INSERT/DELETE/SEARCH operations on those items.
2. You, the algorithm, choose a **random** hash function $h: U \rightarrow \{1, \ldots, n\}$.
3.
**HASH IT OUT**
INSERT 13, INSERT 22, INSERT 43, INSERT 92, INSERT 7, SEARCH 43, DELETE 92, SEARCH 7, INSERT 92
Buckets: 1: 43; 2: 22; 3: 13; ...; n: 92, 7

Uniformly random $h$ was good
- If we choose $h$ uniformly at random, then for all $u_i, u_j \in U$ with $u_i \neq u_j$,
$$P_{h \in H}\{ h(u_i) = h(u_j) \} \leq \frac{1}{n}$$
- That was enough to ensure that all INSERT/DELETE/SEARCH operations took $O(1)$ time in expectation, even on adversarial inputs.

Uniformly random $h$ was bad
- If we actually want to implement this, we have to store the hash function $h$. That takes a lot of space!
- We may as well have just initialized a bucket for every single item in $U$.
- Instead, we chose a function randomly from a smaller set.

Universal Hash Families
H is a universal hash family if: when we choose h uniformly at random in H, for all $u_i, u_j \in U$ with $u_i \neq u_j$,
$$P_{h \in H}\{ h(u_i) = h(u_j) \} \leq \frac{1}{n}$$
This was all we needed to make sure that the buckets were balanced in expectation!
- We gave an example of a really small universal hash family, of size $O(M^2)$.
- That means we need only $O(\log M)$ bits to store a function drawn from it.

Conclusion:
• We can build a hash table that supports **INSERT/DELETE/SEARCH** in $O(1)$ expected time.
• Requires $O(n \log(M))$ bits of space:
• $O(n)$ buckets
• $O(n)$ items with $\log(M)$ bits per item
• $O(\log(M))$ bits to store the hash function
(Hashing a universe of size $M$ into $n$ buckets, where at most $n$ of the $M$ items ever show up.)

That's it for data structures (for now)
Achievement unlocked! Data Structures: RBTrees and Hash Tables. Now we can use these going forward!

Next Time
• Graph algorithms!

Before Next Time
• Pre-lecture exercise for Lecture 9
• Intro to graphs
{"Source-Url": "https://stanford-cs161.github.io/winter2024/assets/files/lecture8-slides.pdf", "len_cl100k_base": 6220, "olmocr-version": "0.1.50", "pdf-total-pages": 65, "total-fallback-pages": 0, "total-input-tokens": 97684, "total-output-tokens": 8812, "length": "2e12", "weborganizer": {"__label__adult": 0.0005054473876953125, "__label__art_design": 0.0006461143493652344, "__label__crime_law": 0.00070953369140625, "__label__education_jobs": 0.015045166015625, "__label__entertainment": 0.00014495849609375, "__label__fashion_beauty": 0.0002675056457519531, "__label__finance_business": 0.00027060508728027344, "__label__food_dining": 0.0007519721984863281, "__label__games": 0.001331329345703125, "__label__hardware": 0.0022563934326171875, "__label__health": 0.0010833740234375, "__label__history": 0.0005793571472167969, "__label__home_hobbies": 0.00033736228942871094, "__label__industrial": 0.0010223388671875, "__label__literature": 0.00044417381286621094, "__label__politics": 0.0004477500915527344, "__label__religion": 0.0008535385131835938, "__label__science_tech": 0.12939453125, "__label__social_life": 0.00035691261291503906, "__label__software": 0.01068878173828125, "__label__software_dev": 0.8310546875, "__label__sports_fitness": 0.0005564689636230469, "__label__transportation": 0.0009284019470214844, "__label__travel": 0.00033473968505859375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18935, 0.08124]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18935, 0.3618]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18935, 0.81759]], "google_gemma-3-12b-it_contains_pii": [[0, 18, false], [18, 198, null], [198, 465, null], [465, 795, null], [795, 901, null], [901, 1310, null], [1310, 1591, null], [1591, 1778, null], [1778, 2025, null], [2025, 2125, null], [2125, 2348, null], [2348, 2930, null], [2930, 3463, null], [3463, 3928, null], [3928, 4145, null], [4145, 4710, null], [4710, 4989, null], [4989, 5358, null], [5358, 5439, null], [5439, 5730, null], [5730, 5751, null], [5751, 6213, null], [6213, 6691, null], [6691, 6914, null], [6914, 7011, null], [7011, 7515, null], [7515, 7616, null], [7616, 7950, null], [7950, 8183, null], [8183, 8288, null], [8288, 8545, null], [8545, 9036, null], [9036, 9589, null], [9589, 10092, null], [10092, 10319, null], [10319, 10568, null], [10568, 10901, null], [10901, 11028, null], [11028, 11335, null], [11335, 11930, null], [11930, 12149, null], [12149, 12615, null], [12615, 12952, null], [12952, 13074, null], [13074, 13342, null], [13342, 13637, null], [13637, 13812, null], [13812, 14037, null], [14037, 14433, null], [14433, 14637, null], [14637, 14946, null], [14946, 15326, null], [15326, 15785, null], [15785, 16075, null], [16075, 16173, null], [16173, 16383, null], [16383, 16720, null], [16720, 16873, null], [16873, 17297, null], [17297, 17601, null], [17601, 17886, null], [17886, 18310, null], [18310, 18690, null], [18690, 18829, null], [18829, 18935, null]], "google_gemma-3-12b-it_is_public_document": [[0, 18, true], [18, 198, null], [198, 465, null], [465, 795, null], [795, 901, null], [901, 1310, null], [1310, 1591, null], [1591, 1778, null], [1778, 2025, null], [2025, 2125, null], [2125, 2348, null], [2348, 2930, null], [2930, 3463, null], [3463, 3928, null], [3928, 4145, null], [4145, 4710, null], [4710, 4989, null], [4989, 5358, null], [5358, 5439, null], [5439, 5730, null], [5730, 5751, null], [5751, 6213, 
null], [6213, 6691, null], [6691, 6914, null], [6914, 7011, null], [7011, 7515, null], [7515, 7616, null], [7616, 7950, null], [7950, 8183, null], [8183, 8288, null], [8288, 8545, null], [8545, 9036, null], [9036, 9589, null], [9589, 10092, null], [10092, 10319, null], [10319, 10568, null], [10568, 10901, null], [10901, 11028, null], [11028, 11335, null], [11335, 11930, null], [11930, 12149, null], [12149, 12615, null], [12615, 12952, null], [12952, 13074, null], [13074, 13342, null], [13342, 13637, null], [13637, 13812, null], [13812, 14037, null], [14037, 14433, null], [14433, 14637, null], [14637, 14946, null], [14946, 15326, null], [15326, 15785, null], [15785, 16075, null], [16075, 16173, null], [16173, 16383, null], [16383, 16720, null], [16720, 16873, null], [16873, 17297, null], [17297, 17601, null], [17601, 17886, null], [17886, 18310, null], [18310, 18690, null], [18690, 18829, null], [18829, 18935, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18935, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18935, null]], "pdf_page_numbers": [[0, 18, 1], [18, 198, 2], [198, 465, 3], [465, 795, 4], [795, 901, 5], [901, 1310, 6], [1310, 1591, 7], [1591, 1778, 8], [1778, 2025, 9], [2025, 2125, 10], [2125, 2348, 11], [2348, 2930, 12], [2930, 3463, 13], [3463, 3928, 14], [3928, 4145, 15], [4145, 4710, 16], [4710, 4989, 17], [4989, 5358, 18], [5358, 5439, 19], [5439, 5730, 20], [5730, 5751, 21], [5751, 6213, 22], [6213, 6691, 23], [6691, 6914, 24], [6914, 7011, 25], [7011, 7515, 26], [7515, 7616, 27], [7616, 7950, 28], [7950, 8183, 29], [8183, 8288, 30], [8288, 8545, 31], [8545, 9036, 32], [9036, 9589, 33], [9589, 10092, 34], [10092, 10319, 35], [10319, 10568, 36], [10568, 10901, 37], [10901, 11028, 38], [11028, 11335, 39], [11335, 11930, 40], [11930, 12149, 41], [12149, 12615, 42], [12615, 12952, 43], [12952, 13074, 44], [13074, 13342, 45], [13342, 13637, 46], [13637, 13812, 47], [13812, 14037, 48], [14037, 14433, 49], [14433, 14637, 50], [14637, 14946, 51], [14946, 15326, 52], [15326, 15785, 53], [15785, 16075, 54], [16075, 16173, 55], [16173, 16383, 56], [16383, 16720, 57], [16720, 16873, 58], [16873, 17297, 59], [17297, 17601, 60], [17601, 17886, 61], [17886, 18310, 62], [18310, 18690, 63], [18690, 18829, 64], [18829, 18935, 65]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18935, 0.02278]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
d17dd5175a3baec41c355599191b56934aac8872
A multi-user collaborative circular graphical user interface for displaying items includes a transformation engine responsive to external commands, such as mouse clicks, for generating polar coordinates for the items, an asynchronous rendering engine for generating images of the items according to the polar coordinates, and a thread switching engine for controlling the rendering engine.

<table> <thead> <tr> <th></th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>451</td> <td>a layer of pop-up items or top-level menus</td> </tr> <tr> <td>452</td> <td>a layer with selected images</td> </tr> <tr> <td>453</td> <td>a layer with the control or menu bar</td> </tr> <tr> <td>454</td> <td>a layer with all the images except one</td> </tr> <tr> <td>455</td> <td>a grid in a deformation mode</td> </tr> <tr> <td>456</td> <td>a layer for the background</td> </tr> </tbody> </table>

**Fig. 4b**

MULTI-USER COLLABORATIVE CIRCULAR GRAPHICAL USER INTERFACES

CROSS-REFERENCE TO RELATED APPLICATION

FIELD OF THE INVENTION

The present invention relates generally to graphical user interfaces, and more particularly to collaborative circular graphical user interfaces.

BACKGROUND OF THE INVENTION

Presentations are an important aspect of many professional and social settings. Executives make presentations to directors, managers conduct meetings with staff, salespersons make presentations to potential customers, doctors conduct meetings with nurses and patients, lawyers make presentations to juries, and families and friends present and share photographs of occasions in their lives. Frequently, much effort goes into generating and delivering effective presentations.

With specialized software, conventional personal computer systems can provide effective platforms for generating and conducting presentations. Currently available presentation program modules can turn a personal computer into a customized presentation system for generating and delivering picture presentations using display terminals or digital projectors. Generally described, these prior art presentation systems provide a specially designed, user-friendly set of tools to assist in the construction of a presentation that can be displayed subsequently to an audience. Those presentation systems also allow images to be presented sequentially to an audience, picture-by-picture, with color, animation, audio, and transition effects that enliven and enrich the presentation.

Conventional presentation systems do not provide an effective means for interacting with the content of the presentation during the course of the presentation. This drawback arises because these conventional presentation systems have only two modes of operation, an edit mode and a show mode. A single user often constructs the presentation, and a single user delivers the presentation to an audience. During the course of the presentation, the single user can interact with the content of the presentation only by invoking the edit mode, which primarily allows the user to rearrange the order of the presentation. A significant drawback arises when using these conventional presentation systems because the other participants of the presentation cannot concurrently interact with its content. Conventional systems are designed for use by a single presenter addressing a passive audience, not for a setting where all participants interact with the presentation on an equal footing. The prior art presentation is typically conducted in a linear setting.
The presenter faces the audience, and the audience views the presentation behind the presenter. The presenter can either look at the audience or at the presentation, but not at both at the same time. Furthermore, a conventional presentation system has only a single set of controls; to allow anyone other than the presenter to control the presentation can be quite disruptive and cumbersome.

Also, most computer-implemented presentation systems that concurrently display multiple images use the same rectangular format as a mechanical slide sorter. Those systems require that the typical single user have a specific orientation with respect to the displayed presentation. These types of systems are not suited for situations where multiple participants are facing each other and the displayed presentation, in a highly interactive and multi-dimensional manner.

An alternative presentation system can use a circular display surface, such as a tabletop. There are many advantages of tabletop displays over traditional presentation systems, such as whiteboards, projection screens, desktop computers, or handheld devices, particularly for collaborative tasks where multiple users need to both work with each other and access computer resources. Users can sit around a table and thus easily face each other, rather than try to crowd around a computer screen or a small handheld device. A tabletop provides shared space and also allows users to have their own personal, if not entirely private, space to work on. Finally, whether it is an electronic display or not, a tabletop affords a convenient space where users can spread out and organize images.

The DigitalDesk is a physical desk augmented with vision and projector capabilities so that the physical and electronic desktops are merged into one. DigitalDesk is designed for a single user. The InteracTable in the i-Land project provides a rectangular surface for multiple users. However, most of these tabletop user interfaces organize images in a rectangular manner. It is desired to provide a circular graphical user interface.

Collaborative circular graphical user interfaces present special problems, which cannot be addressed by conventional event-driven "window" architectures, such as Microsoft Windows™, where a single "desktop" interface is entirely constrained by Cartesian coordinates and a single user. The problems with circular graphical interfaces stem from three unique characteristics of a collaborative user interface that is circular and is on a tabletop. First, polar locations and polar orientations of displayed icons, documents, and images, generally "items," must be handled in a special way that is different from conventional rectangular formats. Second, the number and variety of items that can be displayed is much larger than one would normally find on the traditional "desktop." Also, the items can be organized in multiple layers and views associated with concurrent users. Third, events that drive the interface originate from collaborations between the multiple users. None of these issues are addressed by conventional windows-based architectures.

SUMMARY OF THE INVENTION

The invention provides visualization and layout schemes for a graphical user interface. Because the interface uses polar coordinate systems to display images, prior techniques, which typically use Cartesian coordinate systems, are inapplicable.
It is an object of the invention to give the user of the interface the full capability to relocate, re-orient, scale, and lay out images in the circular interface in real time. It is also an object of the invention to allow multiple users to collaboratively display and manipulate images from multiple points of view.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is an oblique view of a multi-user circular graphical interface according to the invention;
Fig. 2 is a top view of a control bar of the interface of Fig. 1;
Fig. 3 is a side view of the circular graphical user interface of Fig. 1;
Fig. 4a is a block diagram of the user interface of Figs. 1 and 3;
Fig. 4b is a block diagram of rendering layers used by the invention;
Fig. 5 is a diagram of polar coordinate systems used by the invention;
Fig. 6a is a block diagram of a pile; and
Fig. 6b is a block diagram of rendering a pyramid.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

System Structure

Fig. 1 shows multiple users 101–103 in the vicinity of a circular graphical user interface 100 operating according to the invention. The users share and interact with a picture presentation in a dynamic and collaborative manner. The system according to the invention displays images 110 on a display surface, i.e., the horizontal tabletop 130 of a circular table 125. The images can be of photographs, videos, computer-generated images, icons, documents, or any other displayable source material, hereinafter generally "items." In the preferred embodiment, the tabletop 130 surface is touch sensitive.

The interface 100 includes an orientation area 140 and a plurality of control panels (menus) 200. In the preferred embodiment, the orientation area 140 is an annular ring at the periphery of the images. The control panels are composed within the annular ring. There is one control panel or top-level menu for each user. Additional pop-up menus can be added as needed. Pop-up menus are generally temporary.

The control panels 200 are displayed in a region of the display surface 130 in front of the user. A camera 360, see Fig. 3, can be used to track the users 101–103 so that as the users move around the display table, their respective control panels 200 follow. Alternatively, the users can employ a pointing device to indicate where their respective control panels should appear on the tabletop.

Fig. 2 shows icons of the control panel 200 in greater detail. Each user control panel includes the following icons: inkpad 210, keyboard 220, people 230, calendar 240, work space 250, new 260, location 270, events 280, show 290, and summary 295. A mouse or a touch-sensitive technique can be used to activate the icons of the control panels 200. Initially, the icons are displayed as black on a white background; when an icon is activated or selected, the icon is displayed in full color.

The people, calendar, location, and events icons can be associated with corresponding "views." In the traditional windows-based desktop, there is only one associated view. However, here, each user can construct one or more views of what can be displayed on the tabletop, and users can select any of these views as an active view, i.e., the active view is the one that is currently displayed. For example, an "event view" clusters images according to events, a "calendar view" clusters images acquired in the same time frame, and a "location view" clusters images according to geographic location. In essence, a view is a set of images having some logical relationship.
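To make this notion of a view concrete, the following minimal Java sketch groups items into event, calendar, and location views. It is an illustration only: the record fields and class names are our assumptions, not part of the patent.

```java
import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// A displayable item with the kinds of annotations the patent mentions
// (name, date, location, event); all field names here are illustrative.
record Item(String name, LocalDate date, String location, String event) {}

final class Views {
    // A "location view": clusters items by geographic location.
    static Map<String, List<Item>> locationView(List<Item> items) {
        return items.stream().collect(Collectors.groupingBy(Item::location));
    }

    // A "calendar view": clusters items acquired in the same time frame
    // (here, simply by year and month).
    static Map<String, List<Item>> calendarView(List<Item> items) {
        return items.stream()
                .collect(Collectors.groupingBy(
                        i -> i.date().getYear() + "-" + i.date().getMonthValue()));
    }

    // An "event view": clusters items by event.
    static Map<String, List<Item>> eventView(List<Item> items) {
        return items.stream().collect(Collectors.groupingBy(Item::event));
    }
}
```

Each map produced here is one "set of images having some logical relationship"; selecting a view as active would simply mean handing one of these groupings to the rendering machinery.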
As shown in Fig. 3, images of items are composited by a processor 310 executing a software architecture 400 according to the invention. The composited images are displayed onto the display surface. The displayed images are composited in response to user input commands (events). User input can be via a touch surface 320, keyboard, mouse 330, and the like. As an advantage, the present system can be operated concurrently by multiple users.

In the preferred embodiment, the display surface is circular. For a tabletop display, the images are displayed via a projector 340 and mirror 350. The projector could be vertically mounted, or back-projection can be used to eliminate the need for the mirror. The fact that a single projector is used is significant, because this requires that the output image is potentially composited from a large number of individual images. As stated above, the camera can be used to establish the relative locations of the users 101–103 with respect to the interface 100.

Some of the Figures also show a coffee mug 1 on the top of the table. The coffee mug 1 is not part of the invention, but coffee mugs are often key items present during presentations, professional or social. As an advantage, the present invention gracefully admits integration of coffee mugs or other physical discussion items with the presentation. In fact, using the camera 360 coupled to a vision system of the processor 310, the displayed images can be composited in such a way that physical items that are not part of the user interface do not obscure significant portions of the images.

The main purpose of the architecture 400 according to the invention is to manipulate and present photographs, slides, text, and videos, the "items." The items are manipulated by the users using the control panels and other input devices that generate "events." The images can be associated with soundtracks so that when images are selected, the soundtrack can also be played. The images can also be annotated with text.

The items can be organized in a database (DB) 370. The database can be local or remote. The items can be in the form of digital images, e.g., files with .bmp, .jpg, .mpg, .gif, .pdf, or .eps extensions, to name but a few. These files form the source data from which images are formed. Images can have associated audio files, in .wav files, for example. The items can also be annotated by name, date, location, etc. Items are selected from the database 370, and the selected items are composited into the displayed images as "views," as described below. Multiple users can interact with the composing process in a concurrent and interactive manner.

The orientation area 140 is used to orient the "content" of the presentation image 110 or active view. When the orientation area is circular, the displayed image can be rotated like a lazy Susan. The rotation is achieved by the process that composites the image with a selected orientation. The ring can be projected onto the touch-sensitive surface of the tabletop.

The images of the items are generally shown with an orientation towards the control panel from where the selection took place, i.e., generally facing the user that selected the item. Should another user subsequently want to view the same image, a new selection can rearrange and reorient the image within the overall image accordingly, as described in further detail below. In order to support individual user viewing preferences and group sharing viewing needs, the interface provides two general user interface functions.
First, the entire displayed image can be freely rotated in either direction. This operation is a very convenient way to pass the global layout of the interface around to each individual user's viewing angle. In addition, we allow control panels to be positioned along the perimeter of the tabletop wherever a user is sitting.

Interface and Image Orientations

Traditional rectangular interfaces, such as window-based architectures, typically assume that the user or users always view the interface from roughly the same direction and angle, namely from directly in front of a terminal or screen. Prior art interfaces typically use a rectangular (Cartesian) coordinate system to display images. For example, the images are almost always aligned according to the rows and columns of pixels, which can sometimes further define rectangular windows that partition the display area or screen. When pixels and images are aligned, transformations such as affine translation and scaling are straightforward.

In contrast, our invention enables face-to-face collaborations where the interface is situated between the users, and thus we must consider issues of rotation and re-orientation of the entire display interface, including the images that are displayed there. Thus, we provide an architecture for visualizing and collaboratively interacting in order to facilitate the convenient re-orientation of any or all images on the interface surface, the passing of images around the interface surface, and the resizing of the user interface and the images.

Architecture Overview of Circular Graphical User Interface

FIG. 4a shows the architecture and method 400 for collaborative circular graphical user interfaces. The architecture includes a transformation engine 410, an asynchronous rendering engine 420, and a thread switching engine 430 coupled to each other. The operation of the engines is in response to external events 450, such as mouse clicks, drag&drop events, free-form stroke events, touch events, and keyboard events.

In response to the events 450, the transformation engine 410 generates polar coordinates for a transformation matrix 411 of graphics context and input events. The rendering engine 420 coordinates multi-layer, multiple-depth rendering functions, and the switching engine 430 coordinates multiple execution threads, multiple image layers, and multiple tabletop views. With these three engines, correct and efficient correspondence between input events and output rendering is assured.

The architecture 400 operates on layers of images. A set of layers can be collected to form a view. Multiple views can be maintained concurrently. A view is formed by compositing the set of layers associated with the view in a predetermined order. An active view is the presentation image 110 that is currently displayed on the tabletop. The types of layers can include item layers 401, view layers 402, and a background layer 403. To ensure all of the pixels of the final image have some value, the background layer has no transparent pixels. For example, the pixels in the background layer are initially all set to blue. The number of layers can change over time. During rendering, the items 401 are composited into the view layers 402, which are then composited onto the background layer 403. Associated with each layer is an image buffer. Thus, any layers that have not changed since the last refresh can be copied directly to a display or video buffer during rendering.
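As a rough illustration of this per-layer buffering scheme, here is a minimal sketch of bottom-to-top compositing with cached layer buffers. The class and method names are hypothetical and are not taken from the patent.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.List;

// One rendering layer with a cached image buffer: a layer that has not
// changed since the last refresh is not re-rendered; its cached buffer
// is simply composited again.
abstract class Layer {
    private BufferedImage cache;
    private boolean dirty = true;

    void invalidate() { dirty = true; }

    BufferedImage buffer(int w, int h) {
        if (dirty || cache == null) {
            cache = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            render(cache.createGraphics());
            dirty = false;
        }
        return cache;
    }

    abstract void render(Graphics2D g);  // draw this layer's items
}

final class ViewCompositor {
    // Composites a view from its layers in bottom-to-top order
    // (background first, pop-up layer last) into the display buffer.
    static void composite(List<Layer> bottomToTop, BufferedImage display) {
        Graphics2D g = display.createGraphics();
        for (Layer layer : bottomToTop) {
            g.drawImage(layer.buffer(display.getWidth(), display.getHeight()), 0, 0, null);
        }
        g.dispose();
    }
}
```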
In a preferred embodiment, a double buffering technique is used. While a first buffer is displayed, a second buffer is filled with pixels. Then the second buffer is displayed, and the first is filled with pixels, and so forth.

Layers

FIG. 4b shows one possible set of layers that can be composited into a view. For the purpose of merging and rendering, the layers can be numbered, e.g., top-to-bottom 0, 1, 2, 3, 4, etc., where one layer is always defined as the "top" layer. A compositing operation can be carried out at any layer with all of the lower layers. For example, layer 3 is a compositing of layers 0+1+2+3, and layer 4 is a compositing of layers 1+2+3+4.

For example, the layers can include the following layers in a top-to-bottom order. A layer 451 of pop-up items or top-level menus, which is always on top, if it exists. Generally, pop-up menus are temporary. A layer 452 of selected images, which is on top if layer 451 does not exist. A layer 453 with the control or menu bar 200, which is the top layer if none of the above layers exist. A layer 454 with all the images except the selected images. A layer 455 for a deformation grid. A deformation grid assists the users in visualizing how a view can be deformed. For example, items near the center of the view can be spaced closer and appear smaller than those near the edges of the view to give a "black-hole" type of effect. At the very bottom there is a background layer 456.

Transformation Engine

With our architecture, the users 101–103 can rotate the entire active view 110, or move individual images within the view. Individual images can be moved using affine transformations, i.e., translation, scaling, and rotation. Because the interface is primarily circular, two polar coordinate systems are maintained. A global polar coordinate system is assigned to an entire view, and a local polar coordinate system is assigned to individual images within a view. The moving of the images is responsive to the events 450. The transformation engine 410 handles all of the necessary primitives to build a circular interface based on these two polar coordinate systems.

In a traditional GUI, it is very common to use a hierarchy of components to partition the screen layout (desktop) into smaller regions. This is possible because in a rectangular interface, a rectangle can be partitioned into smaller rectangles, with each region operating only on a local coordinate system, and with only one common direction of orientation for each displayed visual object. For example, on a desktop interface, all images are vertically aligned and rotation is not possible. In contrast, a polar coordinate based interface has no predominant direction for displayed items. Thus, it is not possible to partition the screen, resolve the smaller problems in local frame coordinate systems, and then assemble the global layout from the local layouts, as in windows-based desktop architectures. In the polar coordinate system, there is one and only one center that is meaningful. All the items must know where this center is at all times. Therefore, it is necessary to describe every item to be displayed with a polar location and a polar orientation at the same time.
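The next section defines the angles precisely; as a rough preview of what matrix generation from a polar description might look like, consider the following sketch. The angle conventions, and all names, are our assumptions rather than the patent's equations.

```java
import java.awt.geom.AffineTransform;

// Hypothetical sketch: derive a drawing transform for one item from its
// polar description. r = radial distance of the item's center from the
// view center, alpha = rotation angle around the view center, beta = the
// item's own rotation offset, phi = global rotation of the entire view.
final class PolarTransform {
    static AffineTransform forItem(double r, double alpha, double beta,
                                   double phi, double viewCenterX, double viewCenterY) {
        double theta = alpha + phi;                   // position angle in the rotated view
        double x = viewCenterX + r * Math.cos(theta); // Cartesian position of the item's center
        double y = viewCenterY + r * Math.sin(theta);
        AffineTransform t = new AffineTransform();
        t.translate(x, y);       // place the item at its polar location
        t.rotate(theta + beta);  // orient the item: position angle plus its own offset
        return t;
    }
}
```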
Polar Coordinate System

As shown in FIG. 5, the architecture according to the invention uses two polar coordinate systems to determine three variables: a radial distance \( r \) from the center of each image \( i \) to the center \( c \) of the display surface, i.e., the "view," an angle \( \alpha \) of rotation around the center of the view, and an angle \( \beta \) of rotation around the center of each image. The angle \( \alpha \) is with respect to some virtual reference line \( 1 \) of the display surface, and the angle \( \beta \) is an offset from angle \( \alpha \) to a central axis \( 2 \) of each image. For comparison, the item \( i \) labeled "AB" has an angle \( \beta \) greater than its angle \( \alpha \), and the item \( i \) labeled "CD" has a very small angle \( \beta \) and an angle \( \alpha \) that is close to 90°. In addition, there is an angle \( \phi \), which determines how much the entire view is rotated with respect to some arbitrary reference position.

Even when \( \beta \) is zero, the \( \alpha \) angles are different for these two items, and the items will have different orientations. This problem does not exist in a Cartesian framework. With the introduction of the third degree of freedom, the angle \( \beta \), it is possible to rotate every item around the item's own center. To manage the relative position of each item, the transformation engine 410 translates the position of an item into the transformation matrix 411 and the local angle \( \beta \) 503. For example, the transformation can use \( \beta = \alpha + \phi \) to rotate all the elements displayed on the tabletop to face the same direction towards a user's location at the table, defined by the angle \( \phi \), which is the global angle used to rotate the entire view. It is also possible to use intermediary values between \( \beta = \alpha + \phi \) and \( \beta = 0 \) to re-orient documents in a continuum.

Multi-Layer Multiple-Depth Asynchronous Repaint Engine

The circular graphical interfaces, as described herein, allow users to "pile," "shuffle," "pass," and "spread" items on the tabletop (view), see FIG. 1. Scaling (zooming) to various resolutions is also permitted. Therefore, it is necessary to display and refresh a potentially very large number of items in a particular view, perhaps as many as a thousand or more. This is a couple of orders of magnitude larger than the number of windows one would have "open" in a conventional desktop display. Because each individual item itself can have a large number of pixels, the total number of pixels to be processed for a single refreshed composition of an active view can be extremely large. For this reason, multi-layers are used by the rendering engine 420.

Whenever the "content" of the active view changes in position, orientation, or size, the rendering engine 420 determines which layers need to be rendered, and the order of the rendering of the layers. This determination is based on the events 450, e.g., rotate the entire view, resize and reorient selected items, move an item from one layer to another, construct a composite image for a particular layer, update item attributes, and so forth. Each item is potentially part of a displayable image, with attributes describing its properties such as size, current location, and angle of orientation, and a pointer to the actual file in the database 370 that forms the source data for the item. It is also possible to associate application-specific attributes with an item.
For example, for digital images, users can add attributes for shadow generation, paint quality/resolution parameterization, and information about whether the item can be rotated or not.

In the multi-layer representation, the item layer 452 usually includes one or more selected items. Selected items are being actively controlled by the users and cause events 450, e.g., the item is being rotated, passed to another user, etc. To reflect a change in the display of a selected item, it is sufficient to composite a new version of the item with the view layer 454 that is to contain the item, and then to composite that layer with the background layer 456. Activating a different view merely causes that view to be composited with the background layer; the individual items of that view do not need to be composited until they are selected. In other words, compositing proceeds in a bottom-to-top order.

The background (deepest) layer 456 is relatively static, e.g., an initial blue color that is then overwritten with a deformation grid, map, or tablecloth texture. In the case where multiple views are used, a different background can be used to distinguish each view. This layer is composited first. However, changing the background requires a recompositing of all layers on top of the background. Layering reduces the number of times layers or views need to be recomposited. The top layer is always the last layer to be composited on top of the previous layers.

As shown respectively in FIGS. 6a and 6b, two rendering strategies can be used. In the first strategy, images 601–604 are generated 611–614 (left arrows) for all items in each layer. The generation is from the source data of each item according to parameters such as size and orientation. The layers can then be composited (up arrows) in a bottom-to-top order to render a view. In the second strategy, each layer 621–624 includes itself as well as all layers below it. These two strategies are called "pile" and "pyramid," respectively. To be useful, the pyramid should have a smaller number of items in the top layers, which change more frequently than items in deeper layers. These two strategies can be used in combination, e.g., a pile layer can be composited with a pyramid layer. The pyramid layers 621–624 can be generated from the pile layers 601–604 to factorize the generation process.

Thread Switching

The rendering engine 420 according to the invention executes multiple threads concurrently and asynchronously. The asynchronous rendering is accomplished by maintaining an independent rendering thread for each layer. The multi-layer representation enables selective rendering of parts of the view displayed on the tabletop. Most of the time, only a single image of a selected item needs to be regenerated. For example, one user is "passing" a photograph to another user, or zooming in on the photograph. However, if a user rotates the entire view on the tabletop, then all layers may need to be composited into a single new rotated view. In other cases, some parts of the view on the tabletop remain stationary, for example, a user's control panel, while the rest of the image rotates. The threads are executed so that the number of images that need to be generated from source data is minimized. Also, latencies are minimized. For example, if a user moves an item and then rotates the entire view, the entire view is updated, and a rendering pass for the single item is discarded.
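The "discard a stale rendering pass" behaviour can be illustrated with a small generation-counter scheme. This is a hedged sketch of one way to obtain it, not the patent's mechanism; all names are ours.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

// Each layer renders on its own worker thread, and a render pass is
// thrown away if a newer request (e.g. a whole-view rotation) has
// arrived in the meantime.
final class LayerRenderer {
    private final ExecutorService worker = Executors.newSingleThreadExecutor();
    private final AtomicLong generation = new AtomicLong();

    void requestRender(Runnable renderPass) {
        final long myGeneration = generation.incrementAndGet();
        worker.submit(() -> {
            // A newer request superseded this one: discard the stale pass.
            if (generation.get() != myGeneration) return;
            renderPass.run();
        });
    }

    void shutdown() { worker.shutdown(); }
}
```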
The architecture also includes threads for timers. Timers can be used to animate items. Other threads are used to acquire source data from the database.

Multiple Views and Multiple Control Bars

As stated above, the architecture supports multiple views. One simple use of this feature is to provide multiple virtual …

CLAIMS

1. A multi-user collaborative circular graphical user interface comprising: a transformation engine responsive to external commands received from any of a plurality of users touching displayed items to manipulate the displayed items, for generating polar coordinates for each displayed item; an asynchronous rendering engine for generating images of the displayed items according to the polar coordinates, in which the items are displayed as a plurality of layers, and wherein the layers include a pop-up layer, a control layer, a selected image layer, a deformation grid, and a background layer; and a thread switching engine for controlling the rendering engine.

2. The interface of claim 1, further comprising: a circular display area for displaying the items.

3. The interface of claim 2, wherein the circular display area includes a touch sensitive surface.

4. The interface of claim 2, wherein a central axis of the items passes through a center of the display area.

5. The interface of claim 1, wherein the transformation engine performs translation, scaling, and rotation using the polar coordinates.

6. The interface of claim 1, wherein the polar coordinates include local and global coordinates.

7. The interface of claim 1, wherein the layers are arranged hierarchically.

8. The interface of claim 7, wherein the items are rendered in a bottom-to-top order.

9. The interface of claim 1, wherein there is an execution thread for each layer.

* * * * *
{"Source-Url": "https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/6894703", "len_cl100k_base": 6185, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 14911, "total-output-tokens": 7378, "length": "2e12", "weborganizer": {"__label__adult": 0.0006151199340820312, "__label__art_design": 0.0191497802734375, "__label__crime_law": 0.000865936279296875, "__label__education_jobs": 0.0032501220703125, "__label__entertainment": 0.0003807544708251953, "__label__fashion_beauty": 0.00030350685119628906, "__label__finance_business": 0.001087188720703125, "__label__food_dining": 0.0006237030029296875, "__label__games": 0.0014638900756835938, "__label__hardware": 0.020538330078125, "__label__health": 0.0005197525024414062, "__label__history": 0.0007510185241699219, "__label__home_hobbies": 0.0003151893615722656, "__label__industrial": 0.00160980224609375, "__label__literature": 0.0004968643188476562, "__label__politics": 0.000240325927734375, "__label__religion": 0.0006470680236816406, "__label__science_tech": 0.1580810546875, "__label__social_life": 8.511543273925781e-05, "__label__software": 0.169677734375, "__label__software_dev": 0.6181640625, "__label__sports_fitness": 0.0001928806304931641, "__label__transportation": 0.0006575584411621094, "__label__travel": 0.0002732276916503906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30893, 0.01244]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30893, 0.60419]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30893, 0.8951]], "google_gemma-3-12b-it_contains_pii": [[0, 389, false], [389, 389, null], [389, 389, null], [389, 389, null], [389, 389, null], [389, 781, null], [781, 781, null], [781, 781, null], [781, 7677, null], [7677, 14529, null], [14529, 21917, null], [21917, 29507, null], [29507, 30893, null]], "google_gemma-3-12b-it_is_public_document": [[0, 389, true], [389, 389, null], [389, 389, null], [389, 389, null], [389, 389, null], [389, 781, null], [781, 781, null], [781, 781, null], [781, 7677, null], [7677, 14529, null], [14529, 21917, null], [21917, 29507, null], [29507, 30893, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30893, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30893, null]], "pdf_page_numbers": [[0, 389, 1], [389, 389, 2], [389, 389, 3], [389, 389, 4], [389, 389, 5], [389, 781, 6], [781, 781, 7], [781, 781, 8], [781, 7677, 9], [7677, 14529, 10], [14529, 21917, 11], [21917, 29507, 12], [29507, 30893, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30893, 0.0362]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
75c5bb34a60bbe86de4f1f66ace9036050a1d9ee
Change-Driven Model Transformations

Derivation and Processing of Change Histories

István Ráth\textsuperscript{1}, Gergely Varró\textsuperscript{2}, and Dániel Varró\textsuperscript{1}

\textsuperscript{1} Budapest University of Technology and Economics, Department of Measurement and Information Systems, \{rath,varro\}@mit.bme.hu
\textsuperscript{2} Department of Computer Science and Information Theory, H-1117 Magyar tudósok krt. 2, Budapest, Hungary, gervarro@cs.bme.hu

Abstract. Nowadays, evolving models are prime artefacts of model-driven software engineering. In tool integration scenarios, a multitude of tools and modeling languages are used, where complex model transformations need to incrementally synchronize various models residing within different external tools. In the paper, we investigate a novel class of transformations that are directly triggered by model changes. First, model changes in the source model are recorded incrementally by a change history model. Then a model-to-model transformation is carried out to generate a change model for the target language. Finally, the target change history model is processed (at any time) to incrementally update the target model itself. Moreover, our technique also allows incremental updates in an external model where only the model manipulation interface is under our control (but not the model itself). Our approach is implemented within the VIATRA2 framework, and it builds on live transformations and incremental pattern matching.

Keywords: Incremental model transformation, change models, change-driven transformations.

1 Introduction

Model transformations play a key role in model-driven software engineering by providing embedded design intelligence for automated code generation, model refactoring, model analysis, or reverse engineering purposes.

Most traditional model transformation frameworks support batch transformations, where the execution of a transformation is initiated (on demand) by a systems designer. As an alternative solution (proposed recently in [1,2]), live transformations (or active transformations) run in the background as daemons and continuously react to changes in the underlying models. In this respect, a transformation can be executed automatically as soon as a transaction on the model has completed. Up to now, the design and execution of batch transformations and live transformations were completely separated, i.e. the same transformation problem had to be formulated completely differently.

\textsuperscript{*} This work was partially supported by EU projects SENSORIA (IST-3-016004) and SecureChange (ICT-FET-231101).

In the paper, we bridge this conceptual gap by introducing change-driven model transformations. More specifically, we first define the concept of a change history model to serve as a history-aware log of elementary model changes, which records the causal dependencies and timing between such changes. We show how change history models can be derived incrementally by live transformations during model editing. Then we describe how change history models can be used to incrementally update a model asynchronously (at any desired time) by propagating changes using batch transformations.

The use of change history models in model-to-model transformation scenarios has far-reaching consequences, as incremental model transformations can be constructed with minimal knowledge about the current structure of the target model.
For instance, transformations can still be implemented when only identifiers and a model manipulation interface are known, but the rest of the actual target model is non-materialized (i.e. does not exist as an in-memory model within the transformation framework). As a result, our concepts can be easily applied in the context of runtime models as well as incremental model-to-code transformation problems (where the latter will actually serve as the running example of the paper).

The rest of the paper is structured as follows. In Section 2, a motivating case study is introduced as a running example for our paper. The main concepts of change-driven transformations and change history models are introduced in Section 3. Section 4 details the main steps of the approach on the running example. Finally, Section 5 summarizes related work and Section 6 concludes our paper.

2 Motivating Scenario

Our motivating scenario is based on an actual tool integration environment developed for the SENSORIA and MOGENTES EU research projects. Here, high-level workflow models (with control and data flow links, artefact management, and role-based access control) are used to define complex development processes, which are executed automatically by the JBoss jBPM workflow engine in a distributed environment consisting of Eclipse client workstations and Rational Jazz tool servers. The process workflows are designed in a domain-specific language, which is automatically mapped to an annotated version of the jPDL execution language of the workflow engine. jPDL is an XML-based language, which is converted to an XML-DOM representation once the process has been deployed to the workflow engine.

A major design goal was to allow the process designer to edit the process model and make changes without the need for re-deployment. To achieve this, we implemented an asynchronous incremental code-synchronizing model transformation. This means that (i) while the user is editing the source process model, the changes made are recorded. Then (ii) these changes can be mapped incrementally to the target jPDL XML model without re-generating it from scratch. Additionally, (iii) the changes can be applied directly on the deployed XML-DOM representation through jBPM's process manipulation DOM programming interface, but, (iv) in order to allow the changes to be applied to the remote workflow server, the actual XML-DOM manipulation is executed on a remote host asynchronously to the operations of the process designer.

Example. A simple tool integration workflow model is given in Fig. 1(a), together with its jPDL XML representation (in Fig. 1(b)). Moreover, a metamodel of the source language is given in Fig. 1(c). In the case of the target language, an interface is provided to manipulate XML documents (see Fig. 1(d)).

Metamodeling background. Since the actual tool integration framework is built upon the model repository and transformation support of the VIATRA2 framework [3], we also use it in the current paper for demonstration purposes. However, all metamodels will be presented as traditional EMF metamodels to stress that all the main concepts presented could be transferred to other modeling environments as well. VIATRA2 uses the VPM [4] metamodeling approach for its model repository, which uses two basic elements: entities and relations. An entity represents a basic concept of a (modeling) domain, while a relation represents the relationships between other model elements.
Furthermore, entities may also have an associated value, which is a string that contains application-specific data. Model elements are arranged into a strict containment hierarchy, which constitutes the VPM model space. Within a container entity, each model element has a unique local name, but each model element also has a globally unique identifier, which is called a fully qualified name (FQNs are constructed by hierarchical composition of local names, e.g. "workflow.model.node0"). There are two special relationships between model elements: the supertypeOf (inheritance, generalization) relation represents binary superclass-subclass relationships (like the UML generalization concept), while the instanceOf relation represents type-instance relationships (between meta-levels). By using an explicit instanceOf relationship, metamodels and models can be stored in the same model space in a compact way.

3 Change History Models in Incremental Model Synchronization

In the current paper, we investigate a model synchronization scenario where the goal is to asynchronously propagate changes in the source model $M_A$ to the target model $M_B$. This means that changes in the source model are not mapped on-the-fly to the target model, but the synchronization may take place at any time. However, it is important to stress that the synchronization is still incremental, i.e. the target model is not regenerated from scratch, but updated according to the changes in the source model. Moreover, our target scenario also requires that $M_B$ is not materialized in the model transformation framework, but accessed and manipulated directly through an external interface $IF$ of its native environment. This is a significant difference from traditional model transformation environments, where the system relies on model import and export facilities to connect to modeling and model processing tools in the toolchain.

To create asynchronous incremental transformations, we extend traditional transformations (which take models as inputs and produce models as output) with change-driven transformations, which take model manipulation operations as inputs and/or produce model manipulation operations as output. By this approach, our mappings may be executed without the need to materialize source and target models directly in the transformation system, and may also be executed asynchronously in time.

As we still rely on model transformation technology, operations on models need to be represented in the model space by special trace models, which encode the changes of models as model manipulation sequences. We call these models change history models (CHMs in short). These models are generated automatically on-the-fly as the source model changes (see $CHM_A$ in the left part of Fig. 2) using live transformations. Live transformations are triggered by event-driven condition-action rules whenever a change is performed in the model space, and create an appropriate change history model fragment (connected to those parts of the model which were affected by the change).

The actual model transformation between the two languages is then carried out by generating a change history model $CHM_B$ for the target language as a separate transformation (see the middle part of Fig. 2, and also note that traceability information between $CHM_A$ and $CHM_B$ can be recorded as inter-model links). As change history models represent a trace of model evolution, they may be automatically applied to models (see the right part of Fig. 2).
More precisely, we combine a snapshot of the model $M_B$ (representing the initial state) and a change history model $CHM_B$ (representing a sequence of operations applicable starting from the initial state) to create the final snapshot $M'_B$. In other words, the change history model $CHM_B$ represents an "operational difference" between $M'_B$ and $M_B$, with the order of operations preserved as they were actually performed on $M_B$.

### 3.1 Change History Models

Change history models are conceptually derived from the model manipulation operations defined on the host language. These operations may be generic (i.e. corresponding to graph-level concepts such as "create node", "create edge", "change attribute value"), or domain-specific (corresponding to complex operations such as "remove subprocess", "split activity sequence"). In this paper, we discuss the generic solution in detail; however, we also show how our approach can be extended to domain-specific languages.

**Change history metamodel.** The generic change history metamodel for VPM host models is shown in Fig. 3. CHM fragments are derived from the abstract Operation class, which can optionally be tagged with a Timestamp attribute for time-based tracing of, e.g., user editing actions. Operations are connected to each other by relations of type next, which enables the representation of operation sequences (transactions).

It is important to stress that CHMs do not directly reference their corresponding host models, but use fully qualified name (or unique ID) references. The reason for this is two-fold: (i) by using indirect references, CHMs may point to model elements that are no longer existent (e.g. have been deleted by a subsequent operation), and (ii) CHMs are not required to be materialized in the same model space as the host model (symmetrically, host models are not required to be materialized when processing CHMs). This allows decoupling the actual models from the transformation engine, which is a requirement for non-invasive scenarios where target models are indirectly manipulated through an interface.

By our approach, change history metamodel elements are either EntityOperations or RelationOperations. Entity operations use the parentFQN reference to define the containment hierarchy context in which the target entity is located before the operation represented by the CHM fragment was executed. Analogously, relation operations use srcFQN and trgFQN to define the source and target endpoints of the target relation element (prior to execution). Note that we omitted inheritance edges from EntityOperation and RelationOperation in Fig. 3 for the sake of clarity. All CHM elements correspond to elementary operations in the VPM model space, in the following categories (a code sketch follows the list):

- **creation** (shown on the far left): CreateEntity and CreateRelation represent operations when an entity or relation has been created (an entity in a given container, a relation between a source and target model element). Both CHM fragments carry information on the type (typeFQN) of the target element.
- **deletions** (shown on the near left): DeleteEntity and DeleteRelation correspond to deletions of entities and relations.
- **moves**: MoveEntity and SetRelationSource represent operations where an entity is moved to a new container (newParentFQN), or the source end of a relation is redirected (newSourceFQN).
- **updates**: SetName, SetValue, and SetRelationTarget represent updates to an element's name, to its value, and to the target end of a relation, respectively.
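To make Fig. 3's structure concrete, the following minimal Java sketch encodes the generic CHM fragments described above. It is an illustration of the metamodel, not the actual VIATRA2 implementation; the use of plain fields and `Optional` is our choice.

```java
import java.time.Instant;
import java.util.Optional;

// Generic CHM fragments: every fragment is an Operation, optionally
// time-stamped, chained by "next", and referring to host-model elements
// only indirectly via fully qualified names (FQNs), so that the host
// model need not be materialized alongside the CHM.
abstract class Operation {
    Optional<Instant> timestamp = Optional.empty();
    Operation next;  // the following operation in the transaction, if any
}

abstract class EntityOperation extends Operation {
    String targetFQN;  // the entity acted upon
    String parentFQN;  // containment context before the operation executed
}

abstract class RelationOperation extends Operation {
    String targetFQN;  // the relation acted upon
    String srcFQN;     // source endpoint prior to execution
    String trgFQN;     // target endpoint prior to execution
}

class CreateEntity extends EntityOperation {
    String typeFQN;    // type of the newly created entity
}

class CreateRelation extends RelationOperation {
    String typeFQN;    // type of the newly created relation
}

class DeleteEntity extends EntityOperation {}
class DeleteRelation extends RelationOperation {}
```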
4 Change-Driven Transformations

In this section, we demonstrate the concept and application of change-driven transformations (see Fig. 2) using change history models by elaborating the motivating scenario described in Section 2. First, we demonstrate (in Section 4.1) how CHMs can be derived automatically by recording model manipulations using live transformations. We introduce both generic (metamodel-independent) and domain-specific (metamodel-dependent) techniques to achieve this. Then we discuss (in Section 4.2) how model transformations can be designed between the CHMs of two different languages. Finally, we describe (in Section 4.3) how CHMs can be asynchronously processed to incrementally update a model residing in a model repository or within a third-party tool accessed via an external interface.

### 4.1 Automatic Generation of CHMs by Live Transformations

First, we demonstrate the automatic generation of change history models for recording modification operations carried out on the host model. Model changes may be observed using various approaches, e.g. by model notification mechanisms such as the EMF notification API, where the model persistence framework provides callback functions for elementary model changes. This approach is limited to recording only basic model manipulation operations, i.e. the appearance of a complex model element (e.g. a graph node with attribute values and type information) requires the processing of a sequence of elementary operations (e.g. "create node", "set value", "assign type", etc.). If the modification operations interleave (e.g. in a distributed transactional environment, where multiple users may edit the same model), it is difficult to process operation sequences at this low abstraction level.

In contrast, live transformations [2] define changes on a higher abstraction level, as a new match (or lost match) of a corresponding graph pattern (as used in graph transformations [5]). By this approach, we may construct a complex graph pattern from elementary constraints, and the system will automatically track when a new match is found (or a previously existing one is lost) – thus, model manipulation operations may be detected on a higher abstraction level, making it possible to assign change history models not only to elementary operations, but also to domain-specific ones.

More precisely, live transformations are defined by event-condition-action triples:

– an event is defined with respect to a graph pattern, and may correspond to an appearance of a newly found match, or a disappearance of a previously existing one.
– conditions are evaluated on the transaction of elementary operations which resulted in the triggering of the event. They correspond to elementary operations affecting elements of the subgraph identified by the event's (newly found or deleted) match.
– actions are model manipulation operations to be carried out on the model.

Basic patterns. Fig. 4 shows three basic graph patterns and their VIATRA2 transformation language representations. Pattern entity_in_parent encompasses a containment substructure where an entity $E$ is matched in a given parent entity $Parent$. A new match for this pattern occurs when any entity is created in the host model (when a new match is detected, concrete references as substitutions for pattern variables $E$, $Parent$ are passed to the transformation engine). Similarly, pattern relation_source_target corresponds to a relation $R$ with its source $S$ and target $T$ elements, while pattern modelelement_type references any model element with its type. These patterns correspond to basic notions of the VPM (typed graph nodes and edges), and may be combined to create precondition patterns for event-driven transformation rules.
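Before looking at the concrete VIATRA2 rules, the event-condition-action shape of a live transformation can be mimicked in a few lines of plain Java. This is only a sketch of the control flow; incremental pattern matching itself (the hard part that VIATRA2 provides) is abstracted away behind the match type, and all names below are ours.

```java
import java.util.function.Consumer;
import java.util.function.Predicate;

// One event-condition-action rule: the engine calls onNewMatch() when a
// new match of the rule's precondition pattern appears; the condition is
// checked against the triggering match, and only then does the action run.
final class LiveRule<M> {
    private final Predicate<M> condition;
    private final Consumer<M> action;

    LiveRule(Predicate<M> condition, Consumer<M> action) {
        this.condition = condition;
        this.action = action;
    }

    void onNewMatch(M match) {
        if (condition.test(match)) action.accept(match);
    }
}

// A match of the entity_in_parent pattern, carried as FQNs.
record EntityMatch(String parentFQN, String entityFQN, String typeFQN) {}

// Sketch of handleCreation(): on a completed entity creation, append a
// CreateEntity fragment (see the CHM sketch above) to the recorded chain.
final class ChmRecorder {
    Operation head, tail;

    final LiveRule<EntityMatch> handleCreation = new LiveRule<>(
            m -> true,  // stands in for the when(create(E)) condition clause
            m -> {
                CreateEntity ce = new CreateEntity();
                ce.targetFQN = m.entityFQN();
                ce.parentFQN = m.parentFQN();
                ce.typeFQN = m.typeFQN();
                if (tail == null) head = tail = ce;
                else { tail.next = ce; tail = ce; }
            });
}
```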
Generic derivation rules. On the left, Fig. 5 shows a sample CHM generation rule for tracking the creation of model elements. A triggered graph transformation rule is defined for a composite disjunctive pattern, which combines the cases of new appearances of entities and relations into a single event. Condition clauses (when(create($E$)), when(create($R$))) are used to distinguish between the cases where an entity or a relation was created. Finally, action sequences (encompassed in seq{} rules after the when-clauses) are used to instruct the VIATRA2 engine to instantiate the change history metamodel, create a CreateEntity or CreateRelation model element, and set their references to the newly created host model entity/relation.

The right side of Fig. 5 shows an example execution sequence of this rule. The sequence starts with a model consisting only of a top-level container node $w0$ of type Workflow. In Step 2, the user creates a new Invocation node $i0$ inside $w0$. Note that on the VPM level, the creation of $i0$ actually consists of three operations: (1) create entity, (2) set entity type to Invocation, (3) move entity to its container.

Fig. 4. Patterns for identifying relevant model manipulation events

Fig. 5. Live transformation rule for automatic CHM generation

However, the live transformation engine triggers the execution of `handleCreation()` only if the subgraph $w_0 - i_0$ is complete. In Step 3, `handleCreation()` is fired with the match \{`Parent = w_0, E = i_0, Type = Invocation`\}, and – as the condition $create(E)$ is satisfied in this case – the appropriate `CreateEntity` instance $ce_0$ is created.

**Domain-specific CHMs.** Change history models can also be defined on a higher abstraction level, directly applicable to domain-specific modeling languages. In Fig. 6(a), a part of the change history metamodel for manipulating jPDL XML documents is shown. This metamodel uses unique IDs to refer to (non-materialized) model elements (as defined in the jPDL standard); since jPDL documents also follow a strict containment hierarchy, creation operations (as depicted in Fig. 6(a)) refer to a `parentID` in which an element is to be created. In the follow-up examples of our case study, we will make use of `CreateJPDLNode` and `CreateJPDLAttribute` to illustrate the usage of this domain-specific change history metamodel.

It is important to note that domain-specific CHMs can be created analogously to generic ones, by using more complex graphs as precondition patterns for events. The domain-specific CHM construction rule in Fig. 6(b) includes direct type references to the domain metamodel (Fig. 1(c)) – in this case, it fires after the creation of an `Invocation` and its associated `DataInputs` and `DataOutputs` has completed, and it creates three connected domain-specific CHM fragments accordingly.

### 4.2 Model Transformations between Change History Models

Since CHMs are automatically derived as models are modified, they essentially represent a sequence of operations that are valid starting from a given model snapshot (Fig. 2). As such, they may be used to drive mapping transformations between two modeling languages: such a change-driven transformation takes the CHMs of the source model and maps them to CHMs of the target model.
This approach differs crucially from traditional model transformations in that the mapping takes place between model manipulation operations rather than models, which makes non-invasive transformations possible (where the models are not required to be materialized in the transformation system). Fig. 7 shows an example transformation rule where the creation of an *Invocation* in the domain-specific workflow language is mapped to the creation of a corresponding jPDL Node and its attribute. In this case, a batch graph transformation rule is used; however, this transformation may also be formulated as a live transformation. The rule looks for a *CreateEntity* element referencing a node of type *Invocation*, and maps it to the domain-specific CHMs of the jPDL language. As *Invocations* are represented by jPDL Nodes with an attribute node, the target CHM will consist of two “create”-type elements, chained together by the *Operation.next* relation.

The core idea of creating CHM transformations is the appropriate manipulation of reference values pointing to their respective host models (as CHMs only carry information on the type of the operation, the contextual information is stored in their references). In this example, we make use of the fact that both source and target models have a strict containment hierarchy (all elements have *parents*), which is used to map corresponding elements to each other: based on parentFQN in the source model, we calculate the target parent’s ID parentID as name(CE.parentFQN). Similarly, the target jPDL node’s ID targetID is calculated as the concatenation of parentID and name(CE.targetFQN), to place the target node under the target parent. Finally, the attribute functionName designates a particular function on a remote interface which is invoked when the workflow engine interprets an Invocation workflow node. It is represented by a separate node in the jPDL XML-DOM tree. The targetValue attribute of the additional CreateJPDLAttribute element is derived from the appropriate attribute value of the Invocation node in the source model (as denoted by the ref(CE.targetFQN) function in the sample code).

The right side of Fig. 7 shows a sample execution result of the mapCreate() rule. First, in Step 4, the precondition pattern is matched, and a match is found for the subgraph created in Step 3 of Fig. 5. Following the successful matching, the action sequence is executed to create the domain-specific CHM nodes $cjn_0$ (corresponding to the creation of a jPDL Node) and $cj_0$ (creation of a jPDL attribute node). These CHM nodes are chained together by a next relation to be executed in sequence.

**Designing change-driven transformations.** When designing transformations of change history models, it is important to keep in mind that the transformation will operate on operations rather than models. Consequently, the first step in designing such a transformation is to define the concept of an operation – which may be generic (graph-level operations) or domain-specific. This requires a partitioning scheme for the host modeling language, where the partitions correspond to parts whose creation/deletion constitutes an operation which can be represented by a CHM fragment. It is important to note that the granularity of this partitioning can be determined freely (since it is possible to perform the “aggregation” of operations in, e.g.,
the transformation between CHMs of the source and target host languages); however, we have found that it is useful to define these partitions so that they represent a consistent change (i.e. the results of valid modification steps between two consistent states of the host model).

### 4.3 Processing Change History Models

On the macro level, change history models are represented as chains of parametrized elementary model manipulation operations. As such, they can be processed linearly, progressing along the chain until the final element is reached (thus modeling the execution of a transaction). The consumption of a CHM element is an interpretative step with the following actions performed in the context defined by the CHM’s references:

– *creation*: the target entity/relation is created with the correct type assignment; entities are created in the container designated by the parent’s fully qualified name (parentFQN), relations are created between the source and target elements referenced by sourceFQN and targetFQN, respectively;
– *moves*: for MoveEntity, the target entity is moved to the container designated by newParentFQN; for SetRelationSource, the source end of the target relation is redirected according to newSourceFQN;
– *updates*: `SetName` and `SetValue` are mapped to updates of the name and value attributes; `SetRelationTarget` is handled similarly to `SetRelationSource`;
– *deletions*: `DeleteEntity` and `DeleteRelation` are interpreted as deletions of their targets (targetFQN).

**Applying CHMs to non-materialized models.** As Fig. 2 shows, we apply CHMs to manipulate non-materialized models through an interface. The peculiarity of this scenario is that instead of working on directly accessible in-memory models, the transformation engine calls interface functions which only allow basic queries (based on IDs) and elementary manipulation operations. In this case, CHMs are very useful since they allow incremental updates, as they encode directly applicable operation sequences.

**Case study technical details.** For the jPDL models of the motivating scenario, we mapped the XML-DOM process model manipulation programming interface to VIATRA2’s native function API, which enables the system to invoke arbitrary Java code from the transformation program. The following native functions are used:

– `getElementById(ID)`: retrieves a jPDL element identified by its unique ID.
– `createElement(parentRef, targetID)`: creates a new jPDL DOM element as a child of its parent (identified by `parentRef`), with a given unique ID (`targetID`).
– `addElement(elementRef, DocID)`: adds the element `elementRef` to the jPDL DOM identified by `DocID`.
– `setContents(elementRef, text)`: sets the textual content of the given DOM element (`elementRef`) to `text`.

```
gtrule newCompoundJPDLNode(JPDL_DOM) = {
 precondition(CJN, CJA) = {
  CreateJPDLNode(CJN);
  CreateJPDLAttribute(CJA);
  Operation.next(_, CJN, CJA);
 }
 action {
  // create jPDL Node -- Step 7
  let TargetNode = createElement(getElementById(CJN.parentID), CJN.targetID),
      Result0 = addElement(TargetNode, JPDL_DOM)
  in println("Debug created JPDL Node: " + Result0);
  // create jPDL Attribute -- Step 8
  let TargetAttrNode = createElement(getElementById(CJA.parentID), CJA.targetID),
      Result1 = setContents(TargetAttrNode, ref(CJN.targetFQN).functionName),
      Result2 = addElement(TargetAttrNode, JPDL_DOM)
  in println("Debug created JPDL Attribute: " + Result2);
 }
}
```

Fig. 8. Applying CHMs through the jPDL XML-DOM API
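For orientation, here is a minimal Java sketch (ours) of what such native functions could look like over the standard org.w3c.dom API; the wiring into VIATRA2's native function mechanism is omitted, addElement is folded into createElement (appendChild already inserts the element into the document), and the tag and attribute names are assumptions.

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Illustrative counterparts of the native functions used by the rule in
// Fig. 8, written against the standard org.w3c.dom API.
class JpdlDomAccess {
    private final Document doc;

    JpdlDomAccess(Document doc) { this.doc = doc; }

    // getElementById(ID): a direct DOM lookup. This only works if the "id"
    // attribute is declared to be of type ID (e.g. by the jPDL schema);
    // otherwise a tree traversal would be required.
    Element getElementById(String id) {
        return doc.getElementById(id);
    }

    // createElement(parentRef, targetID): create a new jPDL element with the
    // given unique ID under the designated parent. The tag name "node" is an
    // assumption of this sketch.
    Element createElement(Element parentRef, String targetID) {
        Element e = doc.createElement("node");
        e.setAttribute("id", targetID);
        parentRef.appendChild(e); // attach under the designated parent
        return e;
    }

    // setContents(elementRef, text): overwrite the textual content of the
    // given DOM element.
    void setContents(Element elementRef, String text) {
        elementRef.setTextContent(text);
    }
}
```

An applier then simply walks a CHM chain along its next references and invokes one of these functions per fragment, as the rule shown above does.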
**Example transformation rule.** In this final case study example, we define an application rule based on domain-specific CHMs for the jPDL XML-DOM model (Fig. 6(a)). Fig. 8 shows the newCompoundJPDLNode() rule, which is used to interpret a subsequence of CHM chains for the jPDL domain. More precisely, this rule’s precondition matches the pair of CreateJPDLNode and CreateJPDLAttribute CHM fragments which corresponds to the addition of a new “compound” jPDL node (with a specified function invocation attribute). The rule uses the native functions createElement and addElement to instantiate new jPDL XML elements directly in the deployed process model on the workflow server; setContents is used to overwrite the attribute node’s textual content.

The left side of Fig. 8 shows the final three steps of our running example. In Step 6, the initial state of the deployed workflow model is shown: the process definition corresponding to Workflow $w_0$ is still empty. During the rule’s execution, first the jPDL Node $i_0$ is created (Step 7), and then, in Step 8, the attribute node is added with the appropriate textual content. (Debug calls are used to write debugging output to the VIATRA2 console.) The entire algorithm which applies CHMs follows the linear sequence of operations along the relations of type Operation.next; the first operation in a transaction can be determined by looking for a CHM fragment without an incoming Operation.next edge.

## 5 Related Work

We now give an overview of various approaches that show similarity to our proposal.

**Event-driven techniques.** Event-driven techniques, which are the technological basis of live model transformations, have been used in many fields. In relational database management systems (RDBMS), already the concept of triggers [6] can be considered as simple operations whose execution is initiated by events. Later, event-condition-action (ECA) rules [7] were introduced for active database systems as a generalization of triggers, and the same idea was adopted in rule engines [8] as well. The specification of live model transformations is conceptually similar to ECA rules (see Section 4.1). However, ECA-based approaches lack support for triggering by complex graph patterns, which is an essential scenario in model-driven development.

**Calculation of model differences.** Calculating differences (deltas) of models has been widely studied due to its important role in the process of model editing, which requires undo and redo operations to be supported. In [9], metamodel-independent algorithms are proposed for calculating directed (backward and forward) deltas, which can later be merged with the initial model to produce the resulting model. Unfortunately, the algorithms proposed by [9] for difference and merge calculation may only operate on a single model, and they are not specified by model transformation. In [10], a metamodel-independent approach is presented for visualizing backward and forward directed deltas between consecutive versions of models. Differences (i.e., change history models) have a model-based representation (similarly to [11]), and calculations are driven by (higher-order) transformations in both [10] and our approach. However, in contrast to [10] and [11], our current proposal operates in an exogenous transformation context to propagate change descriptions from source to target models.

**Incremental synchronization for exogenous model transformations.** Incremental synchronization approaches already exist in the model-to-model transformation context (e.g. [12]).
One representative direction is to use triple graph grammars [13] for maintaining the consistency of source and target models in a rule-based manner. The proposal of [14] relies on various heuristics of the correspondence structure. Dependencies between correspondence nodes are stored explicitly, which drives the incremental engine to undo an applied transformation rule in case of inconsistencies. Other triple graph grammar approaches for model synchronization (e.g. [15]) do not address incrementality. Triple graph grammar techniques are also used in [16] for tool integration based on UML models. The aim of that approach is to provide support for change synchronization between various languages in several development phases. Based on an integration algorithm, the system merges changed models on user request. Although it is not a live transformation approach, it could benefit from being implemented as such.

The approach of [17] shows the largest similarity to our proposal, as both (i) focus on change propagation in the context of model-to-model transformation, (ii) describe changes in a model-based and metamodel-independent way, and (iii) use rule-driven algorithms for propagating changes of source models to the target side. In the proposal of [17], however, target models must be materialized, and they can also be modified manually, which requires a complex merge operation to be performed to obtain the derived model. In contrast, our algorithms can be used on non-materialized target models, and the derived models are computed automatically on the target side.

## 6 Conclusion and Future Work

In this paper, we discussed how model synchronization can be carried out using change-driven model transformations, which rely upon the history of model changes. We presented an approach to automatically (and generically) derive change history models by recording changes in a (source) model using live transformations. Then a change history model of the target language is derived by a second (problem-specific) model transformation. Finally, the target change history model can automatically drive the incremental update of the target model itself, even when only an external model manipulation interface is available for the target model. Our approach was exemplified using an incremental code generation case study.

As future work, we plan to investigate how to derive aggregated and history-independent change delta models (as in [10]) automatically as the union of change history models. Additionally, we plan to elaborate design methodologies for change-driven transformations, and intend to investigate the correctness and consistency checking of change-driven transformations (with respect to a batch transformation reference). Furthermore, we aim at using change history models for model merging.

References
{"Source-Url": "http://www.researchgate.net/profile/Daniel_Varro/publication/221223717_Change-Driven_Model_Transformations/links/09e4150b7e7c4318f0000000.pdf", "len_cl100k_base": 6824, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 34039, "total-output-tokens": 8594, "length": "2e12", "weborganizer": {"__label__adult": 0.0002899169921875, "__label__art_design": 0.00031948089599609375, "__label__crime_law": 0.00025010108947753906, "__label__education_jobs": 0.0006012916564941406, "__label__entertainment": 4.845857620239258e-05, "__label__fashion_beauty": 0.00013077259063720703, "__label__finance_business": 0.00017344951629638672, "__label__food_dining": 0.0002484321594238281, "__label__games": 0.0003631114959716797, "__label__hardware": 0.0004949569702148438, "__label__health": 0.00033974647521972656, "__label__history": 0.00020575523376464844, "__label__home_hobbies": 6.008148193359375e-05, "__label__industrial": 0.0003097057342529297, "__label__literature": 0.00021898746490478516, "__label__politics": 0.00019669532775878904, "__label__religion": 0.00034046173095703125, "__label__science_tech": 0.01262664794921875, "__label__social_life": 6.860494613647461e-05, "__label__software": 0.006023406982421875, "__label__software_dev": 0.97607421875, "__label__sports_fitness": 0.0002288818359375, "__label__transportation": 0.0003387928009033203, "__label__travel": 0.00016224384307861328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37367, 0.01648]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37367, 0.28527]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37367, 0.88001]], "google_gemma-3-12b-it_contains_pii": [[0, 2640, false], [2640, 5962, null], [5962, 7306, null], [7306, 10326, null], [10326, 13669, null], [13669, 14959, null], [14959, 18619, null], [18619, 18750, null], [18750, 20736, null], [20736, 22184, null], [22184, 25403, null], [25403, 27757, null], [27757, 31280, null], [31280, 34633, null], [34633, 37367, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2640, true], [2640, 5962, null], [5962, 7306, null], [7306, 10326, null], [10326, 13669, null], [13669, 14959, null], [14959, 18619, null], [18619, 18750, null], [18750, 20736, null], [20736, 22184, null], [22184, 25403, null], [25403, 27757, null], [27757, 31280, null], [31280, 34633, null], [34633, 37367, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37367, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37367, null]], "pdf_page_numbers": [[0, 2640, 1], [2640, 5962, 2], [5962, 7306, 3], [7306, 10326, 4], [10326, 13669, 5], [13669, 14959, 6], [14959, 18619, 7], 
[18619, 18750, 8], [18750, 20736, 9], [20736, 22184, 10], [22184, 25403, 11], [25403, 27757, 12], [27757, 31280, 13], [31280, 34633, 14], [34633, 37367, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37367, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
b5d0b7cb3226281c4f7538ff5f1c95b0e181ed46
A Final Report
Grant No. NAG-1-1073
November 22, 1989 – November 21, 1990

A RESEARCH PROGRAM IN EMPIRICAL COMPUTER SCIENCE

Submitted to: National Aeronautics and Space Administration, Langley Research Center, Hampton, VA 23665
Attention: D. E. Eckhardt, Jr., ISD, M/S 478

Submitted by: J. C. Knight, Associate Professor, Department of Computer Science

SCHOOL OF ENGINEERING AND APPLIED SCIENCE, UNIVERSITY OF VIRGINIA, CHARLOTTESVILLE, VIRGINIA

Report No. UVA/528334/CS91/101, February 1991

TABLE OF CONTENTS

1. Introduction
2. Background and Justification in General
3. Statistical Analysis of Experimental Data
4. A Paradigm for Experimentation
5. Summary of Evaluation Experiment
   5.1 Existing Techniques
   5.2 Phased Inspections
   5.3 Phased Inspection Support Toolset
   5.4 Trial Inspections
   5.5 Conclusions
Bibliography

1. INTRODUCTION

During the grant reporting period our primary activities have been to begin preparation for the establishment of a research program in experimental computer science. The focus of research in this program will be safety-critical systems. Many questions that arise in the effort to improve software dependability can only be addressed empirically. For example, there is no way to predict the performance of the various proposed approaches to building fault-tolerant software. Performance models, though valuable, are parameterized and cannot be used to make quantitative predictions without experimental determination of the underlying distributions. In the past, experimentation has been able to shed some light on the practical benefits and limitations of software fault tolerance. It is common, also, for experimentation to reveal new questions or new aspects of problems that were previously unknown. A good example is the Consistent Comparison Problem that was revealed by experimentation and subsequently studied in depth. The result was a clear understanding of a previously unknown problem with software fault tolerance.

The purpose of a research program in empirical computer science is to perform controlled experiments in the area of real-time, embedded control systems. The goal of the various experiments will be to determine better approaches to the construction of the software for computing systems that have to be relied upon. As such it will validate research concepts from other sources, provide new research results, and facilitate the transition of research results from concepts to practical procedures that can be applied with low risk to NASA flight projects. The target of experimentation will be the production software development activities undertaken by any organization prepared to contribute to the research program. Experimental goals, procedures, data analysis and result reporting will be performed for the most part by the University of Virginia.

This report is organized as follows. In section 2, a review of the background and the major issues concerning empirical computer science is presented. Some of the statistical issues faced by researchers undertaking experiments are discussed in section 3.
A new paradigm for experimentation is outlined in section 4, and a preliminary evaluation experiment is summarized in section 5. Finally, a bibliography of recent papers on the subject is included. So many papers have been written that relate to this project that most are not cited individually in the body of the report.

2. BACKGROUND AND JUSTIFICATION IN GENERAL

Many important questions in software engineering remain unanswered because there is insufficient opportunity for experimental evaluation of issues. There is no national resource for experimentation in software engineering despite the fact that software is a major industry. There are national facilities for experimentation in other areas, high energy physics for example, even though in many cases such areas are not associated with a specific industry. Some experimentation has taken place at universities but the results, though frequently useful, do not necessarily apply to industrial environments. Much less experimentation has been performed in realistic production software developments. An important exception is the Software Engineering Laboratory (SEL) operated jointly by the University of Maryland, NASA Goddard Space Flight Center, and Computer Sciences Corporation [22]. The SEL has been operating for approximately thirteen years and has produced a wealth of important research results during that period. The emphasis of the SEL is efficient development of ground-based software. The research undertaken has been very varied in nature, covering topics such as measurement of programmer activities to help validate cost models, performance comparison of programmers using Ada and FORTRAN, and various evaluations of test methods on production software.

Experimentation in software engineering is limited for three major reasons:

1. **It is expensive.** Any effort to perform experiments in the area of software engineering involves building software, and that is expensive. Worse still, for results to be believed, they should come from a statistically valid sample of data. That might involve repeating the same software engineering activity several times in order to acquire adequate data. The expenditure of sufficient resources to perform these experiments with professional programming staffs and equipment is beyond the capacity of industrial software development organizations. It is for this reason that many of the experiments that are performed take place in universities using student programmers and teaching equipment.

2. **It requires flexibility in the development process.** The approach to experimentation employed in the SEL reduces the cost substantially by using production software development as the target of experimentation. With this method, a piece of software that is actually needed is produced with designated funds, but the process of production is observed and measured as the target of experimentation. This process is not perfect in that it is not possible to control all the independent variables in the way that a researcher might prefer. For example, the total staff assigned to the development cannot be changed, the programming language and target computers cannot be changed, and the overall software development method cannot be changed. However, the approach does offer considerable opportunities for useful experimentation, and some relaxation of the restrictions just outlined is possible by performing some experiments separately from development.
For example, new concepts in testing can be explored by taking the software as it is produced and testing it in an experimental manner in parallel with the conventional testing performed by the development team. Unfortunately, even the approach used by the SEL is not without cost. Any experimentation involving observation disturbs the subject being observed. In order to perform experiments on production software development activities, those performing the activities must be prepared to be observed, the cost of observation must be met, and the disturbance to the development operation resulting from the observation must be tolerated. Industrial software development activities are typically performed under contract and according to a prescribed schedule. Often the disturbance associated with even limited experimentation is sufficient that industrial organizations are not willing to participate in such experiments even though they admit their value.

3. **Industrial software development often has restricted access.** Although some industrial organizations are prepared to undertake experiments in software engineering, it is often not possible because the software that would be the subject of investigation is either classified or proprietary.

Much of the software development undertaken by NASA and its contractors is free of the various restrictions outlined above. The very nature of the agency includes a desire for research and experimentation, and where obstacles are present that would normally inhibit experimentation, there is a desire to remove the obstacles to promote better and more extensive research. The disturbance resulting from experimentation mentioned above is inevitable but likely to be tolerated within NASA provided it is not excessive. In addition, much of the software produced is neither classified nor proprietary, yet it is completely realistic, allowing meaningful experimentation.

3. STATISTICAL ANALYSIS OF EXPERIMENTAL DATA

The basic goal of a research program in empirical computer science is to determine which tools and techniques can be depended upon to support the development of software for safety-critical systems. As noted in section 1, many of the results that must be obtained can only be obtained empirically. Virtually none of the significant results depend upon simple constants. Rather, they depend on the comparison of random variables. For example, an important question is whether a formal specification technique will permit systems to be built with higher reliability than informal specification techniques. This cannot be determined definitively by a simple comparison of single systems built using the two specification methods. The degree of difference between the two is a random variable, and what is required is information about its distribution.

The most appropriate way to perform such a comparison is with a statistical hypothesis test. Such tests allow conclusions to be drawn of the form “method A is better than method B” with a certain probability, or confidence, that the conclusion is correct. Such hypothesis tests permit higher levels of confidence if more data are available about the underlying populations. In the limiting case, where all the population data are available, clearly the confidence level is 100%. Obtaining confidence levels that are usefully high implies having a large set of data points from the two distributions being compared.
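To fix ideas, the comparison just described can be phrased as a standard one-sided hypothesis test; this generic formulation is ours, not the report's.

```latex
% Comparing a quality metric Q (a random variable) under methods A and B.
% H0: method B is no better than method A;  H1: method B is better.
H_0 : \mu_Q^{(B)} \le \mu_Q^{(A)}
\qquad \text{vs.} \qquad
H_1 : \mu_Q^{(B)} > \mu_Q^{(A)}
```

Rejecting H0 at significance level α supports the statement “method B is better than method A” with confidence 1 − α, and smaller values of α (higher confidence) require more data points from the two distributions, which is exactly the constraint discussed below.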
In the context of the experimentation being discussed here, this means that several development activities need to be observed, some using the original method and some using the proposed new method. Unfortunately, such experimentation is out of the question in software engineering. More importantly, even experimentation in which a single control project is available for comparison with a single project using a new technique is obviously very expensive. Funding for control studies is very unlikely to be available. The results of this situation are:

1. It is unlikely that statistically valid conclusions about the effect of a new technique, method, or tool could ever be drawn. Thus statements of the form “method B provides an improvement of Y% in quantity Q over method A with confidence C” are unlikely ever to be possible. At best, observed values of some quantity will be available and reported. This is a serious yet unavoidable problem and forces the user of such results to draw informal conclusions and hope they are valid. This does not mean that such experiments should not be performed. It means that trustworthy quantitative conclusions cannot be drawn. However, data collection under such circumstances can give great insight and permit informal conclusions to be drawn that are almost certainly right.

2. On the brighter side, a single data point is sufficient to reject certain hypotheses, and this can be very useful. For example, a hypothesis of the form “method B provides an improvement of Y% in quantity Q over method A” can be rejected if an experiment with a control does not obtain a Y% improvement. Of course, even if a Y% improvement is obtained, the hypothesis cannot thereby be accepted.

3. An area where good results can be obtained is feasibility. At this stage in our understanding, there are many proposed techniques that have not even been shown to be feasible. For example, the use of formal specifications on a project involving many programmers has never been shown to be a realistic approach. An experiment in which the question of feasibility is investigated could obviously permit positive conclusions to be drawn.

4. A PARADIGM FOR EXPERIMENTATION

In the area of dependable computing, we find ourselves in the same situation that faced the general software engineering community when the Goddard SEL was formed. It is tempting, therefore, to establish a program of experimentation to support dependable computing using the SEL as a model. Upon closer examination of the SEL program, it is clear that some changes have to be made before the SEL model can be used.

As noted above, the cost of experimentation in the SEL is kept within manageable limits by, for the most part, using production software development as the target of observation. While providing the great benefit of reducing cost, this also limits the range of experiments that can be undertaken. Experiments involving the development of production software must be relatively low risk or they might jeopardize the successful completion of the product. Thus an experiment that wished to use a totally novel and untried tool or technique would be very hard to perform. In the context of the SEL, this is not a major limitation since there are so many important but low-risk experiments that can be performed. This results largely from the fact that an established and extensive development method is in place and generating production software on time at NASA Goddard.
A characterization of the SEL experimentation process is shown in figure 1. Note that the emphasis is on technique selection rather than the creation of new tools or techniques. The situation with development methods for safety-critical systems is such that a conservative approach to experimentation cannot be taken. There is no corresponding established approach to software development to which a program of experimentation could add technique selection or modification. In the area of safety-critical software development, many completely fundamental questions remain. For example, a central issue is the role of formal methods and, specifically, whether an entire development method based on formal methods could offer a route to the routine development of software with adequate dependability. The experiments required are driven by questions that are associated with substantial risk.

The paradigm for experimentation that is proposed, therefore, is one in which production software is built in a laboratory setting but is subjected to industrial constraints. The development would, however, involve new and untried methods, or methods that have not been tried previously in an industrial setting. The risks would be high in that useful products might not be produced. This is precisely why such experiments are required, since resolving the risk is a step that must be undertaken before more detailed information on methods can be obtained and before the methods can be applied routinely with confidence in industrial production development.

Figure 2 shows the proposed paradigm for experimentation. It focuses on innovation in tools, techniques, and methods. It admits that such concepts might result from observational experiments, and that they will need to be evaluated empirically. Thus a major aspect of the paradigm is to seek new concepts, pose research questions concerning the feasibility, relevance, or performance of the concept, and then design and carry out experiments based on these questions.

Within the general paradigm of experimentation, there are essentially three types of experiment that can be performed. They will be referred to here as *fully controlled*, *semi-controlled*, and *non-controlled*. Fully controlled experiments are just that: fully controlled. All of the independent variables having influence over the outcome and all quantities affecting the statistical results can be set by the researcher. A predefined application is developed in a statistically significant number of replicates by separate staffs carefully selected to eliminate statistically meaningful differences in experience, abilities, education, etc. The individual staffs would use the same techniques and tools except for the one under study. The resulting software would be analyzed to determine whether any of the differing techniques produces better results according to some metric. For example, an experiment might develop software with two different programming languages, showing whether one language better lends itself to producing reliable code.

Fully controlled experiments are expensive, but very desirable. A fully controlled experiment could be used, for example, to explore the benefits of using formal specifications versus informal specifications. Informal specifications for a predefined application would be rewritten in various formal notations.
Groups of programmers, carefully selected to minimize differences in experience and ability, would develop software independently from the different forms of the specifications. During the development process, measurements and observations would include:

(1) Tools required during the development process.
(2) Acceptability of the formal specifications to the programmers.
(3) Questions that arise about the specifications (formal and informal).
(4) Tools required to write formal specifications.
(5) Errors found in specifications (formal and informal).

The experiment would ultimately compare the reliability of software developed from formal specifications with software developed from informal specifications.

Semi-controlled experiments control some but not all aspects of the development process. Those factors that are not controlled vary under whatever influences usually operate, and the results of the experiment are conditional on the values that the non-controlled independent variables take. The extents and types of change that will be tolerated by the development environment determine to what degree these types of experiments can be done. In the context of assessing the performance of formal specifications, a semi-controlled experiment could be used to indicate how difficult it is to develop software with formal specifications. Informal specifications for an existing application would be rewritten in a formal notation. Programmers assigned to the development would then use the formal specifications. In such an experiment, the application, the staff, the language, and the computers used would not be controlled by the researcher, but the results might reveal useful information, such as whether using formal specifications is feasible in a productive development environment, what programmer training is required, what tools might be useful, etc.

Non-controlled experiments interfere very little with the existing development process. These types of experiments observe and measure the development process, providing very useful information about its effectiveness. However, it is virtually impossible to get meaningful quantitative data for comparative purposes from such efforts. While non-controlled experiments on existing applications do not control the development process, they do disturb it because of the inevitable intrusion resulting from data collection. How data collection is done depends on what data are available and in what form. For example, are the specifications, the cost estimates, the expected code size, the staff levels, the application details, the development tools, and the development hardware available? Many times even non-controlled experiments fail because even minimal data collection is not performed by the development organization. Interference with the development process can be reduced by automating the data collection as much as possible. How much automation is possible depends on whether access to code and other documents in electronic form is provided and whether modifications to the operating system used for the development are possible. Removing code and other artifacts from the development environment for testing and analysis at the laboratory can also reduce the disturbance of experimentation and provide opportunities to perform more controlled, desirable experiments. Of course, the laboratory has to be made aware of any special purpose hardware required by the code and artifacts that might restrict analysis.
Considering once again the example of assessing the benefits of formal specifications, if a non-controlled experiment is all that can be achieved, useful results can still be obtained. An experiment could determine, for example, the feasibility of formal specifications. Using non-development staff, an attempt could be made to rewrite informal specifications for an existing application in various formal notations in parallel with the production development. Such an experiment would indicate whether formal notations could be prepared that are adequate to describe the kinds of applications currently being developed. Specific quantities that might be measured even in a non-controlled experiment with minimal impact on the development organization include:

1. Resources expended in developing formal specifications.
2. Errors in the formal specifications.
3. Tools for supporting formal specification development.
4. Acceptability of such specifications to programmers.

5. SUMMARY OF EVALUATION EXPERIMENT

In order to evaluate the proposed paradigm for experimentation, we have carried out a preliminary evaluation experiment. We performed this experiment to gain experience with the advocated paradigm and determine its practicality. In this section only a summary of the experiment is presented. A complete report will be supplied under separate cover [23]. The experiment is in the category of fully controlled, since all aspects were under our control.

The topic we chose to study was software inspections. We chose inspections because there is substantial evidence that they are highly effective at locating defects in software when carried out carefully. However, we suspected that improved techniques might be possible, and that the suitability and performance of new ideas in this area could only be determined empirically. The experimental procedure we followed was to: (1) study an industrial implementation of software inspections, (2) define a radically different approach to inspections that we hypothesized would be an improvement, (3) define a toolset that supports the advocated procedure, (4) implement a prototype version of the toolset for evaluation, (5) perform a set of trial inspections using the revised inspection approach supported by the prototype toolset, (6) revise the process and the toolset based on the results of the trial inspections, and (7) seek industrial partners to assess the technology in a practical context.

5.1. Existing Techniques

Software inspections have been employed for a long time in various forms. They have been referred to variously as walkthroughs, code readings, inspections, Fagan inspections [13], and audits. They have been applied to all work products that are generated during software development, including requirements specifications, designs, source code, and test plans. By far the most popular application of inspections is the examination of source code. The basic idea behind all of these techniques is for human readers to examine a work product and look for algorithmic defects. Procedures differ and the members of an inspection team differ according to the particular approach being applied, but all rely on human examination of a paper version of the inspection target. Empirical evidence has emerged showing that such activities, as part of a systematic software development process, can have considerable benefit [13]. Most of the benefit that accrues is a lowering of the rate of faults in the deployed software.
Since inspections typically take place before any form of verification, they can be highly cost effective because they eliminate algorithmic defects very early in the lifecycle. Despite this success, many major difficulties remain. We summarize three important ones here.

First, inspections are in no sense rigorous. This leads to situations in which, although a work product may have been inspected, it is not possible to specify the precise benefits achieved. In a statistical sense, inspections produce valuable results, but a given inspection does not necessarily ensure that a work product has any specific quality.

A second important difficulty is that the human resources involved are not used effectively. The process known as Fagan inspections, for example, includes a step in which the author of a work product presents an overview of the product to the inspection team. This is quite inappropriate since it suggests that vital design or implementation information about the product is conveyed to the inspectors verbally. Such information should be readily available in associated documents. As a second example, anecdotal evidence also suggests that inspectors often use inspection time ineffectively by discussing essentially trivial difficulties with the work product.

A third difficulty is the dependence of traditional inspection methods on human effort with essentially no computer support. It is possible to supplement the inspection process considerably with computer resources. This permits far more efficient use of human time and more complete coverage of the items that have to be inspected.

We take the position that inspections should be viewed as an approach to informal proof that a work product possesses certain properties. Further, we consider that the establishment of these properties should be undertaken with an approach that permits assurance that the properties exist for a given work product after an inspection. There should be as little dependence on statistical chance to achieve results as possible. This amounts to making inspections a rigorous process, and by doing so we suggest that they would be a far more valuable element of the software development process.

5.2. Phased Inspections

We have defined a new approach to inspections termed *phased inspections*. Phased inspections are intended to *ensure*, to the extent possible with this technology, that work products possess certain useful properties. These properties are not limited to freedom from algorithmic defects but include properties such as freedom from programming practices that tend to be associated with high rates of defects even if specific instances turn out to be correct. Other example properties include important elements of program style that are known to improve the maintainability of software. The goal with phased inspections is to make the process rigorous, repeatable, as efficient as possible, and as dependent on computer support as possible.

The concept of phased inspections is simple. It is only summarized here because of space limitations. A phased inspection consists of a series of partial inspections termed phases. Each phase addresses one or a small set of related properties that it is deemed desirable for the software to have. Phases are conducted in series, with each depending on the properties established in preceding phases. Each inspector associated with each phase is required to sign a statement after the phase that the software possesses the prescribed property to the best of his or her knowledge.
Each phase is carried out by an individual or a team, and the goal is to establish the presence of the desired property in the work product. To the extent possible, checklists are used to ensure that the required property has a precise definition. Some of the later phases of an inspection involve establishing correctness properties, and these cannot be based on statically defined checklists. Such properties are defined, to the extent possible, by checklists that are derived from the work product itself. For example, correctness in the definition and use of internal interfaces is based on checklists developed, according to prescribed rules, by the author of the work product.

5.3. Phased Inspection Support Toolset

Computer support for phased inspections is supplied by a set of tools that are presently in prototype form. The toolset provides service in three areas:

(1) *Support for management in controlling the inspection process.* This element of the toolset is designed to deal with configuration management of the work products, allocation of staff to the various inspection phases, and management information concerning the state of various inspections.

(2) *Support for inspectors.* Various tools are available to support the actual process of examining the work product. Some examples include a general display, scrolling, and searching facility that allows textual work products such as source code to be reviewed rapidly; a facility to permit inspectors to note their conclusions electronically; a syntax-based highlighting mechanism that permits various important syntactic structures to be made readily visible; and a display of the checklists and their associated background and justification information.

(3) *Support for compliance.* Where items are to be checked by human inspectors, it is essential that the checks be complete. Every instance of the item to be checked must actually be checked by the inspector. The compliance support facility monitors the inspector's use of the tool and the checklists, and ensures, to the extent possible, that the inspector is achieving complete coverage.

5.4. Trial Inspections

The key initial research questions about phased inspections were practical. First, it had to be determined whether the basic concept provides a useful benefit to software developers. Benefit is defined to be a cost-effective improvement in some aspect of software quality. The only way to answer this question is by experimentation. The second important research question was the degree to which the concept met its major goal of establishing rigor in the inspection process. In principle it does. The issue was whether this can be carried through to practice, and so, once again, the way to answer this question is by experimentation. Many other research questions exist, and all are best addressed in whole or in part by observing and measuring the ideas and tools in practice.

We performed an empirical study of phased inspections in order to get information on the feasibility and performance of the concept and the toolset. Development of the concept to the point where it can be applied readily to production software development requires extensive data on the feasibility of various aspects of the concept and performance data on the whole process. The preliminary experiment was limited by the available resources. Trial phased inspections were conducted by graduate students at the University of Virginia.
The subject of the inspections was the source code for the phased-inspection toolset itself, and the experiment focused on the feasibility of the process and the toolset. The results of these trial inspections led to extensive revisions to the toolset concept and minor changes to the process of phased inspections.

We have begun to develop a tailored phased-inspection process and toolset for Science Applications International Corporation (SAIC). This activity is in support of SAIC's work in Ada reuse, and will lead to an inspection process in which the reusability of Ada software components is determined. We will be using this activity to gather preliminary data on the use of phased inspections in an industrial setting.

5.5. Conclusions

The evaluation experiment is ongoing. The prototype toolset is being developed and plans are proceeding for industrial assessment of the technique and the toolset. The most significant conclusion that can be drawn at this time is that experimentation that attempts to define and evaluate new tools and techniques is workable and very beneficial. At this stage, phased inspections appear to be a substantially better technology than existing ones, and the toolset designed to support this technology appears to be highly successful.

BIBLIOGRAPHY
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940014956.pdf", "len_cl100k_base": 6009, "olmocr-version": "0.1.53", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 41183, "total-output-tokens": 8305, "length": "2e12", "weborganizer": {"__label__adult": 0.0003814697265625, "__label__art_design": 0.0003304481506347656, "__label__crime_law": 0.00040650367736816406, "__label__education_jobs": 0.002735137939453125, "__label__entertainment": 6.860494613647461e-05, "__label__fashion_beauty": 0.00018203258514404297, "__label__finance_business": 0.0002841949462890625, "__label__food_dining": 0.0003361701965332031, "__label__games": 0.00064849853515625, "__label__hardware": 0.0011262893676757812, "__label__health": 0.0005660057067871094, "__label__history": 0.000335693359375, "__label__home_hobbies": 0.0001246929168701172, "__label__industrial": 0.00046324729919433594, "__label__literature": 0.00036406517028808594, "__label__politics": 0.0002269744873046875, "__label__religion": 0.0004405975341796875, "__label__science_tech": 0.024383544921875, "__label__social_life": 0.00013196468353271484, "__label__software": 0.004589080810546875, "__label__software_dev": 0.96044921875, "__label__sports_fitness": 0.0003383159637451172, "__label__transportation": 0.0006442070007324219, "__label__travel": 0.00021445751190185547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37641, 0.06189]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37641, 0.65548]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37641, 0.93599]], "google_gemma-3-12b-it_contains_pii": [[0, 503, false], [503, 1258, null], [1258, 3255, null], [3255, 5249, null], [5249, 7289, null], [7289, 9252, null], [9252, 11289, null], [11289, 13131, null], [13131, 15392, null], [15392, 15962, null], [15962, 16668, null], [16668, 18420, null], [18420, 20456, null], [20456, 22070, null], [22070, 23748, null], [23748, 25871, null], [25871, 27941, null], [27941, 29864, null], [29864, 31678, null], [31678, 32794, null], [32794, 35341, null], [35341, 36986, null], [36986, 37641, null]], "google_gemma-3-12b-it_is_public_document": [[0, 503, true], [503, 1258, null], [1258, 3255, null], [3255, 5249, null], [5249, 7289, null], [7289, 9252, null], [9252, 11289, null], [11289, 13131, null], [13131, 15392, null], [15392, 15962, null], [15962, 16668, null], [16668, 18420, null], [18420, 20456, null], [20456, 22070, null], [22070, 23748, null], [23748, 25871, null], [25871, 27941, null], [27941, 29864, null], [29864, 31678, null], [31678, 32794, null], [32794, 35341, null], [35341, 36986, null], [36986, 37641, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], 
[5000, 37641, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37641, null]], "pdf_page_numbers": [[0, 503, 1], [503, 1258, 2], [1258, 3255, 3], [3255, 5249, 4], [5249, 7289, 5], [7289, 9252, 6], [9252, 11289, 7], [11289, 13131, 8], [13131, 15392, 9], [15392, 15962, 10], [15962, 16668, 11], [16668, 18420, 12], [18420, 20456, 13], [20456, 22070, 14], [22070, 23748, 15], [23748, 25871, 16], [25871, 27941, 17], [27941, 29864, 18], [29864, 31678, 19], [31678, 32794, 20], [32794, 35341, 21], [35341, 36986, 22], [36986, 37641, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37641, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
c2f70be5c30786459e5f318c2f4bb503b33e024a
Comparative Analysis of the Design and Implementation of Distributed Active Objects in RMI and CORBA Environments

Dohak Lee† · Shik Kim†† · Muyong Hyun†††

Abstract (translated from the Korean)

Distributed programming can be greatly simplified by language support for distributed communication. Many web browsers now provide various forms of active objects, and their number and types are growing rapidly. Java applets are one well-known kind of active object related to web browsers. This paper describes the implementation of distributed active objects, which are distributed across the Internet and can exchange information with one another. RMI and CORBA are the two major programming environments for implementing distributed active objects; they take different approaches and are not mutually compatible. To make the implementation issues concrete, we implemented a single application program in each of two environments: HORB, which adopts the RMI mechanism, and OrbixWeb2.0.1, which adopts the CORBA specification. Binding between distributed objects, inheritance, polymorphism, and object passing were the major implementation aspects considered. The results show that small differences in the implementation of distributed active objects can have a significant impact on how distributed applications are structured. The comparison between the applications implemented in the two programming environments will serve as the basis for building a translation system between applications implemented in each environment.

Comparison of Design and Implementation for Distributed Active Objects based on RMI and CORBA environment

Dohak Lee† · Shik Kim†† · Muyong Hyun†††

ABSTRACT

Distributed programming can be greatly simplified by language support for distributed communication. Many web browsers now offer some form of active objects, and the number and types of them are growing daily in interesting and innovative ways. Java applets are well known as one kind of active object related to web browsers. This paper focuses on distributed active objects, one kind of active object that can communicate with other active objects located on different machines across the Internet. Java RMI and CORBA IDL are two major programming environments for distributed active objects which are not compatible with each other. To make the discussion concrete, we introduce a single application as implemented in two environments: HORB, adopting the RMI mechanism, and OrbixWeb2.0.1, adopting the CORBA specification. Binding, inheritance, polymorphism, object passing, and callbacks across machine boundaries in distributed programming environments are the issues examined. The results show that some differences in the implementation of distributed active objects can have a significant impact on how distributed applications are structured. The comparison between the two implementations in these programming environments will be the basis for building a translation system from HORB to OrbixWeb and vice versa.

1. Introduction

In distributed computing, sockets have so far been used to communicate between two or more separate machines. A socket provides programs with a read/write interface to network protocols such as TCP/IP. Since this programming style is painful, many distributed programming languages have been studied [1, 2, 3, 4, 5, 6]. When we write a widely usable network program, we have to consider two important factors: portability and interoperability.

One of the most exciting recent developments in web-browser technology is active objects, where the browser downloads a program, executes it, and displays the program's user interface in a Web page. Sun's HotJava browser with Java applets pioneered active objects, showing Web pages with a wide range of content, from bouncing balls to spreadsheets to simulated science experiments. Most browsers now offer some form of active objects, written in a variety of languages. This paper focuses on how to implement distributed active objects [7] across the machine boundary, that is, active objects that can communicate with other active objects located on different machines across the Internet. High-level support for distributed computation makes it easy to write groupware, computer-supported cooperative work (CSCW) applications, and multiplayer games as active objects.
Common Object Request Broker Architecture (CORBA) and Java, two of the major object models, introduce different approaches to distributed computing. CORBA provides an infrastructure which enables invocations of operations on objects located anywhere on a network as if they were local to the application using them. Java introduces platform-independent, low-level code which, when integrated with World Wide Web protocols and browsers, results in applets, known as active objects. These two object models converge when a mapping is defined from CORBA's interface definition language, the Object Management Group Interface Definition Language (OMG IDL), to Java. When combined with a run-time system which supports this language mapping, the result is a Java Object Request Broker (Java ORB) [2, 3, 8]. An alternative approach enabling the invocation of methods on remote Java objects is provided by a mechanism called Remote Method Invocation (RMI) [4, 5]. The idea is to create objects whose methods can be invoked from another Java virtual machine. This approach provides remote procedure call (RPC) mechanisms for Java objects [6]. RMI mechanisms allow the optimization of protocols for communication between Java classes. However, the main advantage of the CORBA IDL approach over RMI is that it supports multiple programming languages. From the point of view of active objects, Java is a friendlier language to use than COBOL, C, or C++ and a good candidate to replace those languages in commercial software development. Java applications can access those legacy applications if they are wrapped in a Java object wrapper or a CORBA object, using the most appropriate language binding [1].

In this paper, we first examine some of the fundamental design and implementation issues in distributed active objects written in the RMI mechanism and the CORBA IDL environment, respectively, from the point of view of their impact on distributed applications. Our goal is not to praise or criticize either of the two programming systems, but to use the comparison to expose design issues in distributed active objects. Second, we study the basis for a translation system between two major, mutually incompatible distributed programming environments: Java extended with the RMI mechanism, and Java with CORBA IDL. To make our discussion more concrete, we introduce an example application, called WB.DAO (White-Board based on Distributed Active Objects), as implemented in two distributed programming environments: HORB and OrbixWeb 2.0.1, respectively. WB.DAO is a distributed program that manages 'whiteboards', which are windows that several workstation users can write or draw on simultaneously, with each seeing an up-to-date image of the whiteboard's state on his or her screen. HORB [4] is a representative example of the RMI mechanism, and OrbixWeb 2.0.1 [3] is a CORBA IDL implementation whose language support is limited to the Java language.

The scope of the paper is as follows. In Section 2, the distributed programming environments HORB and OrbixWeb are reviewed, and we describe how WB.DAO, a realistic application, is designed and implemented in each environment. In Section 3, we examine WB.DAO from the perspective of several issues: binding, remote object creation and connection, inheritance, object passing, polymorphism, and callbacks across the machine boundary. Our objective is simply to use HORB and OrbixWeb to help us examine the influence of different environments and their implementations on distributed programming. Section 4 summarizes our discussion.
2. WB.DAO as Evaluation Experiments

2.1 Programming environments for distributed active objects

Every distributed programming system, whether explicitly or implicitly, motivates a specific style of programming. In this section, we first review the distributed programming environments which have become the basis for many distributed active objects. The Object Management Group's CORBA (OMG CORBA) is a standardized specification for the creation of distributed, object-oriented software systems. This specification defines the functionality of an Object Request Broker (ORB). An ORB is software that allows developers to define objects which can be accessed across a network through clearly defined, high-level interfaces. The Interface Definition Language (IDL) is a standard language, defined by the OMG, for defining such interfaces. It provides a universal notation for specifying APIs. IDL supports library function interfaces just as well as distributed objects across a network [9]. The top six commercial CORBA ORB products are IONA's Orbix, IBM's SOM, Digital's ObjectBroker, Sun's DOE, HP's ORB Plus, and Expersoft's XShell. OrbixWeb implements Orbix, IONA's full implementation of the CORBA specification, in the Java language. In this paper, the objective is to examine the difference in programming style between the CORBA IDL and RMI mechanisms for distributed active objects. OrbixWeb 2.0.1 is selected as the CORBA programming environment because the RMI mechanism is dedicated to Java. Figure 1 shows the seven Java source files generated by the IDL compiler. Each generated file contains a Java class or interface which serves a specific role in application development.

(Fig. 1) Files generated by the OrbixWeb IDL compiler

_serverRef.java is a Java interface which defines the Java client view of the IDL interface. Server is a Java class which implements the methods defined in interface _serverRef and provides functionality which allows client method invocations to be forwarded to a server. ServerHolder is a Java class which defines a Holder type for class Server; this is required for passing Server objects as inout or out parameters to and from IDL operations. ServerOperations is a Java interface which maps the attributes and operations of the IDL definition to Java methods. _boaimpl_Server and _tie_Server are Java classes which allow server-side developers to implement the Server interface using the two techniques available in OrbixWeb: the BOAImpl and TIE approaches. _dispatcher_Server is a Java class used internally by OrbixWeb to dispatch incoming server requests to implementation objects.

An alternative approach enabling the invocation of methods on remote Java objects is provided by a mechanism called RMI. The idea is to create objects whose methods can be invoked from another Java virtual machine. Examples of the RMI mechanism are HORB and JavaSoft's RMI. In both cases, stub and skeleton classes are generated directly from a Java class identified as remote [5, 10]. HORB is a Java ORB that extends Java for distributed active objects. The HORB package consists of the HORBC compiler and the ORB runtime. To ensure operability on existing Java environments, the compiler and ORB are written entirely in Java and HORB, and execution is possible on an unmodified Java interpreter. All of Java's features can be accessed from an HORB program [4]. From the program which defines a Java class, the HORBC compiler creates a proxy class and a skeleton class, which includes the object code and stub of the class.
Figure 2 shows the classes generated by the HORBC compiler. HORB is a Java ORB that extends Java for distributed object-oriented computing, whereas in OrbixWeb 2.0.1 the CORBA IDL is transparent to the CORBA ORB, even though it also provides for distributed active objects. These different approaches have significant influences on programming style.

2.2 Distributed program for Whiteboard

To make our discussion more concrete, we introduce an example application as an evaluation experiment in the HORB and OrbixWeb 2.0.1 environments, respectively. Conventional objects provide three key properties: encapsulation, inheritance, and polymorphism. The 'whiteboard' is a good model for examining these properties across the machine boundary. Figure 3 shows the structure of WB.DAO. In this implementation, all of the data structures are implemented as distributed active objects. Furthermore, the system is organized in three somewhat separate parts. On the left side of Figure 3 is a whiteboard directory object. This object maintains a list with entries for the whiteboards that are currently in use. Each entry object, as shown in the centre of the figure, consists of references to a whiteboard object and its associated userlist object.

(Fig. 2) Classes generated by the HORBC compiler
(Fig. 3) The structure of the WB.DAO system

Each whiteboard object has references to its name and state objects. The right-hand side of Figure 3 shows the structure of a distributed program on the client node. Each client runs a copy of the client object, which provides the client interface. The client object has a reference to the entry in the whiteboard directory's list that contains the whiteboard it is using, and a reference to a whiteboard object that is the client's local copy. A user opens a whiteboard by specifying the whiteboard's name. The client object sends this name, together with a self reference, to the whiteboard directory. The directory examines its list for an entry with the named whiteboard. If there is an entry with a whiteboard of that name in the list, it will reply with a reference to that entry. Otherwise, it will create a new entry for a whiteboard of that name and return a reference to that entry. In either case, the directory adds a reference for the client to the whiteboard's userlist, and returns a whiteboard reference to the client for the client's use. Figure 4 shows the web-based WB.DAO system running on the client side. The client object, implemented as a distributed active object, communicates with other active objects located on the server across the Internet. Whenever a client draws some image on a whiteboard, it modifies the local whiteboard's state, along with the image on the user's display. The client then modifies the whiteboard object whose reference it received from the directory on the server. It does this by invoking that whiteboard object on the server. That whiteboard object then notifies the other clients of the changes by using the references on the userlist to contact them.

3. Comparison of distributed active objects in programming environments

There are two interface implementation approaches in OrbixWeb: the BOA (Basic Object Adapter) approach and the TIE approach. The TIE approach can be very useful when using a pre-existing Java class as an implementation class for an IDL interface. This is why WB.DAO was implemented with the TIE approach. We observe several differences in WB.DAO across the HORB and OrbixWeb programming environments. These differences result from inherent attributes of the RMI mechanism and the CORBA specification.
This section presents some of the issues in the design and implementation of distributed active objects for WB.DAO in the two programming environments. Binding, remote object creation and connection, inheritance, object passing, polymorphism, and callbacks across the machine boundary arise from inherent attributes of RMI and the CORBA specification, which are based on object-oriented computing. The Java code for each design issue is compared between the two environments, and some possibilities for translating it between them are described.

3.1 Binding

Binding is the process of establishing communication between two parties. Through binding, the parties determine the addresses of their partners and the protocols to be used for the communication. In a network, there are various choices for how and when binding is made, and these choices affect both the performance and flexibility of the application. Both programming environments have chosen dynamic binding, which permits flexibility. Here is an example of binding a client object to a remote server object in WB.DAO. In HORB, a remote server class is written as follows:

```java
class Server {
    Entry.Proxy openWhiteboard(String name, String userid) {
        Entry.Proxy entry;
        ...
    }
}
```

The above class is exactly the same as a Java class, even though the Entry.Proxy class is generated by the HORBC compiler. There is no need to add anything to the Java code. A program that binds to a Server object on a remote HORB process running on the machine galaxy.semyung.ac.kr is written like this:

```java
class Client {
    public static void main(String argv[]) {
        HorbURL url = new HorbURL("horb://galaxy.semyung.ac.kr/");
        Server.Proxy server = new Server.Proxy(url);
        server.Server();  // remote object creation
    }
}
```

The proxy class of Server is Server.Proxy. The instance server is a remote object reference. In the above example, the TCP/IP connection is established when the Server remote object is created.

In OrbixWeb, the first step in writing a distributed program is to define the interfaces to the application objects using IDL. Part of the interface to our WB.DAO can be defined as follows:

```idl
// WB-DAO.idl
interface Entry {
    ...
};
interface Server {
    ...
    Entry openWhiteboard(in string name, in string userid);
    ...
};
```

When the Server is created, it is assumed that the programmer assigns it the name 'WbSrv'. We find that _EntryRef and _ServerRef, generated by the IDL compiler, correspond straightforwardly to Entry.Proxy and Server.Proxy, generated by the HORBC compiler, in their functionality at the source-code level. As for the binding method, the _bind() call shown in the following class maps to the Server.Proxy("horb://galaxy.semyung.ac.kr/") instantiation shown in the class above. Binding is established by a _bind() method call in OrbixWeb, whereas it is provided by remote object instantiation in HORB. Binding via the Naming Service in OrbixWeb is beyond the scope of our study because HORB does not support this property.

```java
class ServerImpl implements _ServerOperations {
    _EntryRef openWhiteboard(String name, String userid) {
        _EntryRef entry;
        ...
    }
}

class Client {
    _ServerRef server;
    ...
    server = Server._bind("WbSrv", "galaxy.semyung.ac.kr");
    _EntryRef entry = server.openWhiteboard(...);
}
```

3.2 Remote Object Creation and Connection

Figure 5 illustrates the difference between remote object creation (left) and remote object connection (right). In the remote object connection model, all clients share one remote object.
While the clients can share data in the instance variables of the remote object, there is no room to keep a client's dedicated data, such as user names. Such data must be transferred in each remote method call or be stored in special data storage such as hashtables. In the remote object creation model, a remote object is created for each client. Thus, a remote object can hold client-specific data. Class variables of the remote object are used to share data between clients. HORB supports both models. OrbixWeb supports only the remote object connection model. We find that side effects, for example in object initialization, may occur when HORB code is mapped to OrbixWeb code.

3.3 Inheritance

In OrbixWeb, inheritance relationships between remote classes must be specified in the IDL for their distributed active objects. Implementation classes corresponding to interfaces specified in IDL should also maintain their inheritance. The following code is an example of inheritance in OrbixWeb.

```idl
// idl file
interface graphicObject { ... };
interface graphicLineObject : graphicObject { ... };
```

```java
// implementation class for graphicLineObject
class graphicLineObjectImpl extends graphicObjectImpl
        implements _graphicLineObjectOperations {
    ...
}
```

WB.DAO shows that the classes accessed through remote method calls in HORB must be declared in IDL to obtain inheritance across the machine boundary. The other parts of the implementation of inheritance map to each other, because both environments support inheritance in OOP, with the exception of multiple inheritance.

3.4 Polymorphism

Polymorphism is another key property in OOP. Polymorphism is a high-brow way of saying that the same method can do different things, depending on the class that implements it. WB.DAO would be expected to invoke the draw operation on just about any graphic object -- a square, or circle, or line, or whatever -- and have it draw itself on a screen. However, the client invoking the same operation on a set of objects actually results in different things happening, since each object has its own methods; for example, in the drawing operation a square needs a drawing position, height, and width, whereas a circle needs a drawing position and radius. HORB fully supports polymorphism between the client and server sites. The object type is correctly transferred even if it is cast. However, OrbixWeb does not provide any polymorphic property, since this property is still not defined in the CORBA 2.0 specification by the OMG [2]. The OrbixWeb version of WB.DAO shows the programming overhead of implementing this property using conventional control statements such as if or switch. The following code shows an example of how to implement it in OrbixWeb.

```java
public void appendNode(_graphicObject grobj) {
    switch (grobj.getid()) {
        case Line:
            line = graphicObjectLine._narrow(grobj);
            break;
        case Rectangle:
            rect = graphicObjectRectangle._narrow(grobj);
            break;
        ...
    }
}
```

3.5 Object Passing

In HORB, an object can be passed by value or by reference for the arguments or return value of a remote method. If the object to be transferred is a local object being sent to a remote object on another site, passing by value is used. If the object to be transferred is a remote object on one site being sent to a remote object on another site, passing by reference is used. However, OrbixWeb provides only passing by reference between objects located on different sites. In WB.DAO, the client object first modifies the local copy of the whiteboard's state, along with the image on the user's display, when a user writes to a whiteboard.
This is mainly a performance issue: it is most important that the drawing user have real-time feedback on his changes. In such a situation, the OrbixWeb version of WB.DAO requires additional programming: the local copy of a referenced remote object must be maintained explicitly. The following OrbixWeb class shows an example of a local copy made by the programmer.

```java
class ServerImpl implements _ServerOperations {
    ...
    void changeStatus(_graphicLineObjectRef g) {
        // creation of a local copy for the referenced remote object
        _graphicLineObjectRef localObj =
            new _tie_graphicLineObject(new graphicLineObjectImpl());
        localObj.set_currentColor(g.get_currentColor());
        ...
    }
}
```

3.6 Callbacks

A callback is a mechanism that enables a server to call methods of a client, and it is necessary for designing an application in the client/server model. WB.DAO shows that the client sends its changes to the server, which updates the global data structures and propagates the changes to the other users of that whiteboard. WB.DAO is implemented successfully in both environments. In HORB, the client object registers itself with the HORB server when it is initiated and needs to be called back. It then connects to a server object and sends a signal to the server as an invitation. On the server side, the server object accepts the invitation once the preparation for the callback in the client is finished. After the preparation for the callback between client and server, the server invokes a method on the client object -- the so-called callback -- in a manner identical to conventional method calls. The following classes show how to implement it on both sides in HORB.

```java
class Client {
    public static void main() {
        Server.Proxy server;
        ...
        // register the client object with the ORB server
        HORBServer.registerObject(...);
        ...
        // send a signal to invite the server
        server.invoke(...);
    }

    public void AppendNode(...) { ... }
}

class Server {
    public void Append(...) {
        Client.Proxy client;
        ...
        Server_Skeleton sk = (Server_Skeleton) HORBServer.getSkeleton();
        // accept the signal of preparation
        client = (Client.Proxy) sk.accept(0);
        ...
        // call back the client's method
        client.AppendNode(...);
    }
    ...
}
```

In OrbixWeb, the client object creates an additional thread which is dedicated to handling callback events. The server object receives an object reference from the client. When this object reference enters the server address space, a proxy for the client object is created. It is this proxy which the server will use to call back to the client. The following classes demonstrate how to implement it in OrbixWeb. In HORB, there is a facility, provided by the programming environment, to support callbacks, whereas in OrbixWeb the programmer must create an additional thread in the client object and pass the client object reference to the server, which is responsible for handling the callback. We find that the mapping of callbacks between the two environments is too complex, and further study is required to build the translation system.

```java
class ClientImpl implements _ClientRef {
    public static void main() {
        _ServerRef server;
        ...
        // start a thread for handling callback events
        ...
        // create a reference to the client object
        _ClientRef client = new _tie_Client(new ClientImpl());
        // bind to the Server object
        ...
        // pass the client object reference to the server
        server.Append(client);
    }

    public void AppendNode(...) { ... }
    ...
}

class ServerImpl implements _ServerOperations {
    public void Append(_ClientRef client, ...) {
        ...
        // call back the client's method
        client.AppendNode(...);
    }
    ...
}
```
4. Conclusion

The primary objective of this paper is to examine design and implementation issues in the construction of distributed active objects. A secondary objective is to study the basis for a translation system between the two major distributed programming environments: Java extended with the RMI mechanism, and Java with the CORBA specification. To make our discussion concrete, we examine a single application, called WB.DAO, as implemented in two environments: HORB, a representative of the RMI mechanism [4, 12, 13], and OrbixWeb 2.0.1, a commercial CORBA IDL implementation for Java [1, 3]. We present some of the issues in the design and implementation of distributed active objects for WB.DAO in the two environments. Binding, remote object creation and connection, inheritance, object passing, polymorphism, and callbacks across the machine boundary arise from inherent attributes of RMI and CORBA. There are three essential properties in distributed active objects: inheritance, polymorphism, and encapsulation. In both environments, inheritance across the network is fully supported. However, additional programming overhead is needed because OrbixWeb does not provide polymorphism, in contrast to HORB. As for encapsulation across the machine boundary, a programming environment should support binding, object creation/connection, and object-passing mechanisms to implement distributed active objects. OrbixWeb provides a Naming Service for binding, whereas HORB provides object passing by copy between remote objects. WB.DAO demonstrates that these properties can have a significant impact on how distributed active objects are designed and implemented.

RMI is an interesting approach for small and medium-sized applications completely implemented in Java. However, applications that require the integration of legacy components or the use of a particular programming language need a CORBA solution. CORBA IDL's separation of interface definitions from implementations, and its mappings to many programming languages, in combination with CORBA services, provide better support for large-scale applications. Even though CORBA IDL provides the integration of legacy components, there is no room to integrate RMI and CORBA IDL [2, 8]. This paper describes some possibilities for translating Java code implemented in the RMI environment to code implemented in the CORBA IDL environment. From the point of view of active objects, Java is a friendlier language to use than COBOL, C, or C++ and a good candidate to replace those languages in commercial software development. Java applications can access those legacy applications if they are wrapped in a Java object wrapper or a CORBA object, using the most appropriate language binding.

We find that the mapping of inheritance, and of binding without the Naming Service, is straightforward between the two environments. In the mapping of polymorphism and of object passing by copy, additional programming effort is required. WB.DAO shows that the callback mechanism is necessary to design an application in the client/server model. In HORB, there is a facility, provided by the programming environment, to support callbacks, whereas in OrbixWeb the programmer must create an additional thread in the client object and pass the client object reference to the server, which is responsible for handling the callback. We find that the mapping of callbacks between the two environments is too complex, and further study of callbacks is needed for the translation system.
The comparison between the two implementations in these programming environments will be the basis for building a translation system from HORB to OrbixWeb and vice versa. Indeed, the authors are currently working on developing such a translation system.

References

[4] Hirano Satoshi, HORB User's Guide (URL: http:
[5] Sun Microsystems, JDK 1.1 Documentation (URL: http://java.sun.com/)
{"Source-Url": "http://koreascience.or.kr:80/article/JAKO199715875839661.pdf", "len_cl100k_base": 6348, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 11759, "total-output-tokens": 7249, "length": "2e12", "weborganizer": {"__label__adult": 0.0002791881561279297, "__label__art_design": 0.00021529197692871096, "__label__crime_law": 0.0001976490020751953, "__label__education_jobs": 0.0004239082336425781, "__label__entertainment": 4.5299530029296875e-05, "__label__fashion_beauty": 9.632110595703124e-05, "__label__finance_business": 0.00015747547149658203, "__label__food_dining": 0.00021076202392578125, "__label__games": 0.0004265308380126953, "__label__hardware": 0.0007376670837402344, "__label__health": 0.0002789497375488281, "__label__history": 0.00017631053924560547, "__label__home_hobbies": 5.072355270385742e-05, "__label__industrial": 0.0002522468566894531, "__label__literature": 0.0001550912857055664, "__label__politics": 0.0001703500747680664, "__label__religion": 0.0003497600555419922, "__label__science_tech": 0.007007598876953125, "__label__social_life": 5.775690078735352e-05, "__label__software": 0.005489349365234375, "__label__software_dev": 0.982421875, "__label__sports_fitness": 0.00020742416381835935, "__label__transportation": 0.00035190582275390625, "__label__travel": 0.00016868114471435547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30443, 0.00756]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30443, 0.6992]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30443, 0.88977]], "google_gemma-3-12b-it_contains_pii": [[0, 1911, false], [1911, 5757, null], [5757, 8988, null], [8988, 11922, null], [11922, 15049, null], [15049, 17551, null], [17551, 19947, null], [19947, 23116, null], [23116, 25428, null], [25428, 29194, null], [29194, 30443, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1911, true], [1911, 5757, null], [5757, 8988, null], [8988, 11922, null], [11922, 15049, null], [15049, 17551, null], [17551, 19947, null], [19947, 23116, null], [23116, 25428, null], [25428, 29194, null], [29194, 30443, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30443, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30443, null]], "pdf_page_numbers": [[0, 1911, 1], [1911, 5757, 2], [5757, 8988, 3], [8988, 11922, 4], [11922, 15049, 5], [15049, 17551, 6], [17551, 19947, 7], [19947, 23116, 8], [23116, 25428, 9], [25428, 29194, 10], [29194, 30443, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30443, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
99c2b203bcb6a2a87303e43775b7cab90516106b
NBCE: A Neo4j-Based Content Extraction Algorithm in Threat Intelligence Web Pages

Xiaoyang Li\textsuperscript{1,a}, Mengming Li\textsuperscript{1,b}, Rongfeng Zheng\textsuperscript{2,c}, Anmin Zhou\textsuperscript{1,d} and Liang Liu\textsuperscript{1,e,*}

\textsuperscript{1}College of Cybersecurity, Sichuan University, Chengdu, Sichuan, China
\textsuperscript{2}College of Electronics and Information Engineering, Sichuan University, Chengdu, Sichuan, China
\textit{a.} shawnlee97@163.com, \textit{b.} limengmingx@sina.cn, \textit{c.} qswhs@foxmail.com, \textit{d.} zhouanmin@scu.edu.cn, \textit{e.} liangzhai18@163.com
*corresponding author: Liang Liu

\textbf{Keywords:} Main content extraction, threat intelligence, Neo4j, machine learning.

\textbf{Abstract:} Main content extraction is a technique widely used in web crawlers, search engines, and so on to extract the main content of web pages while discarding complementary and decorative components. By extracting the main content, irrelevant and redundant information can be ignored, reducing the complexity of data processing and improving the efficiency of further analysis. Among the existing methods tackling this problem, solutions are designed to satisfy the different requirements of various groups. For instance, companies specialized in content extraction tend to focus more on efficiency and accuracy, while others may concentrate more on practicality. In our proposed method, we present a novel neo4j-based content extraction algorithm (NBCE) for threat intelligence websites. The NBCE algorithm initially transforms the HTML source code into a tree structure. Then the triples extracted from the HTML tree are used to construct a graph in the neo4j database. Finally, by deciding whether a node is the main content node or not, the main content of the given web page can be extracted. The effectiveness of the proposed method is validated through a set of experiments conducted on a threat-intelligence-related dataset.

1. Introduction

With the increasing complexity of the World Wide Web, main content extraction, as an effective way to select valuable information from web pages while ignoring non-informative elements, deserves great attention. However, the detailed requirements of web page content extraction vary from one group to another. On the one hand, well-organized corporations with abundant human resources are able to process large numbers of samples with advanced and accurate algorithms. On the other hand, people in small businesses concentrate more on specific applications and therefore require task-oriented algorithms. For example, in cybersecurity, the extracted main content can be used to collect and analyze low-level IOCs [7, 8] and high-level IOCs [11] utilized by hackers. Therefore, main content extraction has turned out to be a topical issue in cybersecurity. What's more, in general web crawler applications, a great deal of time is spent parsing the HTML structure. However, for web crawlers aimed at particular fields, the workload is much lower, and they only have to be run several times to satisfy the specific requirements. The reason is that small enterprises do not engage in web page analysis themselves but depend on the main content extracted from web pages to support follow-up operations. Therefore, more attention is focused on accuracy and practicality than on processing time.
Moreover, it is difficult for them to design a content extraction algorithm based on annotated datasets, on account of their limited human resources and budgets. Consequently, a reliable algorithm with simple rules and high robustness is still sought after. In this paper, we propose a simple but reliable neo4j-based content extraction algorithm (NBCE) for the accurate and efficient extraction of cybersecurity-related web content. Our method consists of three parts. In the first step, the HTML source code is transformed into a simple graph in the neo4j database according to its main structure. Then, specific rules are applied to discard null nodes and reduce redundancy. Finally, we approach the issue of content extraction as a binary classification question, using machine learning on the set of nodes obtained from the former steps to extract the unique main content node, and thereby extract the main content of the given web page. Moreover, we perform 10-fold cross-validation on a set of experiments to evaluate our proposed method.

The rest of the paper is organized as follows. Section II reviews the related work tackling the task of main content extraction in web pages. In Section III, the proposed NBCE algorithm is explained in detail. The experiments to evaluate the performance of the novel method are described in Section IV. A brief conclusion and future explorations are presented in Section V.

2. Related Work

Main content extraction, an indispensable technique in many applications such as web content aggregation, web crawlers, and search engines, has attracted the attention of numerous researchers. There are many approaches aimed at this problem [5, 6, 14]. Kreuzer et al. [10] constructed a taxonomy of some existing methods according to the type of information used for extraction: textual information, visual information, the DOM tree, and so on. The main idea of regarding textual information as the chief gauge for content extraction is that the text density of the main content in a web page is much higher than that of other blocks. Weninger et al. [13] put forward Content Extraction via Tag Ratios (CETR), in which the ratio of characters to labels inside each label is calculated to serve as a determinant. Besides, in [3] a quantitative linguistic approach is employed, with consideration of other HTML metrics such as link density. Visual methods such as VIPS [4] and its derivatives [12] make use of the visual tree structure of HTML to extract the main content. By dividing the web page into blocks, the ratio of text nodes to leaf nodes in each block is calculated to decide whether it is the main content or not. In addition, Burget et al. [9] combined CSS features with the VIPS algorithm, which improves the performance to a certain degree. Bar-Yossef et al. [2] regarded the template in web pages as noise; they detected the template by analyzing the DOM tree and further extracted the main content. Many other methods based on the DOM tree [1, 15] have also turned out to be effective. Although many studies have been conducted, few of them take the relation between HTML code and graphs into consideration as our proposed NBCE algorithm does.

3. Method

We propose a Neo4j-Based Content Extraction algorithm (NBCE) to extract the main content from a given threat intelligence web page. The proposed NBCE approach can be divided into an HTML-Neo4j transformation phase, a compression phase, and a content extraction phase, as shown in Figure 1.
To transform HTML source code into a graph, we take advantage of its specific tree structure and extract the triples necessary for constructing a graph in the neo4j database. Based on the constructed graph, we make further improvements by performing node-level and branch-level compression to reduce redundancy. Finally, the processed nodes serve as the input to machine learning to train models and then extract the expected main content. The following subsections describe these steps in detail.

3.1. HTML-Neo4j Transformation

HTML presents predominantly unstructured data with structured tags. Triples necessary for constructing a graph can be extracted from related tags according to their hierarchical relationships. Algorithm 1 and Algorithm 2 present the details of the transformation phase. In Algorithm 1, all the nodes in the HTML tree are traversed to find the relation between two directly connected nodes. For the first-level tags under the original HTML source code, these tags and their corresponding sequences are recorded for further use. Then, similarly to the previous step, the subtags under the first-level tags and their sequences are also captured. Therefore, through this traversal, the relations between all nodes can be represented in the form [sub_node, "sub_tag", [children, number]], where "sub_tag" represents the relation between two directly connected nodes and "number" represents the sequence of the sub-nodes. Furthermore, in Algorithm 2, all the triple nodes in the HTML are aggregated. Hence the HTML source code can be transformed into the form of a tree structure. Figure 2 displays a transformation sample in detail. Finally, RDF triples are extracted and stored in the neo4j graph database to construct the graph of the given web page. The standard template of the RDF triple used in the neo4j database is described in Figure 3, where "src" and "dst" represent different nodes in the graph while "r" represents the relation between two nodes, i.e., an edge in the graph.

3.2. Node-level and Branch-level Compression

There exist two kinds of nodes connected directly to a null node in the generated graph, namely end nodes and branch nodes. As we can see from Figure 4, node B, without any child node, is connected to a null node, and the null node in turn is connected to another null node. Therefore, node B can be regarded as an end node. In Figure 5, by contrast, node C is connected to a series of leaf nodes at one end and to a null node at the other. Hence it is defined as a branch node. On account of the numerous null nodes brought about by HTML tags, we perform node-level and branch-level compression in our proposed approach. The main idea of compression is to discard unnecessary null nodes to reduce redundancy. More specifically, in node-level compression, the null node connected directly to an end node is discarded and the end node is then connected to the ancestor of the null node. Similarly, in branch-level compression, the null node connected directly to a branch node is removed and the branch node is connected to its ancestor. The detailed procedure is given in Algorithm 3. By removing these redundant nodes, we eliminate the influence of the data discretization brought about by HTML tags. Furthermore, because of the particular structure of HTML, nodes related to similar topics are always connected to the same or adjacent nodes. Therefore, the nodes that contain the main content must be connected to the same node directly or within a certain distance, which is the underlying principle of our approach.
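To make the transformation phase concrete, the following is a minimal Python sketch of the traversal in Algorithms 1 and 2, using BeautifulSoup as Algorithm 2 does; the function name and the exact triple layout are illustrative assumptions rather than the paper's actual implementation.

```python
from bs4 import BeautifulSoup

def build_triples(html):
    """Traverse the HTML tree and emit [parent, "sub_tag", [child, number]] triples,
    where number records the child's sequence under its parent."""
    soup = BeautifulSoup(html, "html.parser")
    triples = []

    def visit(node):
        # direct child tags of this node, in document order (text nodes have no name)
        children = [c for c in node.children if c.name is not None]
        for number, child in enumerate(children):
            triples.append([node.name, "sub_tag", [child.name, number]])
            visit(child)  # recurse into sub-tags, as Algorithm 1 does

    visit(soup)
    return triples

html = "<html><div><p>here is the reason:</p><p>1, we love peace</p></div></html>"
for triple in build_triples(html):
    print(triple)
```

Each emitted triple corresponds to one (src, r, dst) record following the template of Figure 3, which can then be written to the neo4j database.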
Figure 1: Processing phases for main content extraction.

```html
<html>
  <div>
    <p>here is the reason: </p>
    <p>1, we love peace</p>
    <p>2, we hate war</p>
  </div>
  <div>
    <p>domain-1</p>
    <p>ip-2</p>
  </div>
</html>
```

Figure 2: A sample of transforming HTML into the tree structure.

```json
result = {
    "src": {
        "html_tag": src_main_tag,
        "unique_id": src_unique_id
    },
    "r": {
        "name": "sub_tag"
    },
    "dst": {
        "html_tag": dst_main_tag,
        "context": dst_raw_context,
        "unique_id": dst_unique_id,
        "children_sequence": children_sequence
    }
}
```

Figure 3: The standard template of RDF triple.

3.3. Main Content Extraction

After the compression phase, redundant null nodes are discarded. Generally, in a graph, large numbers of nodes that contain the main content connect to one null node directly or within a certain distance; this node is therefore defined as the main content node. More specifically, there is only one content node in a web page. Besides, the content node extracted from a web page corresponds to all the tags containing the main content in the HTML structure. Therefore, according to the sequences recorded in the former steps, the main content can be recovered from the nodes connected to the content node. In this way, the problem of main content extraction is transformed into the problem of main content node extraction. In our approach, we treat the issue as a binary classification question, so the nodes can be classified into content nodes and non-content nodes. Traditionally, this kind of binary classification question can be solved by machine learning. Consequently, in our approach, we extract several features that distinguish content from non-content nodes to perform machine learning. The features we selected are based on the following rules for deciding whether a null node is the content node or not:

- The number of nodes connected directly to the null node.
- The average text length of the nodes connected directly to the null node.

Algorithm 1. Establish_triple_nodes
Input: a list of nodes and the triples collected so far
Output: the triples of the given HTML subtree

```plaintext
1  var_triple_node ← first-level nodes
2  for sub_node in var_triple_node do
3    if not hasChild(sub_node) then
4      continue
5    else
6      children_result = get_children_tags(sub_node)
7      number = 0
8      for children in children_result do
9        var_all_triple.append([sub_node, "sub_tag", [children, number]])
10       number = number + 1
11     end
12     var_all_triple = establish_triple_nodes(children_result, var_all_triple)
13   end
14 end
```

Algorithm 2. Build_tree_structure
Input: HTML source code
Output: the tree structure of the given HTML

```plaintext
1  var_all_triple = []
2  var_triple_node = get_children_tags(html)
3  soup = BeautifulSoup(html)
4  number = 0
5  for each triple_node in var_triple_node do
6    var_all_triple.append([soup, "sub_tag", [triple_node, number]])
7    number = number + 1
8  end
9  var_all_triple = establish_triple_nodes(var_triple_node, var_all_triple)
```

Algorithm 3. End/branch compression
Input: all the nodes
Output: compressed nodes

```plaintext
1  for each node in end/branch nodes do
2    if node→parent is a null node then
3      node→parent ← node→parent→parent
4    end
5  end
```

Figure 4: An example of an end node.
Figure 5: An example of a branch node.

On the one hand, the main content node always corresponds to more nodes, due to the particular structure of a website. On the other hand, some websites contain recommendation parts, which can also be transformed into graphs. However, as there is only one main content node for each web page, such a part, containing less information and shorter text, should not be regarded as the main content.
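The two features above can be computed per null node and fed to a classifier. Below is a minimal sketch assuming scikit-learn's MLPClassifier; the graph representation (a dict mapping each null node to the texts of its directly connected nodes), the feature vectors, and the labels are all invented for illustration and are not the paper's code or data.

```python
from sklearn.neural_network import MLPClassifier

def node_features(null_node, graph):
    """graph maps a null node id to the texts of its directly connected nodes."""
    texts = graph[null_node]
    count = len(texts)  # feature 1: number of directly connected nodes
    avg_len = sum(len(t) for t in texts) / count if count else 0.0  # feature 2
    return [count, avg_len]

# hypothetical labeled examples: [neighbor count, avg text length], 1 = content node
X = [[12, 85.0], [9, 60.2], [2, 10.5], [1, 4.0]]
y = [1, 1, 0, 0]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

graph = {"null_7": ["here is the reason:", "1, we love peace", "2, we hate war"]}
print(clf.predict([node_features("null_7", graph)]))  # [1] if judged the content node
```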
Additionally, in a graph transformed from a website, the number of content nodes is far smaller than that of non-content nodes. Therefore, in our proposed NBCE algorithm, we rely on the MLP model to perform the classification, which is explained in detail in Section IV. From the main content nodes identified by machine learning, the main content of the web page can be recovered using the sequences recorded in the former steps.

4. Experiment

To evaluate the effectiveness of our proposed NBCE algorithm, two experiments are conducted on the same cybersecurity-related dataset. The first confirms the advantage of our proposed algorithm, while the second is performed to select the most suitable machine learning model using extensively used measures such as precision, recall, and F1-score.

4.1. Dataset

To evaluate the effectiveness of our proposed method, we collected 100 web pages from 7 authoritative threat intelligence publishing platforms as the dataset. Each sample contains the source content (HTML) of the web page, which is then pre-processed to meet the requirements of the neo4j database; finally, the HTML is transformed into a graph. All the nodes that match the criterion described in Section III are labeled as main content nodes, and the rest as non-content nodes. To perform the 10-fold cross-validation, we then divide the dataset into four parts, in which 75% is used for training and the remainder for testing. In addition, traditional measures such as precision, recall, F1-score, and processing time are used to evaluate the performance of the classifiers.

4.2. Experimental Design

To further illustrate the advantages of our proposed method NBCE, another extensively used method, VIPS, described in [4], is run on the same dataset for comparison. Unlike our method, VIPS is based on visual layout and the ratio of text nodes to leaf nodes. In the VIPS method, the web page is initially divided into several blocks according to its visual content. Then, in each block, the ratio of text nodes to leaf nodes is calculated to decide whether the given block is the main content or not. The algorithm is regarded as effective only when the extracted content covers 95% of the manually selected main content. The comparison is conducted in terms of precision and processing time. Another experiment is designed to evaluate the suitability of the chosen MLP classifier. By comparing it with other classifiers such as SVM and Naive Bayes in terms of precision, recall, and F1-score, the most suitable one is chosen.

4.3. Experimental Result

We use three-quarters of the dataset as the training set and the rest as the testing set to perform 10-fold cross-validation. The trained model is then used to extract the main content of a given web page. Table 1 displays the extraction results of our proposed method and the vision-based method VIPS [4]. The results include the precision rate and the average time cost per web page. As we can see from the experimental results, our proposed NBCE method has better precision. Although this comes at the cost of processing time, the precision of our method is 7 percentage points higher than that of VIPS.

Table 1: The result of the proposed algorithm and VIPS.
<table>
<thead>
<tr> <th></th> <th>Precision</th> <th>Cost of time</th> </tr>
</thead>
<tbody>
<tr> <td>NBCE</td> <td>93%</td> <td>617 ms</td> </tr>
<tr> <td>VIPS</td> <td>86%</td> <td>425 ms</td> </tr>
</tbody>
</table>

Table 2: The performance of different classifiers.

<table>
<thead>
<tr> <th>Models</th> <th>Precision</th> <th>Recall</th> <th>F1-score</th> </tr>
</thead>
<tbody>
<tr> <td>MLP</td> <td>92%</td> <td>78%</td> <td>84%</td> </tr>
<tr> <td>SVM</td> <td>70%</td> <td>84%</td> <td>76%</td> </tr>
<tr> <td>Naïve Bayes</td> <td>50%</td> <td>32%</td> <td>39%</td> </tr>
</tbody>
</table>

However, in task-oriented corporations, precision is more important than processing time. With our method, precision reaches 93%, 7 percentage points higher than VIPS, whereas the added delay is no more than 200 ms. This degree of delay is therefore acceptable. As to the performance of the different classification models, Table 2 illustrates the gap between the three classifiers in the given scenario. As we can see from the results, the precision of MLP reaches 92%, outperforming SVM and Naïve Bayes. Besides, the F1-score of MLP is 8 and 45 percentage points higher than those of SVM and Naïve Bayes, respectively. Therefore, we can conclude that MLP is the most suitable of the three classifiers for extracting the main content of a web page in the given scenario. A possible reason is that MLP is characterized by fast convergence on small datasets. Additionally, we assume that the limited size of our dataset may also have an influence; that is to say, when applied to larger datasets, other models may perform better.

5. Conclusions

In this paper, we have proposed a method named NBCE to extract the main content from threat intelligence web pages based on the graph database neo4j. It makes full use of the tree structure of HTML and transforms the HTML source code into a graph in the neo4j database. Regarding the problem of main content extraction as a binary classification problem, several features are combined to decide whether a node is a content node or not. A set of experiments shows that our method achieves competitive results on the threat intelligence dataset, reaching 93% precision. The results also reflect that, in terms of classification models, the MLP model outperforms other traditional models in the main content extraction task. Given the high precision on threat intelligence, we suggest that when applied to other particular websites, such as pharmaceutical sites, our NBCE algorithm will also perform well. Future work could expand the dataset as well as improve performance in terms of efficiency.

References
{"Source-Url": "https://www.clausiuspress.com/conferences/ACSS/CNCI%202020/CNCI040.pdf", "len_cl100k_base": 4351, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 18124, "total-output-tokens": 5844, "length": "2e12", "weborganizer": {"__label__adult": 0.0004448890686035156, "__label__art_design": 0.000675201416015625, "__label__crime_law": 0.001911163330078125, "__label__education_jobs": 0.0010595321655273438, "__label__entertainment": 0.0002086162567138672, "__label__fashion_beauty": 0.0002428293228149414, "__label__finance_business": 0.0006422996520996094, "__label__food_dining": 0.00038313865661621094, "__label__games": 0.0009002685546875, "__label__hardware": 0.0014123916625976562, "__label__health": 0.000736236572265625, "__label__history": 0.0004732608795166016, "__label__home_hobbies": 0.00013017654418945312, "__label__industrial": 0.0006890296936035156, "__label__literature": 0.0006575584411621094, "__label__politics": 0.00057220458984375, "__label__religion": 0.00051116943359375, "__label__science_tech": 0.318603515625, "__label__social_life": 0.0001832246780395508, "__label__software": 0.0733642578125, "__label__software_dev": 0.59521484375, "__label__sports_fitness": 0.0002608299255371094, "__label__transportation": 0.0004024505615234375, "__label__travel": 0.00021255016326904297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23401, 0.0365]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23401, 0.54428]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23401, 0.88744]], "google_gemma-3-12b-it_contains_pii": [[0, 2731, false], [2731, 6649, null], [6649, 10396, null], [10396, 11189, null], [11189, 13541, null], [13541, 14421, null], [14421, 17614, null], [17614, 21630, null], [21630, 23401, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2731, true], [2731, 6649, null], [6649, 10396, null], [10396, 11189, null], [11189, 13541, null], [13541, 14421, null], [14421, 17614, null], [17614, 21630, null], [21630, 23401, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23401, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23401, null]], "pdf_page_numbers": [[0, 2731, 1], [2731, 6649, 2], [6649, 10396, 3], [10396, 11189, 4], [11189, 13541, 5], [13541, 14421, 6], [14421, 17614, 7], [17614, 21630, 8], [21630, 23401, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23401, 0.0596]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
b146d9d28e36c971cf19fd858c18307689c5aeca
PUTTING IT ALL TOGETHER

We have seen a great deal of the structure at the edge of the cloud, and how it might "talk" to data sources of various kinds. Tasks like machine learning and "big data" analytics occur at the back end. How are these connected together?

- Some tier-two µ-services store data into files or append-only logs
- Periodically (the delay depends on the use) we process in batches.
… we batch the operations because of the massive scale.
… the idea is to compute "once" but answer many questions!

For example, imagine a smart city. We could collect a **batch** of new location data, then update all the "people locations" in some database in a single (parallel) computation.
- One computation reads a lot of files, and updates many data fields.
- The pattern is typical of today's cloud.

Due to its delays, batched computing is not ideal for instant reaction. … But it is adequate for tasks where a short delay is fine.

Why Batch?

The core issue is overhead. Doing things one by one incurs high overheads. Updating data in a batch pays the overhead once on behalf of many events, hence we "amortize" those costs. The advantage can be huge. But batching must accumulate enough individual updates to justify running the big parallel batched computation. Tradeoff: delay versus efficiency.

Petabytes of data need to be processed or accessed every day
➢ 300 billion Google searches per day
➢ 300 million photos uploaded on Facebook per day

The nature of the data causes it to be massive: the 3 V's
➢ Volume, Velocity, and Variety

The volume of unstructured data exploded in the past decade
➢ By 2020, it will be 53 ZettaBytes (53 trillion gigabytes) -- an increase of 10 times in 15 years (it was below 5 ZB until 2003)

COMMON BIG DATA USE CASES
➢ Extract/Transform/Load (ETL) -- huge data warehouses
➢ Text Mining
➢ Graph Creation and Analysis
➢ Prediction Models
➢ Analytics

To exploit batching, Google, Facebook, etc., precompute the results for anticipated search queries. They analyze data in batches, then cache the results.

BIG DATA PROCESSING?

The nature of the data forces us to use massive parallelism
➢ Recall: huge Volume, high Velocity, and Variety

Traditional single-server systems are far too weak for processing petabytes of data -- insufficient compute and storage. The only option: distribute the data, and obtain parallelism with multiple servers.

TRADITIONAL DISTRIBUTED SYSTEMS

The Data Bottleneck:
➢ Data was historically first stored in a central location
➢ … then copied to processors at runtime
➢ Fine for limited amounts of data; breaks with massive data sets

Solution: a new style in which we process huge numbers of data files in parallel -- BigData systems (e.g., Apache Hadoop, Apache Spark)

BIG DATA SYSTEMS

Two key ideas
➢ Distribute data right from the outset, when the data is initially stored
➢ Bring computation to the data rather than sending data to the computation

Scalable and economical data storage, processing, and analysis
➢ Distributed and fault-tolerant
➢ Harness the power of industry-standard hardware
➢ Heavily inspired by open-source technologies (HDFS, HBase, etc.)
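To make the "Why Batch?" overhead argument above concrete, here is a small illustrative Python sketch; the cost constants are invented for illustration, not measurements of any real system.

```python
# Illustrative cost model: every operation pays a fixed overhead (e.g., RPC,
# scheduling, disk seek) plus a small per-record cost. All numbers are made up.
OVERHEAD_MS = 50.0      # fixed cost paid per operation
PER_RECORD_MS = 0.1     # marginal cost per record processed

def one_by_one(n_records):
    # each record pays the full overhead by itself
    return n_records * (OVERHEAD_MS + PER_RECORD_MS)

def batched(n_records, batch_size):
    # the overhead is amortized over batch_size records
    n_batches = -(-n_records // batch_size)  # ceiling division
    return n_batches * OVERHEAD_MS + n_records * PER_RECORD_MS

n = 1_000_000
print(one_by_one(n) / 1000, "s one by one")       # ~50100 s
print(batched(n, 10_000) / 1000, "s in batches")  # ~105 s
```

The larger the batch, the better the amortization -- but the longer new events wait before their batch runs, which is exactly the delay-versus-efficiency tradeoff.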
A TYPICAL BIG DATA SYSTEM

Popular BigData systems: Apache Hadoop, Apache Spark

APACHE HADOOP ECOSYSTEM

(Stack diagram, top to bottom:)
➢ Applications: MapReduce, Hive, Pig, Spark Streaming, and others
➢ Yet Another Resource Negotiator (YARN)
➢ Hadoop NoSQL Database (HBase)
➢ Hadoop Distributed File System (HDFS), running on the cluster
➢ Data ingest systems, e.g., Apache Kafka, Flume, etc.

HADOOP ECOSYSTEM
• HDFS, HBase
• Yet Another Resource Negotiator (YARN)
• MapReduce, Hive
• Kafka

HADOOP DISTRIBUTED FILE SYSTEM (HDFS)

HDFS is the storage layer for the Hadoop BigData system. HDFS is based on the Google File System (GFS). It is a fault-tolerant distributed file system, designed to turn a computing cluster (a large collection of loosely connected compute nodes) into a massively scalable pool of storage. It provides redundant storage for massive amounts of data -- scaling up to 100 PB and beyond.

HDFS IS FOR BATCH PROCESSING

Designed for batch processing rather than interactive use: high throughput of data access rather than low latency. HDFS must never lose any data (resilience). It sits on top of native file systems (e.g., xfs, ext3, ext4) -- HDFS is written in Java, i.e., JVM → OS (file system) → storage (disks). Write-once read-many (or append-only) paradigm → batch processing → high throughput.
• Data is distributed when stored, and computation moves to the data
• Minimizes network congestion and increases throughput

HDFS: KEY FEATURES
- Scalable: HDFS is designed for massive scalability, so you can store unlimited amounts of data in a single platform.
- Flexible: Store data of any type -- structured, semi-structured, unstructured -- without any upfront modeling.
- Reliable: Multiple copies of your data are always available, for access and for protection from data loss (built-in fault tolerance).

HDFS: ARCHITECTURE

Master/slave architecture. The availability of the NameNode is critical. The NameNode executes file system operations such as opening and closing files. The DataNodes (slaves) are responsible for serving read and write requests from the file system's clients.
Image source: https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html

A data file is split into contiguous chunks, typically 64 MB in size, distributed at load time. Each chunk is replicated on multiple "data" nodes (usually 3x). The "name" node for a file stores metadata, the location of all chunks, etc. HDFS is optimized for large, streaming reads of files (rather than random reads). Files are "write once" -- no random writes to files are allowed -- because of HDFS's batch roots, it was designed to handle only append-only formats. How should the number of replicas (the replication factor) be chosen?

HDFS: READING DATA

The client sends a request to the NameNode to read a file. The NameNode determines which blocks are involved and chooses the most efficient access path. The client then accesses the blocks using the addresses provided by the NameNode.

HDFS: WRITING DATA (1) Get a Lease → Write Data → Close the Lease

Getting the lease:
- The client sends a request to the NameNode to create a new file.
- The NameNode determines how many blocks are needed, and the client is granted a lease for creating these new file blocks in the cluster.

HDFS: WRITING DATA (2) Get a Lease → Write Data → Close the Lease

Writing the data:
• The client then writes the first copies of the file blocks to the slave nodes, using the lease assigned by the NameNode.
• As each block is written to HDFS, a special background task duplicates the updates to the other slave nodes identified by the NameNode.
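As a toy illustration of the chunking and 3x replication just described, the Python sketch below splits a file into 64 MB blocks and assigns each block to three distinct data nodes. The round-robin placement is a deliberate simplification of HDFS's real rack-aware policy, and all names are invented. (The final step of the write protocol follows the sketch.)

```python
BLOCK_SIZE = 64 * 1024 * 1024   # 64 MB chunks, as in the slides
REPLICATION = 3                 # typical replication factor

def place_blocks(file_size, data_nodes):
    """Split a file into blocks and assign each block to REPLICATION distinct nodes."""
    n_blocks = -(-file_size // BLOCK_SIZE)  # ceiling division
    placement = {}
    for b in range(n_blocks):
        # simplified round-robin placement; real HDFS is rack-aware
        placement[b] = [data_nodes[(b + r) % len(data_nodes)]
                        for r in range(REPLICATION)]
    return placement

nodes = ["dn1", "dn2", "dn3", "dn4", "dn5"]
for block, replicas in place_blocks(200 * 1024 * 1024, nodes).items():
    print(f"block {block} -> {replicas}")  # the NameNode would record this metadata
```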
HDFS: SOME LIMITATIONS

Not appropriate for real-time, low-latency processing -- a file must be closed before its data becomes visible, so a real-time task would be forced to create too many small files. Centralized metadata storage also makes the NameNode a single point of failure. (Discussion question: how is the file system metadata persisted?)

HADOOP DATABASE (HBASE)

A NoSQL database built on HDFS. A table can have thousands of columns. HBase supports very large amounts of data and high throughput, offers strong consistency, and provides random, low-latency access. HBase is based on Google’s Bigtable: a NoSQL distributed database/map built on top of HDFS, designed for distribution, scale, and speed.

Relational database (RDBMS) vs NoSQL database:
- RDBMS → vertical scaling (expensive) → not appropriate for BigData
- NoSQL → horizontal scaling / sharding (cheap) → appropriate for BigData

RDBMS VS NOSQL (1)
• BASE not ACID:
➢ RDBMS (ACID): Atomicity, Consistency, Isolation, Durability
➢ NoSQL (BASE): Basically Available, Soft state, Eventual consistency
• The idea is that by giving up ACID constraints, one can achieve much higher availability, performance, and scalability
➢ e.g., most of these systems call themselves “eventually consistent”, meaning that updates are eventually propagated to all nodes

RDBMS VS NOSQL (2)
• NoSQL (e.g., CouchDB, HBase) is a good choice for hundreds of millions or billions of rows
• RDBMS (e.g., MySQL) is a good choice for a few thousand to a few million rows
• NoSQL → eventual consistency (e.g., CouchDB) or strong consistency (HBase)

## HBASE: DATA MODEL (1)

| Row key | info:name | info:age | comp:base | comp:stocks |
|---------|-----------|----------|-----------|-------------|
| 121     | ‘tom’     | ‘28’     | ‘125k’    |             |
| 145     | ‘bob’     | ‘32’     | ‘110k’    | ‘50’ (ts=2012), ‘100’ (ts=2014) |

- **Row keys**
- **Columns**
- **Cells**

HBASE: DATA MODEL (2)
• Sorted rows: supports billions of rows
• Columns: supports millions of columns
• Cell: the intersection of a row and a column
➢ Can hold multiple time-stamped versions of a value
➢ Can be empty -- empty cells incur no storage or processing overhead
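To make the data model concrete, here is a hedged sketch using the HBase Java client API (HBase 1.x/2.x). The table name "employees" and the assumption that the column families info and comp were created beforehand are ours, not the lecture's:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDataModelDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("employees"))) {

            // Cells are addressed by (rowkey, column family, qualifier);
            // each write creates a new timestamped version of the cell.
            Put put = new Put(Bytes.toBytes("145"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("bob"));
            put.addColumn(Bytes.toBytes("comp"), Bytes.toBytes("stocks"), Bytes.toBytes("100"));
            table.put(put);

            // Random, low-latency read of a single row by rowkey.
            Result row = table.get(new Get(Bytes.toBytes("145")));
            System.out.println(Bytes.toString(
                row.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
        }
    }
}
```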
## HBASE: TABLE

| Unique id | Name | price | weight | store1 | store2 | store3 |
|---|---|---|---|---|---|---|
| “1000000” | snickers | $9.99 | 4 Oz | Yes | Yes | Yes |
| “3000000” | almonds | $9.99 | 8 Oz | Yes | No | Yes |
| “8000000” | coke | $9.99 | 16 Oz | Yes | Yes | Yes |
| “4000000” | foo | $34.63 | 16 Oz | No | Yes | Yes |
| “5000000” | bar | $22.54 | 16 Oz | Yes | Yes | Yes |
| “9000000” | new1 | $2.5 | 16 Oz | Yes | Yes | Yes |
| “7000000” | new2 | $6.4 | 16 Oz | Yes | Yes | Yes |
| “2000000” | new3 | $6.4 | 16 Oz | Yes | Yes | Yes |

# HBASE: HORIZONTAL SPLITS (REGIONS)

### Region 1: ["", "5000000")

| Row Key | Name | brand | price | weight | store1 | store2 | store3 |
|---|---|---|---|---|---|---|---|
| "1000000" | snickers | xxx | $9.99 | 4 Oz | Yes | Yes | Yes |
| "2000000" | new3 | xxx | $6.4 | 16 Oz | Yes | Yes | Yes |
| "3000000" | almonds | xxx | $9.99 | 8 Oz | Yes | No | Yes |
| "4000000" | foo | xxx | $34.63 | 16 Oz | No | Yes | Yes |

### Region 2: ["5000000", "")

| Row Key | Name | brand | price | weight | store1 | store2 | store3 |
|---|---|---|---|---|---|---|---|
| "5000000" | bar | xxx | $22.54 | 16 Oz | Yes | Yes | Yes |
| "7000000" | new2 | xxx | $6.4 | 16 Oz | Yes | Yes | Yes |
| "8000000" | coke | xxx | $9.99 | 16 Oz | Yes | Yes | Yes |
| "9000000" | new1 | xxx | $2.5 | 16 Oz | Yes | Yes | Yes |

## HBASE ARCHITECTURE (REGION SERVER)

Region 1 is hosted on RegionServer 12:

| Row Key | Name | price | weight | .... |
|---|---|---|---|---|
| “1000000” | snickers | $9.99 | 4 Oz | .... |
| “2000000” | new3 | $6.4 | 16 Oz | .... |
| “3000000” | almonds | $9.99 | 8 Oz | .... |
| “4000000” | foo | $34.63 | 16 Oz | .... |

Region 2 is hosted on RegionServer 7:

| Row Key | Name | price | weight | .... |
|---|---|---|---|---|
| “5000000” | bar | $22.54 | 16 Oz | .... |
| “7000000” | new2 | $6.4 | 16 Oz | .... |
| “8000000” | coke | $9.99 | 16 Oz | .... |
| “9000000” | new1 | $2.5 | 16 Oz | .... |

# HBASE ARCHITECTURE

The logical table above (keyed by unique id, in insertion order) is physically stored with the columns grouped into column families and the rows sorted by rowkey:

| Row Key | info: name | info: price | info: weight | availability: store1 | availability: store2 | availability: store3 |
|---|---|---|---|---|---|---|
| “1000000” | snickers | $9.99 | 4 Oz | Yes | Yes | Yes |
| “2000000” | new3 | $6.4 | 16 Oz | Yes | Yes | Yes |
| “3000000” | almonds | $9.99 | 8 Oz | Yes | No | Yes |
| “4000000” | foo | $34.63 | 16 Oz | No | Yes | Yes |
| “5000000” | bar | $22.54 | 16 Oz | Yes | Yes | Yes |
| “7000000” | new2 | $6.4 | 16 Oz | Yes | Yes | Yes |
| “8000000” | coke | $9.99 | 16 Oz | Yes | Yes | Yes |
| “9000000” | new1 | $2.5 | 16 Oz | Yes | Yes | Yes |

HBASE ARCHITECTURE: COLUMN FAMILY

The info column family:

| Row Key | info: name | info: price | info: weight |
|---|---|---|---|
| “1000000” | snickers | $9.99 | 4 Oz |
| “2000000” | new3 | $6.4 | 16 Oz |
| “3000000” | almonds | $9.99 | 8 Oz |

The available column family:

| Row Key | available: store1 | available: store2 | available: store3 |
|---|---|---|---|
| “1000000” | Yes | Yes | Yes |
| “2000000” | Yes | Yes | Yes |
| “3000000” | Yes | No | Yes |

HBASE ARCHITECTURE: COLUMN FAMILY (3)
• Data (column families) is stored in separate files (HFiles)
• Performance can be tuned per column family:
➢ In-memory
➢ Compression
• These options need to be specified by the user

HBASE CONCEPTS (1)

HBase is a key-value store designed for distribution, scale, and speed:
- Data that is accessed together is stored together → faster scaling
- Grouping the data by key (here, the rowkey) is central to running on a cluster and to sharding -- the key acts as the atomic unit for updates
- Each record/row is indexed by a key (the rowkey) that you can use for lookup; the rowkey is like a primary key in a relational database
- Records in HBase are stored in sorted order according to the rowkey -- a critical semantic used in HBase schema design

HBASE CONCEPTS (2)

Horizontal splits/sharding:
- Tables are divided into sequences of rows, by “key range”, called regions.
- These regions are then assigned to the (HDFS) data nodes in the cluster, called “RegionServers” → preserving data locality.
- This scales read and write capacity by spreading the regions across the cluster.
- In effect, HBase maps (rowkey, column family, column, timestamp) to a “value”.

HBASE ARCHITECTURE (1)

HBase is composed of three types of servers in a master/slave architecture: Region Server, HBase Master, and ZooKeeper.

Region Server:
- Clients communicate with RegionServers (slaves) directly to access data.
- Serves data for reads and writes.
- Region servers are placed on the HDFS data nodes to preserve data locality.

HBase Master: coordinates the region servers and handles DDL operations (create and delete tables).

ZooKeeper: HBase uses ZooKeeper as a distributed coordination service to maintain server state in the cluster.

HOW DO THESE COMPONENTS WORK TOGETHER?

Region servers and the active HBase Master connect with a session to ZooKeeper. A special HBase catalog table, the META table, holds the locations of the regions in the cluster; ZooKeeper stores the location of the META table itself. The META table keeps a list of all regions in the system and acts like a B-tree index. The client first obtains from ZooKeeper the region server that hosts the META table, then queries (get/put) that META server to find the region server responsible for the rowkey it wants to access, and finally fetches the row from that region server.

ZOOKEEPER: THE COORDINATOR
- Maintains region server state in the cluster
- Provides server failure notification
- Uses consensus to guarantee common shared state

HBASE: SOME LIMITATIONS

Not ideal for large objects (>50 MB per cell), e.g., videos -- the problem is “write amplification”: when HDFS reorganizes data to compact large unchanging data, extensive copying occurs. Also not ideal for storing data chronologically (with time as the primary index): e.g., machine logs organized by timestamp cause write hot-spots.

HBASE VS HDFS

HBase is a NoSQL distributed store layered on top of HDFS, for faster random, real-time read/write access to the big data stored in HDFS.

**HBase**
- Stores data as key-value pairs, in a columnar fashion; records are stored sorted by rowkey, and sequential scans over key ranges are common
- Provides low-latency access to small amounts of data from within a large data set
- Provides a flexible data model

**HDFS**
- Stores data as flat files
- Optimized for streaming access of large files -- doesn’t support random reads/writes
- Follows the write-once read-many model
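To illustrate the "low-latency access to small amounts of data" point, here is a hedged sketch of a rowkey-range scan over the products table from the region-split example above. It assumes the HBase 2.x client API; the table name "products" and its existence are our assumptions:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ProductRangeScan {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("products"))) {

            Scan scan = new Scan()
                .withStartRow(Bytes.toBytes("3000000"))   // inclusive
                .withStopRow(Bytes.toBytes("8000000"));   // exclusive

            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {                // rows arrive in sorted order
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```

Because rows are sorted and range-partitioned by rowkey, this scan touches only the regions overlapping ["3000000", "8000000") -- Region 1 and Region 2 in the split shown earlier.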
YET ANOTHER RESOURCE NEGOTIATOR (YARN)

➢ YARN is a core component of Hadoop; it manages all the resources of a Hadoop cluster.
➢ Using selectable criteria such as fairness, it allocates the cluster's resources to multiple data processing jobs:
○ Batch jobs (e.g., MapReduce, Spark)
○ Streaming jobs (e.g., Spark Streaming)
○ Analytics jobs (e.g., Impala, Spark)

HADOOP ECOSYSTEM (RESOURCE MANAGER)

In the ecosystem stack, YARN is the resource-management layer, sitting between the processing frameworks (MapReduce, Hive, Pig, Spark Streaming, and other applications) and the storage layer (HDFS and HBase), alongside data ingest systems such as Apache Kafka and Flume.

YARN CONCEPTS (1)

Container:
- YARN manages resources through an abstraction called a container: a unit of computation on a slave node, i.e., a certain amount of CPU, memory, disk, etc. (similar in spirit to the Mesos container model).
- A single job may run in one or more containers -- a set of containers would be used to encapsulate a highly parallel Hadoop job.
- The main goal of YARN is to allocate containers effectively among multiple data processing jobs.

YARN CONCEPTS (2)

Three main components of YARN: Application Master, Node Manager, and Resource Manager (a.k.a. the YARN daemon processes)
➢ Application Master:
○ A single instance per job.
○ Spawned within a container when a new job is submitted by a client.
○ Requests additional containers for handling any sub-tasks.
➢ Node Manager: a single instance per slave node; responsible for monitoring and reporting the status of all local containers on that slave node.
➢ Resource Manager: arbitrates system resources between competing jobs. It has two main components:
○ Scheduler (global scheduler): responsible for allocating resources to jobs, subject to the familiar constraints of capacities, queues, etc.
○ Application Manager: responsible for accepting job submissions; also provides the service for restarting the ApplicationMaster container on failure.

How do the components of YARN work together? (Image source: http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/YARN.html)

HADOOP ECOSYSTEM (PROCESSING LAYER)

The processing layer of the stack -- MapReduce, Hive, Pig, Spark Streaming, and other applications -- runs on top of YARN, which in turn runs over HDFS and HBase, fed by data ingest systems such as Apache Kafka and Flume.

HADOOP DATA PROCESSING FRAMEWORKS

A Hadoop data processing (software) framework:
➢ Abstracts away the complexity of distributed programming
➢ Makes it easy to write applications that process vast amounts of data in parallel on large clusters

Two popular frameworks:
➢ MapReduce: used for individual batch (long-running) jobs
➢ Spark: for streaming, interactive, and iterative batch jobs

Note: Spark is more than a framework; we will learn more about it in future lectures.
MAPREDUCE

MapReduce enables a style of parallel programming designed for:
- Distributing (parallelizing) a task easily across multiple nodes of a cluster
- Allowing programmers to describe processing in terms of simple map and reduce functions
- Invisible management of hardware and software failures
- Easy management of very large-scale data

A MapReduce job starts with a collection of input elements of a single type -- technically, all types are key-value pairs. A MapReduce job/application is a complete execution of Mappers and Reducers over a dataset:
- A Mapper applies the map function to a single input element.
- The application of the reduce function to one key and its list of values is a Reducer.
- Many Mappers/Reducers are grouped into a Map/Reduce task (the unit of parallelism).

MAPREDUCE: PHASES

Map
➢ Each Map task (typically) operates on a single HDFS block -- Map tasks (usually) run on the node where the block is stored
➢ The output of the map function is a set of 0, 1, or more key-value pairs

Shuffle and Sort
➢ Sorts and consolidates the intermediate data from all mappers -- sorts all the key-value pairs by key, forming key-(list of values) pairs
➢ Happens as Map tasks complete and before Reduce tasks start

Reduce
➢ Operates on the shuffled/sorted intermediate data (the Map task output) -- the reduce function is applied to each key-(list of values) pair, producing the final output

EXAMPLE: WORD COUNT

The problem:
➢ We have a large file of documents (the input elements)
➢ Documents are words separated by whitespace
➢ Count the number of times each distinct word appears in the file

Why do we care about counting words?
➢ Word count is challenging over massive amounts of data
○ Using a single compute node would be too time-consuming
○ Using distributed nodes requires moving data
○ The number of unique words can easily exceed available memory -- we would need to spill to disk
➢ Many common tasks are very similar to word count, e.g., log file analysis

WORD COUNT USING MAPREDUCE (1)

map(key, value):
    // key: document ID; value: text of document
    FOR (each word w IN value)
        emit(w, 1);

reduce(key, value-list):
    // key: a word; value-list: a list of integers
    result = 0;
    FOR (each integer v IN value-list)
        result += v;
    emit(key, result);

WORD COUNT USING MAPREDUCE (2)

Input:
    the cat sat on the mat
    the aardvark sat on the sofa

Map & Reduce result:
    aardvark 1
    cat 1
    mat 1
    on 2
    sat 2
    sofa 1
    the 4

WORD COUNT: MAPPER

Input lines: "the cat sat on the mat" and "the aardvark sat on the sofa"

Mapper 1 output: (the,1) (cat,1) (sat,1) (on,1) (the,1) (mat,1)
Mapper 2 output: (the,1) (aardvark,1) (sat,1) (on,1) (the,1) (sofa,1)

WORD COUNT: SHUFFLE & SORT

Intermediate data after shuffle & sort:
    aardvark 1
    cat 1
    mat 1
    on 1,1
    sat 1,1
    sofa 1
    the 1,1,1,1

WORD COUNT: REDUCER

Reducer output / result:
    aardvark 1
    cat 1
    mat 1
    on 2
    sat 2
    sofa 1
    the 4

MAPREDUCE: FAULT TOLERANCE

MapReduce is designed to deal with compute nodes failing while executing a Map or Reduce task: it re-executes failed tasks, not whole jobs/applications. Key point: MapReduce tasks produce no visible output until the entire set of tasks is completed, and if a task somehow completes more than once, only the earliest output is retained. Thus we can restart a Map task that failed without fear that a Reduce task has already used some output of the failed Map task.
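The pseudocode above corresponds almost line-for-line to the canonical Java WordCount shipped with Apache Hadoop; a lightly commented version of that stock example follows:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    // Map: emit (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: sum the list of counts the shuffle grouped under each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) sum += val.get();
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // pre-aggregate on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```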
SUMMARY

With really huge data sets, or changing data collected from huge numbers of clients, it is often not practical to use a classic database model where each incoming event triggers its own updates. So we shift toward highly parallel batch processing: many updates and many “answers” all computed as one task. We then cache the results to enable fast tier-one/tier-two reactions later.
{"Source-Url": "http://www.cs.cornell.edu/courses/cs5412/2019sp/slides/Lecture-21.pdf", "len_cl100k_base": 7885, "olmocr-version": "0.1.53", "pdf-total-pages": 67, "total-fallback-pages": 0, "total-input-tokens": 71319, "total-output-tokens": 9206, "length": "2e12", "weborganizer": {"__label__adult": 0.00025582313537597656, "__label__art_design": 0.000522613525390625, "__label__crime_law": 0.0004062652587890625, "__label__education_jobs": 0.0015668869018554688, "__label__entertainment": 0.00014507770538330078, "__label__fashion_beauty": 0.0001710653305053711, "__label__finance_business": 0.0008363723754882812, "__label__food_dining": 0.0003921985626220703, "__label__games": 0.0005011558532714844, "__label__hardware": 0.0019512176513671875, "__label__health": 0.0005750656127929688, "__label__history": 0.0003483295440673828, "__label__home_hobbies": 0.00017011165618896484, "__label__industrial": 0.0007491111755371094, "__label__literature": 0.00030422210693359375, "__label__politics": 0.0003325939178466797, "__label__religion": 0.0004503726959228515, "__label__science_tech": 0.30078125, "__label__social_life": 0.00018405914306640625, "__label__software": 0.06427001953125, "__label__software_dev": 0.62451171875, "__label__sports_fitness": 0.00021648406982421875, "__label__transportation": 0.0004498958587646485, "__label__travel": 0.0002262592315673828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24454, 0.04209]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24454, 0.11918]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24454, 0.81488]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 510, false], [510, 931, null], [931, 1301, null], [1301, 1726, null], [1726, 2032, null], [2032, 2358, null], [2358, 2716, null], [2716, 3114, null], [3114, 3194, null], [3194, 3446, null], [3446, 3538, null], [3538, 3941, null], [3941, 4080, null], [4080, 4466, null], [4466, 4849, null], [4849, 5206, null], [5206, 5653, null], [5653, 5708, null], [5708, 5945, null], [5945, 6235, null], [6235, 6577, null], [6577, 6876, null], [6876, 7202, null], [7202, 7418, null], [7418, 7745, null], [7745, 8170, null], [8170, 8436, null], [8436, 8832, null], [8832, 9078, null], [9078, 9806, null], [9806, 10792, null], [10792, 11427, null], [11427, 12139, null], [12139, 13392, null], [13392, 14118, null], [14118, 14301, null], [14301, 14851, null], [14851, 15252, null], [15252, 15616, null], [15616, 15818, null], [15818, 16082, null], [16082, 16197, null], [16197, 16454, null], [16454, 16612, null], [16612, 16959, null], [16959, 17547, null], [17547, 17929, null], [17929, 18213, null], [18213, 18686, null], [18686, 19141, null], [19141, 19662, null], [19662, 19798, null], [19798, 20047, null], [20047, 20516, null], [20516, 20850, null], [20850, 21288, null], [21288, 21894, null], [21894, 22080, null], [22080, 22454, null], [22454, 22749, null], [22749, 22910, null], [22910, 23105, null], [23105, 23337, null], [23337, 23596, null], [23596, 24068, null], [24068, 24454, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 510, true], [510, 931, null], [931, 1301, null], [1301, 1726, null], [1726, 2032, null], [2032, 2358, null], [2358, 2716, null], [2716, 3114, null], [3114, 3194, null], [3194, 3446, null], [3446, 3538, null], [3538, 3941, null], [3941, 4080, null], [4080, 4466, null], [4466, 4849, null], [4849, 5206, null], [5206, 5653, null], [5653, 5708, null], 
[5708, 5945, null], [5945, 6235, null], [6235, 6577, null], [6577, 6876, null], [6876, 7202, null], [7202, 7418, null], [7418, 7745, null], [7745, 8170, null], [8170, 8436, null], [8436, 8832, null], [8832, 9078, null], [9078, 9806, null], [9806, 10792, null], [10792, 11427, null], [11427, 12139, null], [12139, 13392, null], [13392, 14118, null], [14118, 14301, null], [14301, 14851, null], [14851, 15252, null], [15252, 15616, null], [15616, 15818, null], [15818, 16082, null], [16082, 16197, null], [16197, 16454, null], [16454, 16612, null], [16612, 16959, null], [16959, 17547, null], [17547, 17929, null], [17929, 18213, null], [18213, 18686, null], [18686, 19141, null], [19141, 19662, null], [19662, 19798, null], [19798, 20047, null], [20047, 20516, null], [20516, 20850, null], [20850, 21288, null], [21288, 21894, null], [21894, 22080, null], [22080, 22454, null], [22454, 22749, null], [22749, 22910, null], [22910, 23105, null], [23105, 23337, null], [23337, 23596, null], [23596, 24068, null], [24068, 24454, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24454, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24454, null]], "pdf_page_numbers": [[0, 0, 1], [0, 510, 2], [510, 931, 3], [931, 1301, 4], [1301, 1726, 5], [1726, 2032, 6], [2032, 2358, 7], [2358, 2716, 8], [2716, 3114, 9], [3114, 3194, 10], [3194, 3446, 11], [3446, 3538, 12], [3538, 3941, 13], [3941, 4080, 14], [4080, 4466, 15], [4466, 4849, 16], [4849, 5206, 17], [5206, 5653, 18], [5653, 5708, 19], [5708, 5945, 20], [5945, 6235, 21], [6235, 6577, 22], [6577, 6876, 23], [6876, 7202, 24], [7202, 7418, 25], [7418, 7745, 26], [7745, 8170, 27], [8170, 8436, 28], [8436, 8832, 29], [8832, 9078, 30], [9078, 9806, 31], [9806, 10792, 32], [10792, 11427, 33], [11427, 12139, 34], [12139, 13392, 35], [13392, 14118, 36], [14118, 14301, 37], [14301, 14851, 38], [14851, 15252, 39], [15252, 15616, 40], [15616, 15818, 41], [15818, 16082, 42], [16082, 16197, 43], [16197, 16454, 44], [16454, 16612, 45], [16612, 16959, 46], [16959, 17547, 47], [17547, 17929, 48], [17929, 18213, 49], [18213, 18686, 50], [18686, 19141, 51], [19141, 19662, 52], [19662, 19798, 53], [19798, 20047, 54], [20047, 20516, 55], [20516, 20850, 56], [20850, 21288, 57], [21288, 21894, 58], [21894, 22080, 59], [22080, 22454, 60], [22454, 22749, 61], [22749, 22910, 62], [22910, 23105, 63], [23105, 23337, 64], [23337, 23596, 65], [23596, 24068, 66], [24068, 24454, 67]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24454, 0.14903]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
06a6bcd12a46254ca2fb7cdfb42066c0b1c56c70
PipeIt: A Pipeline Programming Framework for Embedded Processor Array Systems-on-Chip

Dimitris Syrivelis and Spyros Lalis
Computer and Communications Engineering Department, University of Thessaly, Volos, Greece

Abstract—This paper presents the PipeIt framework for developing pipelined applications targeted at tightly-coupled processor arrays on a chip. The framework includes a component programming and wiring model, a runtime environment, and a corresponding toolchain. It enables the programmer to develop applications in a high-level manner, structuring the code at the finest possible meaningful level of granularity, without caring about how this will be deployed and executed. At runtime, the stages of the pipeline are distributed among the available processors. This arrangement can be changed dynamically when another PipeIt application needs to be executed concurrently. We discuss and demonstrate a complete embedded system prototype.

Keywords: embedded processor arrays, application-level pipelining, dynamic load balancing, reconfiguration

1. Introduction

The promising performance and the better physical as well as financial scalability have motivated researchers to improve all aspects of multicore computing, both for high-end and embedded systems. At the architecture level, approaches range from loosely-coupled processors interconnected via Ethernet on different nodes, to tightly-coupled processors on a chip that are typically assigned dedicated tasks and are connected together via high-end dedicated links without any arbitration [1]. Platform differences, in conjunction with the wide diversity of applications, also lead to a variety of partitioning and communication schemes. In turn, these can be supported by different tools. Of course, the type of computation at hand may naturally favor a certain scheme.

Another important aspect is whether a multicore system is used in a dedicated fashion or in the context of an open computing environment. In the former case, the optimal partitioning of the computation that will lead to the best possible performance can be decided at the design phase. In the latter case, by contrast, new tasks may appear at any point in time and the available resources must be used opportunistically to boost performance.

In this paper we present the PipeIt framework, which provides support for building and deploying application-specific pipelines on tightly-coupled distributed-memory parallel processor arrays (PPAs) [1] for embedded systems, in the context of an open, general-purpose computing environment. PipeIt includes a component and wiring model, a runtime backend that is appropriately customized for and integrated with the target execution environment, and a corresponding front-end compiler that generates the final source code and build scripts so that a regular toolchain can then be used to produce the executables. A custom loader is used to deploy PipeIt application stages at runtime, and the initial arrangement can be changed at any point in time if the system workload changes. This is achieved without requiring PPA cores to feature heavyweight OS support.
The main contributions of this work are: i) the modular application design approach that enables the reuse of basic/common pipeline structures; ii) seamless pipeline execution with support for the efficient dynamic reassignment of stages to available cores; iii) the ability to invoke a pipelined computation via a simple library call from within a conventional application; iv) a prototype implementation of all the development tools, along with an emulation environment for debugging and for assessing the expected performance of the pipelined computation; and v) an implementation of a well-known application on an FPGA-based PPA prototype.

2. PipeIt Target Platform

With PipeIt we wish to support the pipelining of typical CPU-intensive computations of embedded applications which operate on data block streams, e.g., cipher, (de)compression or encoding/decoding algorithms. The target platform is tightly-coupled distributed-memory parallel processor arrays (PPAs) [1], aimed at general-purpose embedded computing. PPAs typically feature reconfigurable, ultra-fast, dedicated interconnections that introduce a small overhead for data block transfer. The individual processing elements are rather resource-constrained, with memories of a few MBytes, and cannot host proper OS support or heavyweight runtime environments. PPAs are currently used as dedicated coprocessors that may carry out one large computation at a time, divided into an a priori known number of statically assigned tasks.

Our work assumes a special processor on the PPA (or an external processor connected to the PPA) which plays the role of the platform master, is interfaced to all platform peripherals, runs a proper OS/runtime, and is responsible for configuring the PPA to set up application pipelines as desired. In the general case, several pipelined applications may execute concurrently with each other, but also with conventional applications. The system workload, in terms of both conventional and pipelined applications, may change dynamically.

3. The PipeIt Framework

To enable structured development of pipelined applications, PipeIt adopts a component model. Each component represents a pipeline stage that ideally should be executed on a separate processor. Components have a fixed number of input and output ports. They can be wired together, linking output ports with input ports in a point-to-point fashion according to the desired data flow. The wiring of components is practically orthogonal to their implementation, and it is specified in a separate so-called configuration file. Basic checking is done so that interconnected ports handle data objects of the same size.

During execution, each component blocks until data is available, processes the data, and writes data to its output, in an endless loop. The only component that has no input ports, i.e., does not wait for data to arrive from another component, is the pipeline entry point, called the root. The root component typically reads data from an external source, such as a file, a special memory location or a special device. Similarly, the only component that has no output ports, i.e., does not send data to another component, is the pipeline exit point, called the sink, which typically writes data to an external destination. The root and sink both run on the master processor (using two different threads), enabling the seamless integration of the pipelined computation with the rest of the system. Figure 1 shows an indicative PipeIt pipeline.
Note that it is possible to have branches in order to introduce data parallelism inside the pipeline.

3.1 Component and communication model

A PipeIt component is coded as a C++ object, in a separate file with the same name. Each component class must be defined as a subclass of a runtime type class, which features two virtual functions, config and exec, that must be overridden. The config function is called once, before the execution commences, and must be used to declare the ports of the component and initialize its internal state. Different configuration strings can be passed to each component, making it possible to implement flexible initialization schemes, and allowing component classes to be reused in the same or different PipeIt applications. The exec function must contain the component's data transfer and stage processing code. It is invoked from within the PipeIt runtime, in an endless loop.

Data transfer is performed using the input and output functions. These are inherited from the base runtime type class, and are mapped to the communication primitives of the respective runtime environment. Ports are addressed using a simple numbering scheme which is mapped by the PipeIt framework to appropriate target-specific addresses. In a nutshell, components receive and send data using the abstract PipeIt primitives and port ids, without caring about the underlying platform details.

Memory allocation of data buffers must be done using the pipeit_malloc function provided by the runtime. This is necessary because PipeIt needs to control data access and transfers in order to perform component migration safely during pipeline restructuring. For the same reason, static declarations of data transfer blocks are not allowed.

As an example, we give the code of a simple PipeIt component that receives an integer from its input port, increments it, and forwards it to its output port:

```cpp
class IncInt : public PipeItBIOS {
public:
    IncInt() {};
    void *data;

    void config(int argc, char *argv[]) {
        // Buffers must be allocated via the PipeIt runtime.
        data = pipeit_malloc(sizeof(int));
        pipeit_add_input(0, sizeof(int));   // one input port, id 0
        pipeit_add_output(0, sizeof(int));  // one output port, id 0
    }

    void exec(void *d, int size) {
        pipeit_input(0, d, sizeof(int));    // block until an int arrives
        (*(int *)d)++;                      // the stage's processing
        pipeit_output(0, d, sizeof(int));   // forward the result
    }
};
```

Data is passed from one component to another by writing and reading the proper output and input ports. However, some variables may need to be accessed only by the root and sink components, and hence need not travel through the entire pipeline. A separate mechanism, called the context queue, is employed to keep track of these values in sync with the pipeline. Before writing data into its output, the root adds a context entry with the proper values, and, conversely, the sink removes the next context entry before attempting to read data from its input. In our implementation, the root and sink threads use a shared-memory FIFO queue.

3.2 Runtime classes

PipeIt introduces runtime support mainly for two reasons: i) to confine the programmer, during the development of a component, to the execution environment and available resources of the target processing element; and ii) to be able to dynamically reconfigure the component placement in a seamless fashion. There are two radically different execution environments: the master processor, which runs a proper operating system, and ordinary PPA processors running a small, custom-implemented basic I/O system (BIOS). In both cases PipeIt adds a thin layer providing a set of generic data transfer primitives, optimized for the respective environment.
There are three different runtime type classes, PipeItOS, PipeItBIOS and PipeItOSLib, which can be used to develop components. Each reflects a different flavor of the PipeIt runtime support, as follows.

The PipeItOS class is used for components that will execute on the master processor, having access to the full functionality of a proper OS. This runtime class is used for the root and sink components, which may contain system calls and access peripherals. The generated code is an autonomous executable that runs on the master processor under the full-fledged OS and uses a separate POSIX thread for running each component.

The PipeItBIOS class is aimed at components that should run on ordinary PPA cores on top of the PipeIt BIOS. In this context the programmer may only perform CPU-intensive computations, read data from input ports and write data to output ports. Attempts to use a non-existing runtime feature will cause the compilation of the component to fail. By default, each such component executes on a dedicated PPA core. However, as a result of pipeline reconfiguration, several such components can be placed on the same PPA core, or even on the master processor.

The PipeItOSLib class has similar functionality to PipeItOS, but it does not result in the generation of an autonomous executable. Instead, it produces code that enables the pipelined computation to be invoked from within an external application context, much like a library. In this case, the root and sink components execute as POSIX threads and must establish an appropriate communication channel with the application, based on the arguments of the config and/or exec functions. Any IPC mechanism can be used for this purpose. The corresponding initialization code is placed in a routine named according to a certain convention, and this routine must be invoked from the application before initiating communication with these components.

### 3.3 Configuration language

The wiring of each PipeIt computation is specified in a separate configuration file. Configurations are expressed using three elements: component declarations, port connections and composites. Components are declared using the class names of the respective implementations, optionally giving a configuration string that can drive initialization. The configuration string is not interpreted by the PipeIt framework; it is passed “as is” to the component, via a call to its config function. Input and output ports are denoted in brackets placed at the left- and right-hand side of a component name, respectively. Each connection is denoted by a right arrow, starting from an output port and pointing to an input port.

To enhance the structure of complex computations, and to enable the reuse of common sub-structures, configurations can be grouped into so-called composites, which export their input and output ports. Composites have a so-called execution type, for which there are two options. The PipeItMaster type is used for composites that will run on the master processor; their components must extend the PipeItOS or PipeItBIOS class. The PipeItArray type is used for composites that should run on ordinary PPA cores, and all of their components must extend the PipeItBIOS class. A PipeIt configuration file has exactly one PipeItMaster composite and an arbitrary number of PipeItArray composites. As an example, the application shown below uses two composites to increment an integer value twice.
The IncIntTwice composite, of type PipeItArray, uses two appropriately connected instances of the IncInt class. One instance is declared explicitly, while the other is declared implicitly, via the class name. The MyApp composite, of type PipeItMaster, contains a MyRoot and a MySink component (the code of those components is not shown here).

```plaintext
PipeItArray IncIntTwice {
    inc :: IncInt();
    input[0] -> [0]inc;
    inc[0] -> [0]IncInt()[0] -> output[0];
}

PipeItMaster MyApp {
    r :: MyRoot("42");
    s :: MySink();
    r[0] -> output[0];
    input[0] -> [0]s;
}

MyApp[0] -> [0]IncIntTwice;
IncIntTwice[0] -> [0]MyApp;
```

Local (intra-composite) connections are declared within the respective scope, while global (inter-composite) connections are defined at the end of the configuration file. The keywords input and output refer to the input and output ports of the composite. Also, in this example, the computation takes its input via the configuration string “42” passed to the MyRoot component, hence the MySink component will receive the value 44.

### 3.4 Dynamic load balancing support

The pipeline structure of a PipeIt application is designed assuming that each component (stage) of the pipeline will be executed on a dedicated PPA core. However, at deployment time, there may not be as many processors available, either because the system does not have them in the first place or because some processors are already being used by other applications. In addition, during the execution of the application, new tasks may arrive and existing tasks may finish. Thus a dynamic restructuring of the application pipeline is needed in order to release some processors or, conversely, to exploit processors being released.

To enable the flexible and concurrent deployment and execution of pipelined applications, PipeIt comes with built-in support for dynamic load balancing. Specifically, the runtime can assign the components of a pipeline to the same or different processors in a transparent way. The assignment obeys the following rule: if two components are assigned to the same processor, every component between them must also be assigned to that processor. Figure 2 shows all such configurations for a pipeline with four components, including the root and sink.

The components to be executed on each processor are specified using a so-called ComponentExecutionMap, which is disseminated from the master to the PPA processors via a simple protocol. On each processor, PipeIt uses a simple scheduler to execute all co-located components sequentially. No real data transfer is performed between co-located components. Instead, both ends of each local link share a data buffer which is accessed from within the respective I/O calls. The I/O behavior of each component is controlled via a so-called IOMap, which indicates whether the input and output calls should perform a remote or a local/virtual data transfer.

The initial configuration of the pipeline is established as follows. When the computation is first deployed, PipeIt runs the pipeline for a number of iterations sequentially on the master processor. During this initial execution phase, an appropriately instrumented version of the exec call is employed, through which PipeIt collects information that can be used to estimate the processing overhead of each component. Next, the component assignment scheme that will be used, given the available number of processors, is decided.
Finally, the required processors are allocated, and the application code (containing the code for all components), along with the corresponding ComponentExecutionMap and IOMaps, is loaded on each processor.

The pipeline can be reconfigured at any point in time during execution. This makes it possible to adapt to changing workload conditions, exploiting processors that become available or releasing processors for the benefit of other applications. Load balancing is guided by a system service, which must be queried periodically to determine the most appropriate configuration. In our current implementation, where frequent monitoring introduces considerable overhead, the rate at which this needs to be done is specified by the programmer during compilation. If a new processor and/or component assignment is determined to be more beneficial, the PipeIt runtime performs the respective processor allocation and loading, updates the ComponentExecutionMap and IOMaps, pushes this information down the pipeline, and proceeds with the execution.

Figure 3 shows three indicative configurations for an application with five components, together with the corresponding data transfer and execution mappings. In case (A), all components execute sequentially on the master processor, and communication is done using shared buffers. In case (B), the third component is set for execution on a PPA processor, and the output mapping of the second component as well as the input mapping of the fourth component are set to invoke the appropriate communication primitives to send/receive data between the master processor and the PPA processor. Finally, case (C) depicts the deployment of the second and third components on distinct PPA processors.

3.5 Application development and tools

The developer must first provide at least the skeleton for each component, and then write the application configuration file. The PipeIt compiler parses the file, creates the appropriate data structures used to configure all aspects of the pipeline structure, and generates corresponding flavors of pipeit.h files, to be included by convention in each component implementation. These header files contain static declarations of various required variables, including the transfer bitmaps and profiling structures, as well as support structures that map the port numbers onto the platform-specific addressing primitives for each component. At this point, regular development toolchains can be used to compile the generated code for the target and the emulation platform. The default mode is to produce code for all components to execute on the master processor environment, and for all components except the root and sink to execute on an ordinary PPA processor environment. The compiler also accepts hints in terms of preferred component co-location, for the case where there are not enough PPA processors or the local memory of a PPA processor cannot host all components; in the latter case, different executables will be generated for different sets of PPA processors.

The PipeIt toolchain can also be used to generate executables for emulated execution on a Linux host. In essence, the master and PPA processors are emulated using distinct processes, and interconnections are emulated via Unix named pipes. The number of available PPA processors is specified by the user. Running an application in emulation mode simplifies debugging.
Moreover, it provides an estimate of the computation-to-communication ratio, and enables the use of sophisticated profiling tools like gprof to guide component partitioning and co-location preferences. Another motivation for using the emulation mode is to assess the expected performance on the target platform for various pipeline configurations. Of course, if the emulation host has a radically different architecture from the target platform, it might not be possible to make an accurate estimation.

4. PipeIt Prototype and Applications

Our prototype is a custom PPA system implemented on an FPGA as a system-on-chip. The hardware platform is an Atmark Techno Suzaku [2], which features a Xilinx Spartan 3 FPGA along with off-chip peripherals. For the master and ordinary PPA cores we use the Xilinx MicroBlaze soft processor, a classic 32-bit RISC architecture. MicroBlaze features a fast bus architecture named Fast Simplex Links (FSL): a dedicated 32-bit-wide unidirectional point-to-point communication channel, which does not need arbitration, provides hardware support to distinguish between data and control communication, and supports blocking/non-blocking asynchronous access.

The master processor is interfaced to all platform peripherals and is responsible for running a customized version of the uClinux embedded operating system [3], achieving 25.29 BogoMIPS. The PPA processors have only local memories and are connected to each other with FSL links. The entire system can be dynamically reconfigured at runtime using special support which we have developed in previous work [4]. For the purposes of this work, we have also developed an OS service that exports information about the number of available PPA cores and the master CPU usage via the proc filesystem; this is used by the PipeIt runtime to determine the component placement for application pipelines.

4.1 Applications

As a proof-of-concept application, we have implemented a PipeIt version of the Secure Hash Algorithm (SHA1) and the HMAC authentication code. We also integrated this implementation in the Crypto library, using the PipeIt library mode. The SHA1 code employs 4 different functions which perform the same amount of computation on a data block, doing 80 sequential invocations with varying parameters in total (the original code is highly optimized, using inline functions etc.). Hence, the PipeIt implementation is based on 6 component types (for the root, the sink, and each function), which are used to construct a pipeline of 80 components plus the root and sink. Below we list a simplified version of the source code of a typical component type, followed by an excerpt of the configuration file:

```c
class R0 : public PipeItBIOS {
public:
    R0();
    ~R0();
    struct Data *d;
    int arg1, arg2, arg3, arg4, arg5, offset;

    void config(char *args) {
        d = pipeit_malloc(sizeof(struct Data));
        pipeit_add_input(0, sizeof(Data));
        pipeit_add_output(0, sizeof(Data));
        parse_args(args);
    }

    void exec(void *d, int size) {
        pipeit_input(0, (void *)d, sizeof(Data));
        R0Calc((char *)d + arg1, (char *)d + arg2, (char *)d + arg3,
               (char *)d + arg4, (char *)d + arg5, offset);
        pipeit_output(0, (void *)d, sizeof(Data));
    }

    void R0Calc(void *p1, void *p2, void *p3, void *p4, void *p5, int offset) {
        ...
    }
};
```

```c
PipeItArray SHA1 {
    input[0] -> [0]R0("a b c d e 0")[0] -> ... -> [0]R0("a b c d e a 79")[0] -> output[0];
};

PipeItMaster SHA1App {
    SHA1Root()[0] -> output[0];
    input[0] -> [0]SHA1Sink();
};

SHA1App[0] -> [0]SHA1;
SHA1[0] -> [0]SHA1App;
```

The HMAC computation is based on SHA1.
Indeed, the PipeIt version of HMAC reuses the PipeIt version of SHA1, as can be seen from the corresponding configuration file:

```c
PipeItArray SHA1 {
    input[0] -> ... -> output[0]
};

PipeItMaster HMACApp {
    HMACLibRoot()[0] -> output[0];
    input[0] -> [0]HMACLibSink();
};

HMACApp[0] -> [0]SHA1;
SHA1[0] -> [0]HMACApp;
```

4.2 Experimental results

We tested the PipeIt version of SHA1 on our platform both as an autonomous application and as part of an HMAC application (the HMAC keys are set once, at the beginning of the program). Both applications feed the pipeline with predefined data blocks in an endless loop, mimicking a continuous stream. Taking advantage of the library-oriented execution mode of PipeIt, we also integrated the pipelined version of HMAC (and SHA1) in the Crypto library.

In a first series of experiments, we performed measurements for the case where the pipelined computation runs: (i) only on the master processor; (ii) on 4 processors including the master; and (iii) on 5 processors including the master. In all cases the system was unloaded, i.e., there were no other applications running at the same time. The results are shown in Table 1, including the performance of the original (highly optimized) sequential programs as a reference. The performance of the PipeIt HMAC is naturally dominated by the performance of SHA1.

The first observation is that the sequential execution of the pipelined versions introduces a notable overhead, performing at about 0.8x compared to the original code. This is because PipeIt explicitly invokes each component in a loop, so it is not possible to optimize the code, e.g., by using inline functions. The second observation is that the speedup achieved when using 5 processors is 4.45x compared to the sequential PipeIt execution, and 3.5x compared to the original version. The latter is far from what is theoretically possible (5x), mainly due to the execution and communication overhead imposed by the PipeIt runtime for co-located components. Specifically, for SHA1, 12 components plus the root and sink are assigned to the master, while the other processors are assigned 17 components each. The overhead increases as fewer processors are used and the number of co-located components grows, as demonstrated for the case of using just 4 processors, each having 20 assigned components (the master also runs the root and sink).

It is of course possible to boost performance by adjusting the number of application-level components (pipeline stages) to better fit the actual platform capabilities; in this case, by introducing fewer and more heavyweight components. To demonstrate this we created a second pipelined version of SHA1 with just 5 components in addition to the root and sink. Each component contains an optimized, integrated version of the code of the components that were co-located in the 5-processor execution scenario. This program achieved a throughput of 253.9 Kb/s, which roughly equals a 4.6x performance improvement over the original sequential code, and 4.76x compared to the corresponding sequential PipeIt execution. The downside is that the 5-component pipeline is very coarse-grained and cannot possibly give a speedup greater than 5x, even if the underlying platform features more cores. On the contrary, the 80-component pipeline version may theoretically reach an 80x speedup if run on a PPA with 80 (idle) cores.
Another issue is that having fewer pipeline components/stages also limits the options of the runtime in terms of evenly distributing the processing load over fewer processors.

To verify the ability of our system to perform dynamic load balancing, we ran two instances of the PipeIt SHA1 computation concurrently, using a total of 5 processors including the master. The first instance is started on an idle system, exploiting all 5 processors as discussed above. The second instance is started at a later point, when the pipeline of the first instance is already running. As a result of dynamic balancing, each computation is assigned 2 processors in addition to the master, and the two pipelines are configured appropriately, having just the root and sink on the master and 40 components on each of the other processors. The throughput achieved by each computation is 76 Kb/s, giving a total of 152 Kb/s. For comparison, the performance achieved for a single computation with the same pipeline-component configuration on an idle system is 78 Kb/s. This overhead is due to the fact that the master processor is shared between the (root and sink of the) two computations, and this contention leads to occasional pipeline stalls.

Having the pipelined version of HMAC (and SHA1) integrated in the Crypto library makes it trivial to exploit it from within existing, conventional applications in a transparent fashion. As a proof of concept, we used scp (version 2 of the SCP protocol) to copy a 10-MByte file over Ethernet from a PC connected to the same switch as the Suzaku board. Using the original HMAC/SHA1, the transfer was performed at 18.1 Kb/s, vs 22.4 Kb/s when using the PipeIt versions, giving an improvement of 1.23x. The speedup is relatively small because HMAC accounts for a rather small part of the processing done by scp (which was left untouched). It is also important to note that since the conventional part of scp runs on the master processor, the PipeIt runtime deploys the HMAC/SHA1 pipeline on just 4 processors, using the master only for the root and sink.

To confirm that load balancing works better when the pipeline has a finer granularity, we repeated the same file transfer experiment using the 5-component pipeline version of HMAC/SHA1 discussed previously. In this case, the throughput was only 20 Kb/s. Given that the PipeIt runtime decides to avoid using the master (except for the root and sink), 2 of the 5 (coarse-grained) components end up being executed on the same processor, resulting in an unbalanced distribution of the pipelined computation, which in turn leads to deteriorated performance.

### Table 1: Performance of SHA1 and HMAC

| Computation | original  | PipeIt seq | 4 CPUs     | 5 CPUs   |
|-------------|-----------|------------|------------|----------|
| SHA1        | 55.2 Kb/s | 43.1 Kb/s  | 156 Kb/s   | 193 Kb/s |
| HMAC        | 54.1 Kb/s | 42.8 Kb/s  | 154.9 Kb/s | 191 Kb/s |

5. Related Work

The Ambric PPA architecture [1], integrated with a master CPU that can run a full-fledged OS, would be an ideal high-performance target for our framework. Currently Ambric uses a structured object programming model: the programmer separates the application into high-level processing objects which can be developed independently and can execute asynchronously with each other, at their own clock speed, on their own dedicated processor cores.
We believe that the integration of PPA functionality in an OS context with dynamic load balancing is also important. To that end, our approach could be used to efficiently accommodate concurrently running applications on such platforms. The component-based design of PipeIt has been heavily influenced by the Click framework [5], although Click targets a different application domain, namely the modular implementation of router functionality. Click features C++ objects and employs a configuration language for specifying a network of objects. The nesC language [6], introduced to support application development for resource-constrained wireless nodes (motes), also relies on the notion of components and wiring configurations. StreamIt [7] also introduces a high-level programming model, but for a broader application domain than PipeIt, including all types of applications that use a stream as an abstraction. In this case, there is no notion of a component configuration; instead, components are explicitly interfaced to each other as part of their implementation. The StreamIt compiler can automate tasks such as partitioning, static load balancing, layout, and memory management. However, to our knowledge, there is no support for balancing a computation at runtime. Coarse-grained pipelining is addressed in [8], where annotations are proposed in order to perform the required code restructuring at the source level. Pipeline parallelism is also exploited in [9] using the techniques of Decoupled Software Pipelining [10], employing thread-level speculation to opportunistically execute multiple loop iterations in parallel. While these approaches could be used for a PPA target, they both assume a homogeneous shared-memory system where processors are a priori assigned to applications. In the spirit of PipeIt, the work in [11] also targets a dynamic computing environment where platform resources are not statically dedicated to computations. To achieve balanced execution of parallel applications, a user-level scheduler is proposed, which dynamically distributes tasks over a fixed collection of processes, which in turn are scheduled on a fixed collection of processors by the operating system kernel. Contrary to PipeIt, this approach requires a full-fledged OS on each processor in order to run OS-level processes and IPC mechanisms. Finally, dynamic task partitioning methods have been proposed to deal with unstructured mesh problems [12], [13]. In [12] the tasks of a computation are developed using an appropriate programming model and a software framework, which features an extension called the Mobile Object Layer (MOL) [14] that enables transparent task migration between processors at runtime. The authors have extended the MOL concept by adding load balancing routines that communicate with the respective runtime support. In this case, the target platform is not a tightly coupled SoC, and load balancing remains an application-level task, i.e., the programmer has to write code explicitly for this purpose.

6. Conclusion

We have presented PipeIt, a framework that supports the development of pipelined applications for embedded PPA targets in the context of an open, general-purpose runtime environment. Our vision is to have a system where CPU-intensive tasks of various applications are implemented as fine-grained pipelines (if appropriate) which are deployed on the available PPA processors in a flexible way, as a function of the current system workload.
To that end, providing support for dynamic load balancing, without forcing the programmer to think about the platform constraints and without requiring full-fledged OS support on each PPA processor, is of major importance. 7. Acknowledgments This paper is part of the 03ED918 research project, implemented within the framework of the “Reinforcement Programme of Human Research Manpower” (PENED) and co-financed by National and Community Funds (75% from E.U.-European Social Fund and 25% from the Greek Ministry of Development-General Secretariat of Research and Technology). References
{"Source-Url": "http://inf-server.inf.uth.gr/~lalis/papers/pipelines_esa09.pdf", "len_cl100k_base": 7141, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 24946, "total-output-tokens": 8100, "length": "2e12", "weborganizer": {"__label__adult": 0.0006728172302246094, "__label__art_design": 0.0005950927734375, "__label__crime_law": 0.0006113052368164062, "__label__education_jobs": 0.0004277229309082031, "__label__entertainment": 0.00012874603271484375, "__label__fashion_beauty": 0.00029206275939941406, "__label__finance_business": 0.00032520294189453125, "__label__food_dining": 0.0005536079406738281, "__label__games": 0.0011777877807617188, "__label__hardware": 0.015289306640625, "__label__health": 0.0007991790771484375, "__label__history": 0.0005087852478027344, "__label__home_hobbies": 0.00020194053649902344, "__label__industrial": 0.0015134811401367188, "__label__literature": 0.00024211406707763672, "__label__politics": 0.000469207763671875, "__label__religion": 0.00106048583984375, "__label__science_tech": 0.1441650390625, "__label__social_life": 8.279085159301758e-05, "__label__software": 0.0076446533203125, "__label__software_dev": 0.8203125, "__label__sports_fitness": 0.0006160736083984375, "__label__transportation": 0.0019388198852539065, "__label__travel": 0.0003693103790283203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36757, 0.02432]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36757, 0.42179]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36757, 0.90269]], "google_gemma-3-12b-it_contains_pii": [[0, 5079, false], [5079, 10059, null], [10059, 15623, null], [15623, 19886, null], [19886, 24826, null], [24826, 30946, null], [30946, 36757, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5079, true], [5079, 10059, null], [10059, 15623, null], [15623, 19886, null], [19886, 24826, null], [24826, 30946, null], [30946, 36757, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36757, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36757, null]], "pdf_page_numbers": [[0, 5079, 1], [5079, 10059, 2], [10059, 15623, 3], [15623, 19886, 4], [19886, 24826, 5], [24826, 30946, 6], [30946, 36757, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36757, 0.025]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
28652df74c998d9232823b7039440ea59f2243ce
[REMOVED]
{"Source-Url": "https://www.springer.com/cda/content/document/cda_downloaddocument/9783319665610-c2.pdf?SGWID=0-0-45-1614293-p181097418", "len_cl100k_base": 7723, "olmocr-version": "0.1.53", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 34184, "total-output-tokens": 8555, "length": "2e12", "weborganizer": {"__label__adult": 0.0003216266632080078, "__label__art_design": 0.0011005401611328125, "__label__crime_law": 0.000370025634765625, "__label__education_jobs": 0.0015048980712890625, "__label__entertainment": 0.00011277198791503906, "__label__fashion_beauty": 0.00019752979278564453, "__label__finance_business": 0.0011091232299804688, "__label__food_dining": 0.00034165382385253906, "__label__games": 0.0004622936248779297, "__label__hardware": 0.0012979507446289062, "__label__health": 0.0005888938903808594, "__label__history": 0.00045371055603027344, "__label__home_hobbies": 0.0001245737075805664, "__label__industrial": 0.00075531005859375, "__label__literature": 0.00034356117248535156, "__label__politics": 0.00029397010803222656, "__label__religion": 0.0005359649658203125, "__label__science_tech": 0.164306640625, "__label__social_life": 0.00011795759201049803, "__label__software": 0.02215576171875, "__label__software_dev": 0.80224609375, "__label__sports_fitness": 0.0002262592315673828, "__label__transportation": 0.0005812644958496094, "__label__travel": 0.00024127960205078125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35822, 0.02229]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35822, 0.28015]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35822, 0.89333]], "google_gemma-3-12b-it_contains_pii": [[0, 2388, false], [2388, 5462, null], [5462, 8732, null], [8732, 10351, null], [10351, 12690, null], [12690, 14884, null], [14884, 17613, null], [17613, 20141, null], [20141, 20772, null], [20772, 22424, null], [22424, 24655, null], [24655, 27757, null], [27757, 29827, null], [29827, 32974, null], [32974, 35619, null], [35619, 35822, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2388, true], [2388, 5462, null], [5462, 8732, null], [8732, 10351, null], [10351, 12690, null], [12690, 14884, null], [14884, 17613, null], [17613, 20141, null], [20141, 20772, null], [20772, 22424, null], [22424, 24655, null], [24655, 27757, null], [27757, 29827, null], [29827, 32974, null], [32974, 35619, null], [35619, 35822, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35822, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35822, null]], "pdf_page_numbers": [[0, 2388, 1], [2388, 5462, 2], [5462, 8732, 3], [8732, 10351, 4], [10351, 12690, 5], [12690, 14884, 6], [14884, 
17613, 7], [17613, 20141, 8], [20141, 20772, 9], [20772, 22424, 10], [22424, 24655, 11], [24655, 27757, 12], [27757, 29827, 13], [29827, 32974, 14], [32974, 35619, 15], [35619, 35822, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35822, 0.22881]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
b93ba4283b2f7d20da630d056a40844a1feb3d71
Design of an SNMP Agent for OSGi Service Platforms

Pedro J. Muñoz Merino, Natividad Martínez Madrid, and Ralf E. D. Seepold

Abstract—On the one hand, SNMP (Simple Network Management Protocol) allows different enterprise elements connected through the Internet to be integrated into a standardized remote management. On the other hand, as a consequence of the success of Intelligent Houses, they can now be connected through the Internet by means of a residential gateway according to a common standard called OSGi (Open Services Gateway initiative). Due to the specifics of OSGi Service Platforms and their dynamic nature, specific design criteria should be defined to implement SNMP Agents for OSGi in order to integrate them into SNMP remote management. Based on an analysis of the relation between both standards (SNMP and OSGi), this paper shows how OSGi Service Platforms can be included in the SNMP management of a global enterprise, giving implementation details of an SNMP Agent solution and the definition of a new MIB (Management Information Base) for managing OSGi platforms that takes into account the specifics and dynamic nature of OSGi.

Keywords—MIB, OSGi, Remote Management, SNMP.

I. INTRODUCTION

The management of devices in a corporation was initially focused on network parameters and statistics. However, the increasing use of the Internet has connected new devices and applications and, as a consequence, the necessity of dedicated management of applications and/or services has arisen. For example, [1] explains a solution for including Siemens PLC (Programmable Logic Controller) features in remote management through SNMP (Simple Network Management Protocol). Network parameters, applications or services can be managed through proprietary protocols. Each device can define its own proprietary mechanisms for being managed through the Internet or even through other proprietary networks. These solutions can be complete in terms of the quantity and quality of managed parameters, but they are not interoperable. In this case, each device or type of device needs to be managed independently from the others, and the corporation can neither have a global vision of all its devices nor use standard protocols or tools. SNMP [2], [3] aims to provide a common management protocol to allow interoperability. SNMP is the de facto standard in network management and is widely used on the Internet. This is why SNMP represents an advantage with respect to other management solutions such as those based on CORBA [4] (Common Object Request Broker Architecture) or RMI [5] (Remote Method Invocation). On the other hand, as a consequence of the success of Intelligent Houses [6], they can now be connected through the Internet by means of a residential gateway according to a common standard called OSGi [7] (Open Services Gateway initiative), which is defined to run in the residential gateway. The residential gateway is a platform to which all the devices of an Intelligent House can be connected. The residential gateway provides so-called bundles that offer a set of services available to the devices connected to the platform. These OSGi-based residential gateways can be included in the general management schema as a new element, providing remote control in terms of features, network parameters, services, applications, etc. This work provides a proposal of how to include residential gateways in the global SNMP management framework.
One of the main aspects is to define the features to be managed in SNMP by means of a MIB (Management Information Base) [8]. OSGi-based residential gateways have a dynamic nature because the services they provide can change dynamically (hot-plug). This raises some problems in the definition of the MIB, because MIBs store static information. The remainder of this paper is organized as follows. The section "Related Work" describes related work in the area of remote management for OSGi Service Platforms. The section "Agent Implementation Details" describes some implementation decisions and the agent's functional block diagram. The second-to-last section describes a new MIB proposal, called OSGi-MIB, for managing OSGi gateways, which accords with the OSGi-specific features and solves the dynamic-nature problem. Finally, there is a section with our conclusions.

II. RELATED WORK

There are commercial implementations of OSGi gateways that already provide SNMP solutions, such as Prosyst [9]. The standard MIB that Prosyst provides is very limited. Nevertheless, it allows the standard MIB to be expanded by implementing new objects (scalars and tables), even for new services. The Prosyst implementation provides the SNMP protocol, and the user must map new parameters to new MIB objects. In addition, the user has to write the Java code for retrieving and setting these new objects. In any case, the new MIB nodes must be statically predefined, and a dynamic load of new service data is not possible. Therefore, a manager has to load the MIB each time a new object is added. In this paper, we propose a MIB solution called OSGi-MIB. This OSGi-MIB needs to be loaded only once by a manager: if a new object is added, the manager can find this new object thanks to the OSGi-MIB information, without the need to load a new MIB. Furthermore, the proposed OSGi-MIB includes more information about OSGi than the Prosyst commercial solution. There are other, non-commercial OSGi solutions available at present that support remote management, like the one proposed with OSCAR using JMX [10]. JMX is a technology for getting and setting management information. JMX provides a Java-oriented approach, which is well suited to dynamic environments such as OSGi platforms. Although JMX can be integrated with any management protocol, such as RMI, CORBA or SNMP, the cited solution does not explain the specific integration with SNMP. In order to use JMX together with SNMP, two adaptations would be required: 1) adaptation of the information retrieved from JMX to a MIB (the description of dynamic information is easier in JMX than in SNMP due to the static nature of MIBs); 2) adaptation to translate from JMX to the SNMP protocol.

III. AGENT IMPLEMENTATION DETAILS

Different aspects of an OSGi gateway can be managed, and network administrators should decide which of them are going to be managed. The proposed solution divides the SNMP agent into sub-agents, each one in charge of a specific part. In the general case, the following aspects can be managed: network parameters (for example with the MIB-2 [11]), services, the common OSGi framework and the OSGi common bundles. Fig. 1 shows this division into sub-agents.

![Agent functional block diagram](image)

Fig. 1 Agent functional block diagram

Fig. 1 shows a possible implementation of an SNMP agent for OSGi gateways. This SNMP OSGi agent would be a bundle inside the OSGi gateway.
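Although the agent itself is a Java bundle, the routing idea behind this sub-agent division can be sketched compactly in C. Everything below is hypothetical (the paper publishes neither code nor a fixed OID numbering for the OSGi-MIB; only the MIB-2 prefix is standard, and the services prefix follows the Services.16 supposition made later in the paper):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sub-agent handler: serve one get/set request for an
 * object that lives under this sub-agent's MIB branch. */
typedef int (*subagent_handler)(const char *oid, int is_set, char *value);

static int network_subagent(const char *o, int s, char *v)   { (void)o; (void)s; (void)v; return 0; } /* MIB-2 */
static int framework_subagent(const char *o, int s, char *v) { (void)o; (void)s; (void)v; return 0; } /* BundleTable */
static int bundles_subagent(const char *o, int s, char *v)   { (void)o; (void)s; (void)v; return 0; } /* common bundles */
static int services_subagent(const char *o, int s, char *v)  { (void)o; (void)s; (void)v; return 0; } /* ServicesTable */

/* Dispatch table: one MIB branch per sub-agent. The MIB-2 prefix is
 * standard; the OSGi-MIB prefixes are invented placeholders. */
static const struct { const char *prefix; subagent_handler h; } dispatch[] = {
    { "1.3.6.1.2.1",     network_subagent   },
    { "1.3.6.1.3.201",   framework_subagent },
    { "1.3.6.1.3.202",   bundles_subagent   },
    { "1.3.6.1.3.200",   services_subagent  },
};

/* The request module redirects each object to the sub-agent whose
 * MIB branch contains the requested OID. */
static int route_request(const char *oid, int is_set, char *value)
{
    for (size_t i = 0; i < sizeof dispatch / sizeof dispatch[0]; i++)
        if (strncmp(oid, dispatch[i].prefix, strlen(dispatch[i].prefix)) == 0)
            return dispatch[i].h(oid, is_set, value);
    return -1; /* no sub-agent owns this OID: noSuchName */
}
```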
This bundle should also inherit from the OSGi management bundle, because that bundle has permission to access some features needed for remote management. First of all, an Access Control Module checks whether a specific manager has permission to perform a specific action on the MIB. This is important in OSGi environments because different actors can manage the OSGi platform, and they will have different permissions. A possible scenario is an operator that owns all management permissions and a gateway user with restricted permissions, but other scenarios with different manager profiles and permissions can be defined. In addition, there is a module where all the actions related to RFC 1157 are supported. This module is in charge of receiving the managers' requests (get or set), interpreting the messages, parsing the ASN.1 data into the required MIB objects and redirecting each object to the corresponding OSGi SNMP sub-agent. Then, when the sub-agents have retrieved (get) or established (set) the information for all objects, the module composes the response in order to send it to the manager. This module is also in charge of creating the trap messages that are sent as notifications to the managers. Sub-agents are in charge of retrieving, setting and updating data in the MIB database. Each sub-agent is dedicated to a specific task (network, services, common framework or common bundles). For example, the network sub-agent can retrieve and set information about network parameters and statistics (MIB-2 could be implemented here), but it cannot interpret the OSGi services. The MIB (presented in the next section) is likewise divided into different branches for each task, and each sub-agent is related to a specific MIB branch. Depending on the requested object OID (Object Identifier), the module in charge of the requests redirects it to the appropriate sub-agent, and each sub-agent gets, updates and sets the specific information. In this way, a common management interface is provided: managers do not need to know about the internal sub-agents. In addition, each sub-agent can be modified independently, without modifying the rest of the agent implementation. This modular solution provides extensibility and scalability. The scenario described is valid for SNMPv1, but other SNMP versions can be implemented based on it: SNMPv2 adds new primitives and a manager hierarchy, and SNMPv3 adds security. For OSGi gateways it is also necessary to implement security aspects, because it is critical that an attacker not be able to control the OSGi gateway.

IV. MIB DEFINITION PROPOSAL FOR OSGI GATEWAYS

Our new MIB includes the relevant information for managing the OSGi framework, services and common bundles. The MIB has been created according to an analysis of the OSGi (release 3) specification, selecting the parameters and information necessary for OSGi. The MIB does not include information about network management; this information can be taken from other public MIBs such as the MIB-2. Fig. 2 shows the general structure of our OSGi MIB. The OSGi MIB has to model dynamic situations in two scenarios: 1) the bundles, registered services, etc. that are available at a given moment in an OSGi gateway (this information changes over time); 2) the information related to a service, which is not known a priori, because any service, with any information, could be added to an OSGi platform.
![Fig. 2 General groups for the OSGi MIB](image)

The OSGi MIB is divided into four groups:

- **Framework**: provides information about the OSGi framework.
- **Common bundles**: this branch provides information about the usual common bundles in OSGi.
- **Services**: this branch provides information about the specific services present at a given moment. Each OSGi gateway can have different services at a concrete moment, and new services with unknown information can appear.
- **Traps**: this branch contains a table defining the notifications that the manager configures with respect to other objects of the OSGi MIB.

### A. Framework

![Fig. 3 Framework MIB branch](image)

The object in bold type is the index for the table. There are as many row instances as installed bundles (at a concrete moment). It is a dynamic table in the sense that the number of installed bundles may change at any time. Thanks to this table, a manager can see which bundles are installed, together with all the information related to each bundle. The BundleTable column objects are read-only and represent information about a specific bundle: Bactivator (the class name for starting and stopping the bundle), Bcategory (all the category names), Bclasspath (all the JAR file names that should be searched), Bcontactaddress (the contact address of the bundle's vendor), Bcopyright (copyright specification of the bundle), Bdescription (description of the bundle), BdocURL (the URL where the documentation of the bundle is located), Bname (name of the bundle), Bnativecode (a specification of native code contained in this bundle), BreqEE (all the execution environments required by this bundle), Bupdatelocation (the location from which to retrieve the updated JAR file), Bvendor (bundle vendor), Bversion (bundle version), Bdynamicimport (all package names to import dynamically), Bexport (all package names that can be exported), and Bstate, the present state of the bundle (it can be resolved, started, stopped, or active). The BundleActions branch contains the objects to install, uninstall, start, stop and update the bundles of an OSGi gateway and to activate persistent storage for a bundle. There is one branch under BundleActions for each of these actions. Fig. 4 shows the install branch under the BundleActions node.

![Fig. 4 Install bundle action branch](image_url)

For example, for the install action, first the StringLocation object (which is read-write) should be written with the URL of the bundle location; next the SetInstall object (also read-write) should be written with a '1'. By means of this, the bundle at location StringLocation will be installed in the OSGi gateway. Finally, the error code should be checked: errorCode (read-only) is a string containing the id of the last bundle whose installation was attempted and, in case of an unsuccessful operation, the error code. When an install operation succeeds, the BundleTable is modified: a new entry corresponding to the newly added bundle is inserted. The uninstall operation is analogous to the install and, when successfully applied, removes an entry from the BundleTable. Start and stop operations are also analogous to the install, but the parameter that must be set before the operation is the bundle identifier instead of the location; furthermore, a start or stop operation affects the state object of the existing bundle inside the BundleTable. The update operation updates the data of a BundleTable entry but does not create a new one, and the Persistent Storage operation enables persistent storage for a bundle without modifying the BundleTable. A sketch of the install sequence as seen from the manager side is given below.
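As a concrete illustration of the install sequence, here is a hedged sketch of the manager side written against the net-snmp C API. The numeric OIDs for StringLocation and SetInstall are invented for the example (the paper does not assign them); a real deployment would take them from the OSGi-MIB module, and would read errorCode back with a GET afterwards.

```c
#include <net-snmp/net-snmp-config.h>
#include <net-snmp/net-snmp-includes.h>
#include <string.h>

/* Placeholder OIDs for the install branch; the OSGi-MIB in the
 * paper does not fix the numbering. */
#define OID_STRING_LOCATION "1.3.6.1.3.201.2.1.1"
#define OID_SET_INSTALL     "1.3.6.1.3.201.2.1.2"

/* Send one SET request for a single object. Returns 0 on success. */
static int snmp_set(netsnmp_session *ss, const char *oid_str,
                    char type, const char *value)
{
    oid name[MAX_OID_LEN];
    size_t name_len = MAX_OID_LEN;
    netsnmp_pdu *pdu = snmp_pdu_create(SNMP_MSG_SET);
    netsnmp_pdu *response = NULL;
    int ok;

    read_objid(oid_str, name, &name_len);
    snmp_add_var(pdu, name, name_len, type, value);
    ok = (snmp_synch_response(ss, pdu, &response) == STAT_SUCCESS &&
          response && response->errstat == SNMP_ERR_NOERROR);
    if (response)
        snmp_free_pdu(response);
    return ok ? 0 : -1;
}

int main(void)
{
    netsnmp_session session, *ss;

    init_snmp("osgi-manager");        /* library initialization */
    snmp_sess_init(&session);         /* set defaults */
    session.peername = strdup("gateway.example.com");
    session.version = SNMP_VERSION_1; /* the paper's base scenario is SNMPv1 */
    session.community = (u_char *)"private";
    session.community_len = strlen("private");

    ss = snmp_open(&session);
    if (!ss)
        return 1;

    /* 1) write the bundle URL, 2) trigger the install. */
    snmp_set(ss, OID_STRING_LOCATION, 's', "http://example.com/bundle.jar");
    snmp_set(ss, OID_SET_INSTALL, 'i', "1");

    snmp_close(ss);
    return 0;
}
```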
On the other hand, the ServiceTable contains information about all the services present in an OSGi gateway, in a similar way as the BundleTable contains information about the bundles. For the ServiceTable, there are as many row instances as services. The column objects of the ServiceTable are all read-only and represent information about the services: Sidentifier (the identifier of the service), Sdescription (a short description of the service), Srank (the ranking of the registered service), Svendor (the service vendor), Sstate (the service state at a given moment; its possible values are modified, registered and unregistered), SOIDBundle (the OID, in the BundleTable, of the bundle that owns this service) and SOIDMIB (the OID of the root node where the sub-MIB for the service is located; all the specific information about this service lies under this OID). The ServicesActions branch has two sub-branches, analogous to the BundleActions, in order to register and unregister services. When a register or unregister operation is performed, Sstate is changed in the ServicesTable.

B. CommonBundles

This subsection analyzes which of the common bundles in the OSGi v3 specification can be managed. Under the CommonBundles branch, each bundle or package that makes sense to manage is located as a sub-branch. These are the ones to be managed:

- **Package Admin**: it has two sub-branches in the MIB. The first sub-branch contains a table listing all the packages of all the bundles of the OSGi gateway. This table has a column called POIDBundle, which stores the OID of the bundle the package is exported from. The second sub-branch is related to the refresh action and contains two scalar objects: BundleOID, the OID of a bundle, and setrefresh, the object that activates the refresh action on all the exported packages of the specified bundle.
- **Start Level**: this manages the start level of each bundle and of the OSGi framework. It implies adding two new objects: StartLevel (a read-only integer) in the BundleTable, which represents the start level of a specific bundle, and SystemStartLevel (a read-write integer) in the EnvironmentTable, which represents the start level of the entire system framework. Furthermore, under the StartLevel branch there are three scalar objects: 1) SLOIDbundle, a read-write parameter that indicates the OID of a bundle in the BundleTable; 2) SLStartLevelnumber, a read-write integer that indicates the start level to be established in a bundle; 3) SLsetStartlevel, a read-write parameter that, when set to '1', sets the new start level number (SLStartLevelnumber) in the bundle (SLOIDbundle).
- **Permissions Admin**: this establishes permissions for all the bundles. It has two sub-branches: 1) a PermissionsTable, which has as many entries as there are established permissions; each permission contains a name, an action, the OID of the bundle the permission refers to and the type of permission (default or associated to a bundle); 2) PermissionsAction, which contains all the objects necessary to add, delete and update the permissions in the PermissionsTable.
- **LogsTable**: this retrieves information about the different logs in the system. The table has as many entries as registered logs. Its columns are read-only: Llevel (level of the log), LOIDbundle (the OID of the bundle associated with the log), Lexception (a string identifying the exception associated with an error, if any), LOIDService (a string identifying the service associated with the log), Lmessage (the specific log message) and Ltime (the time at which the event associated with the log happened).
- **Configuration Admin**: this sets configuration parameters for a specific service when it is registered. There are two branches, corresponding to two different configuration objects: 1) the SingletonConfigurationTable, which stores the single configurations for services. Each entry is one configuration property of a particular service. The index of the table is composed of CAOIDService (a read-write parameter that is the OID of the pid of the referred service) and CAorderofproperty (a read-only parameter that is the order of this property inside the configuration properties of the specific service). The configuration of a specific service is defined by the union of all the entries of the table that share the CAOIDService of that service. Each entry of the table has a set of read-write column parameters related to the property: CAname (the name), CAdescription (the description), CAType (the SNMP type) and CAvalue (the value); 2) the FactoryConfigurationsTable, which stores the factory configurations for services.
- **User Admin**: this adds users, groups and groups associated with allowed actions. In our proposal there are three tables defining, respectively, users, groups and actions. There are also scalar objects for creating, deleting and updating the entries of the users, groups and actions tables. Furthermore, there is a mapping table in the MIB that relates users to groups; a user can be in several groups. The mapping is done thanks to the OID columns related to users and groups. There is also a table associating groups with actions, which maps the OIDs of actions to the OIDs of groups; a field of this table indicates whether the relationship is Basic or Required, according to the OSGi specification.

C. Services

The previous ServicesTable provides information for viewing all the services that are in an OSGi gateway at a concrete moment, but it carries no information for managing the specific parameters of a specific service. For example, a temperature sensor could be a service for which additional information, such as the temperature of a house room at a concrete moment, may be required. This type of information can hardly be static, because OSGi platforms are not restricted to a predefined set of services and parameters to manage: each service can have its own proprietary set of parameters, and new services can be registered dynamically in OSGi platforms. Our MIB should take this behaviour into account. Fig. 5 shows the services group for the case where all the parameters of the service are scalars.

![Fig. 5 Services group when all the service parameters are scalars](image)

At a concrete moment, only the services that are in the ServicesTable can be managed. Each of these services has a specific set of parameters to be managed. The sub-MIB in charge of holding the particular information about one service is referenced by the SOIDMIB column of the ServicesTable. The SOIDMIB (that is, Services.X) indicates the root OID under which all the information is stored. It is not possible for two different services to have the same SOIDMIB (so two different services have a different X number). Fig. 5 shows the services group that takes the dynamic service environment into account. There is a set of read-only objects that provides all the information about the next parameter of the service: nextelement (the name of the next parameter; if no name is supplied, there are no more parameters for the service), nexttype (the SNMP type of the next parameter), nextdescription (the description of the next parameter) and nextaccess (the access of the next parameter: read-only, write-only, read-write or not-accessible). Parameters associated with a service are enumerated until the corresponding nextelement is '0', as sketched below.
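The enumeration can be made concrete with a small C sketch. It assumes, as in the temperature-sensor example that follows, that each parameter occupies five consecutive objects (name, type, description, access, value); the `snmp_get_string` stub stands in for a real SNMP GET, which an actual manager would issue over the wire.

```c
#include <stdio.h>
#include <string.h>

/* Placeholder for a real SNMP GET of a string object; returning "0"
 * makes the walk below terminate immediately. */
static const char *snmp_get_string(const char *oid_str)
{
    (void)oid_str;
    return "0";
}

/* Enumerate the parameters of the service rooted at Services.X.
 * Each parameter is assumed to occupy five consecutive objects
 * (name, type, description, access, value); enumeration stops when
 * the next name (nextelement) is "0". */
static void list_service_parameters(const char *service_root)
{
    char oid_str[128];
    for (int k = 0; ; k++) {
        snprintf(oid_str, sizeof oid_str, "%s.%d", service_root, 5 * k + 1);
        const char *name = snmp_get_string(oid_str);   /* nextelement */
        if (name == NULL || strcmp(name, "0") == 0)
            break;                                     /* no more parameters */
        snprintf(oid_str, sizeof oid_str, "%s.%d", service_root, 5 * k + 5);
        printf("%s = %s\n", name, snmp_get_string(oid_str)); /* value */
    }
}

int main(void)
{
    list_service_parameters("1.3.6.1.3.200.16");  /* i.e., Services.16 */
    return 0;
}
```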
It should be noted that, with this MIB group, a specific service can change dynamically over time, adding or removing objects; furthermore, the number, names, etc. of the parameters of a service are not predefined. As a particular service example, consider a temperature sensor service added to the OSGi platform with "temperature value", "vendor" and "precision_level" as parameters. The ServicesTable would then have an entry representing this service. We would get the SOIDMIB for this entry; suppose it is .1.3.6.1.3.200.16 = Services.16. Then the values in this services group would be:

```
Services.16.1="temperature value"
Services.16.2="INTEGER"
Services.16.3="Temperature in Celsius degrees"
Services.16.4="read-only"
Services.16.5="27"
Services.16.6="vendor"
Services.16.7="OCTET STRING"
Services.16.8="temperature sensor vendor"
Services.16.9="read-only"
Services.16.10="ACZ"
Services.16.11="precision_level"
Services.16.12="INTEGER (0..4)"
Services.16.13="Higher number, more precision"
Services.16.14="read-only"
Services.16.15="4"
```

D. Traps

The traps are related to two alarm SNMP tables that are under the trap node. The trap conditions can be configured thanks to these two tables. Traps are related to the different OSGi-MIB objects: a trap is sent from the agent to the manager when a certain OSGi-MIB object falls within a certain value range. Any string or integer object of the OSGi-MIB (for example, the state of a bundle or the state of a service) can be used for configuring a trap. One table is dedicated to string objects and the other to integer ones. These two tables allow traps on MIB object conditions to be set dynamically.

V. CONCLUSION

This paper motivates the necessity of SNMP management for OSGi platforms, thus allowing a scalable remote management of complex devices in a dynamically changing environment. In order to achieve this objective, SNMP agent implementation details have been described, and a new OSGi-MIB has been designed that includes all the relevant information to be managed in an OSGi platform. The MIB is based on the relevant information described in the OSGi v3 specification. The dynamic nature of OSGi regarding bundles, packages, services, etc. requires a special MIB; our OSGi-MIB proposal supports such a dynamic management environment without requiring the SNMP manager to reload the initial MIB.

REFERENCES
{"Source-Url": "https://waset.org/publications/15012/design-of-an-snmp-agent-for-osgi-service-platforms", "len_cl100k_base": 5120, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 18905, "total-output-tokens": 5900, "length": "2e12", "weborganizer": {"__label__adult": 0.0002644062042236328, "__label__art_design": 0.0003769397735595703, "__label__crime_law": 0.00033664703369140625, "__label__education_jobs": 0.0004756450653076172, "__label__entertainment": 7.808208465576172e-05, "__label__fashion_beauty": 0.00012755393981933594, "__label__finance_business": 0.0004680156707763672, "__label__food_dining": 0.00025272369384765625, "__label__games": 0.0004012584686279297, "__label__hardware": 0.003795623779296875, "__label__health": 0.00041365623474121094, "__label__history": 0.0002963542938232422, "__label__home_hobbies": 0.00010633468627929688, "__label__industrial": 0.0006556510925292969, "__label__literature": 0.0001844167709350586, "__label__politics": 0.0002312660217285156, "__label__religion": 0.00036716461181640625, "__label__science_tech": 0.1357421875, "__label__social_life": 7.43865966796875e-05, "__label__software": 0.038604736328125, "__label__software_dev": 0.81591796875, "__label__sports_fitness": 0.00024080276489257812, "__label__transportation": 0.000530242919921875, "__label__travel": 0.00020599365234375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25995, 0.0322]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25995, 0.31983]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25995, 0.8945]], "google_gemma-3-12b-it_contains_pii": [[0, 4621, false], [4621, 10001, null], [10001, 11840, null], [11840, 17285, null], [17285, 22176, null], [22176, 25995, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4621, true], [4621, 10001, null], [10001, 11840, null], [11840, 17285, null], [17285, 22176, null], [22176, 25995, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25995, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25995, null]], "pdf_page_numbers": [[0, 4621, 1], [4621, 10001, 2], [10001, 11840, 3], [11840, 17285, 4], [17285, 22176, 5], [22176, 25995, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25995, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
75b957d9345811c1789c7f6b18c4ec756c05a2c4
1 Purpose nag_zgebal (f08nvc) balances a complex general matrix in order to improve the accuracy of computed eigenvalues and/or eigenvectors. 2 Specification ```c #include <nag.h> #include <nagf08.h> void nag_zgebal (Nag_OrderType order, Nag_JobType job, Integer n, Complex a[], Integer pda, Integer *ilo, Integer *ihi, double scale[], NagError *fail) ``` 3 Description nag_zgebal (f08nvc) balances a complex general matrix $A$. The term 'balancing' covers two steps, each of which involves a similarity transformation of $A$. The function can perform either or both of these steps. 1. The function first attempts to permute $A$ to block upper triangular form by a similarity transformation: $$ PAP^T = A' = \begin{pmatrix} A'_{11} & A'_{12} & A'_{13} \\ 0 & A'_{22} & A'_{23} \\ 0 & 0 & A'_{33} \end{pmatrix} $$ where $P$ is a permutation matrix, and $A'_{11}$ and $A'_{33}$ are upper triangular. Then the diagonal elements of $A'_{11}$ and $A'_{33}$ are eigenvalues of $A$. The rest of the eigenvalues of $A$ are the eigenvalues of the central diagonal block $A'_{22}$, in rows and columns $i_{lo}$ to $i_{hi}$. Subsequent operations to compute the eigenvalues of $A$ (or its Schur factorization) need only be applied to these rows and columns; this can save a significant amount of work if $i_{lo} > 1$ and $i_{hi} < n$. If no suitable permutation exists (as is often the case), the function sets $i_{lo} = 1$ and $i_{hi} = n$, and $A'_{22}$ is the whole of $A$. 2. The function applies a diagonal similarity transformation to $A'$, to make the rows and columns of $A'_{22}$ as close in norm as possible: $$ A'' = D A' D^{-1} = \begin{pmatrix} I & 0 & 0 \\ 0 & D_{22} & 0 \\ 0 & 0 & I \end{pmatrix} \begin{pmatrix} A'_{11} & A'_{12} & A'_{13} \\ 0 & A'_{22} & A'_{23} \\ 0 & 0 & A'_{33} \end{pmatrix} \begin{pmatrix} I & 0 & 0 \\ 0 & D_{22}^{-1} & 0 \\ 0 & 0 & I \end{pmatrix}. $$ This scaling can reduce the norm of the matrix (i.e., $\|A''_{22}\| < \|A'_{22}\|$) and hence reduce the effect of rounding errors on the accuracy of computed eigenvalues and eigenvectors. 4 References 5 Arguments 1: \textbf{order} – Nag_OrderType \hspace{1cm} \textit{Input} \textit{On entry}: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by order = Nag_RowMajor. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument. \textit{Constraint}: order = Nag_RowMajor or Nag_ColMajor. 2: \textbf{job} – Nag_JobType \hspace{1cm} \textit{Input} \textit{On entry}: indicates whether \(A\) is to be permuted and/or scaled (or neither). \textbf{job} = Nag_DoNothing \(A\) is neither permuted nor scaled (but values are assigned to \(ilo\), \(ihi\) and \(scale\)). \textbf{job} = Nag_Permute \(A\) is permuted but not scaled. \textbf{job} = Nag_Scale \(A\) is scaled but not permuted. \textbf{job} = Nag_DoBoth \(A\) is both permuted and scaled. \textit{Constraint}: \(job = \) Nag_DoNothing, Nag_Permute, Nag_Scale or Nag_DoBoth. 3: \(n\) – Integer \hspace{1cm} \textit{Input} \textit{On entry}: \(n\), the order of the matrix \(A\). \textit{Constraint}: \(n \geq 0\). 4: \(a[\text{dim}]\) – Complex \hspace{1cm} \textit{Input/Output} \textbf{Note}: the dimension, \(dim\), of the array \(a\) must be at least \(\max(1, pda \times n)\). 
Where \(A(i, j)\) appears in this document, it refers to the array element \(a[(j-1) \times pda + i - 1]\) when \(order =\) Nag_ColMajor, or \(a[(i-1) \times pda + j - 1]\) when \(order =\) Nag_RowMajor. \textit{On entry}: the \(n\) by \(n\) matrix \(A\). \textit{On exit}: \(a\) is overwritten by the balanced matrix. If \(job =\) Nag_DoNothing, \(a\) is not referenced. 5: \(pda\) – Integer \hspace{1cm} \textit{Input} \textit{On entry}: the stride separating row or column elements (depending on the value of \(order\)) in the array \(a\). \textit{Constraint}: \(pda \geq \max(1, n)\). 6: \(ilo\) – Integer * \hspace{1cm} \textit{Output} 7: \(ihi\) – Integer * \hspace{1cm} \textit{Output} \textit{On exit}: the values \(i_{lo}\) and \(i_{hi}\) such that on exit \(A(i, j)\) is zero if \(i > j\) and \(1 \leq j < i_{lo}\) or \(i_{hi} < i \leq n\). If \(job =\) Nag_DoNothing or Nag_Scale, \(i_{lo} = 1\) and \(i_{hi} = n\). 8: \(scale[n]\) – double \hspace{1cm} \textit{Output} \textit{On exit}: details of the permutations and scaling factors applied to \(A\). More precisely, if \(p_j\) is the index of the row and column interchanged with row and column \(j\), and \(d_j\) is the scaling factor used to balance row and column \(j\), then

$$ scale[j-1] = \begin{cases} p_j, & j = 1, 2, \ldots, i_{lo} - 1 \text{ and } j = i_{hi} + 1, \ldots, n \\ d_j, & j = i_{lo}, i_{lo} + 1, \ldots, i_{hi}. \end{cases} $$

The order in which the interchanges are made is $n$ to $i_{hi} + 1$, then $1$ to $i_{lo} - 1$. 9: fail – NagError * The NAG error argument (see Section 3.6 in the Essential Introduction). 6 Error Indicators and Warnings NE_ALLOC_FAIL Dynamic memory allocation failed. See Section 3.2.1.2 in the Essential Introduction for further information. NE_BAD_PARAM On entry, argument ⟨value⟩ had an illegal value. NE_INT On entry, $n = ⟨value⟩$. Constraint: $n \geq 0$. On entry, $pda = ⟨value⟩$. Constraint: $pda > 0$. NE_INT_2 On entry, $pda = ⟨value⟩$ and $n = ⟨value⟩$. Constraint: $pda \geq \max (1, n)$. NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. An unexpected error has been triggered by this function. Please contact NAG. See Section 3.6.6 in the Essential Introduction for further information. NE_NO_LICENCE Your licence key may have expired or may not have been installed correctly. See Section 3.6.5 in the Essential Introduction for further information. 7 Accuracy The errors are negligible, compared with those in subsequent computations. 8 Parallelism and Performance nag_zgebal (f08nvc) is not threaded by NAG in any implementation. nag_zgebal (f08nvc) makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information. Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this function. Please also consult the Users' Note for your implementation for any additional implementation-specific information. 9 Further Comments If the matrix $A$ is balanced by nag_zgebal (f08nvc), then any eigenvectors computed subsequently are eigenvectors of the matrix $A''$ (see Section 3) and hence nag_zgebak (f08nwc) must then be called to transform them back to eigenvectors of $A$. If the Schur vectors of $A$ are required, then this function must not be called with job = Nag_Scale or Nag_DoBoth, because then the balancing transformation is not unitary. If this function is called with job = Nag_Permute, then any Schur vectors computed subsequently are Schur vectors of the matrix $A''$, and nag_zgebak (f08nwc) must be called (with side = Nag_RightSide) to transform them back to Schur vectors of $A$. The total number of real floating-point operations is approximately proportional to $n^2$. The real analogue of this function is nag_dgebal (f08nhc).
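As a quick orientation before the full example program in Section 10, the following minimal sketch (ours, not part of the NAG document) shows the bare call sequence implied by the specification in Section 2; the matrix values are arbitrary.

```c
#include <stdio.h>
#include <nag.h>
#include <nagf08.h>

int main(void)
{
    /* Balance a small complex matrix in place; a is stored
     * column-major with stride pda (see Section 5). */
    Integer n = 2, pda = 2, ilo, ihi;
    Complex a[4] = { {1.0, -2.0}, {0.5, 0.0}, {3.0, 1.0}, {-1.0, 0.25} };
    double scale[2];
    NagError fail;

    INIT_FAIL(fail);
    /* Permute and scale (job = Nag_DoBoth). */
    nag_zgebal(Nag_ColMajor, Nag_DoBoth, n, a, pda, &ilo, &ihi, scale, &fail);
    if (fail.code != NE_NOERROR) {
        printf("Error from nag_zgebal (f08nvc).\n%s\n", fail.message);
        return 1;
    }
    printf("ilo = %" NAG_IFMT ", ihi = %" NAG_IFMT "\n", ilo, ihi);
    return 0;
}
```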
10 Example

This example computes all the eigenvalues and right eigenvectors of the matrix $A$, where

$$A = \begin{pmatrix} 1.50 - 2.75i & 0.00 + 0.00i & 0.00 + 0.00i & 0.00 + 0.00i \\ -8.06 - 1.24i & -2.50 - 0.50i & 0.00 + 0.00i & -0.75 + 0.50i \\ -2.09 + 7.56i & 1.39 + 3.97i & -1.25 + 0.75i & -4.82 - 5.67i \\ 6.18 + 9.79i & -0.92 - 0.62i & 0.00 + 0.00i & -2.50 - 0.50i \end{pmatrix}.$$

The program first calls nag_zgebal (f08nvc) to balance the matrix; it then computes the Schur factorization of the balanced matrix, by reduction to Hessenberg form and the $QR$ algorithm. Then it calls nag_ztrevc (f08qxc) to compute the right eigenvectors of the balanced matrix, and finally calls nag_zgebak (f08nwc) to transform the eigenvectors back to eigenvectors of the original matrix $A$.

10.1 Program Text

```c
#define VR(I, J) vr[(I-1)*pdvr + J - 1]
  order = Nag_RowMajor;
#endif

  INIT_FAIL(fail);
  printf("nag_zgebal (f08nvc) Example Program Results\n\n");
  /* Skip heading in data file */
#ifdef _WIN32
  scanf_s("%*[^\n]");
#else
  scanf("%*[^\n]");
#endif
#ifdef _WIN32
  scanf_s("%" NAG_IFMT "%*[^\n]", &n);
#else
  scanf("%" NAG_IFMT "%*[^\n]", &n);
#endif
  pda = n;
  pdh = n;
  pdvr = n;
  scale_len = n;
  tau_len = n;
  w_len = n;

  /* Allocate memory */
  if (!(a = NAG_ALLOC(n * n, Complex)) ||
      !(h = NAG_ALLOC(n * n, Complex)) ||
      !(scale = NAG_ALLOC(scale_len, double)) ||
      !(tau = NAG_ALLOC(tau_len, Complex)) ||
      !(vl = NAG_ALLOC(1 * 1, Complex)) ||
      !(vr = NAG_ALLOC(n * n, Complex)) ||
      !(w = NAG_ALLOC(w_len, Complex)) ||
      !(select = NAG_ALLOC(1, Nag_Boolean)))
  {
    printf("Allocation failure\n");
    exit_status = -1;
    goto END;
  }

  /* Read A from data file */
  for (i = 1; i <= n; ++i)
    for (j = 1; j <= n; ++j)
#ifdef _WIN32
      scanf_s(" ( %lf , %lf )", &A(i, j).re, &A(i, j).im);
#else
      scanf(" ( %lf , %lf )", &A(i, j).re, &A(i, j).im);
#endif

  /* Balance A: nag_zgebal (f08nvc) permutes and scales the matrix
   * and returns ilo, ihi and scale, used below by nag_zgehrd
   * (f08nsc) and nag_zgebak (f08nwc) */
  nag_zgebal(order, Nag_DoBoth, n, a, pda, &ilo, &ihi, scale, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_zgebal (f08nvc).\n%s\n", fail.message);
    exit_status = 1;
    goto END;
  }

  /* Reduce A to upper Hessenberg form H = (Q**H)*A*Q */
  /* nag_zgehrd (f08nsc) */
  nag_zgehrd(order, n, ilo, ihi, a, pda, tau, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_zgehrd (f08nsc).\n%s\n", fail.message);
    exit_status = 1;
    goto END;
  }

  /* Copy A to H and VR */
  for (i = 1; i <= n; ++i) {
    for (j = 1; j <= n; ++j) {
      H(i, j).re = A(i, j).re;
      H(i, j).im = A(i, j).im;
      VR(i, j).re = A(i, j).re;
      VR(i, j).im = A(i, j).im;
    }
  }

  /* Form Q explicitly, storing the result in VR */
  /* nag_zunghr (f08ntc).
   * Generate unitary transformation matrix from reduction to
   * Hessenberg form determined by nag_zgehrd (f08nsc)
   */
  nag_zunghr(order, n, 1, n, vr, pdvr, tau, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_zunghr (f08ntc).\n%s\n", fail.message);
    exit_status = 1;
    goto END;
  }

  /* Calculate the eigenvalues and Schur factorization of A */
  /* nag_zhseqr (f08psc).
   * Eigenvalues and Schur factorization of complex upper
   * Hessenberg matrix reduced from complex general matrix
   */
  nag_zhseqr(order, Nag_Schur, Nag_UpdateZ, n, ilo, ihi, h, pdh, w,
             vr, pdvr, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_zhseqr (f08psc).\n%s\n", fail.message);
    exit_status = 1;
    goto END;
  }

  printf(" Eigenvalues\n");
  for (i = 0; i < n; ++i)
    printf(" (%7.4f,%7.4f)", w[i].re, w[i].im);
  printf("\n");

  /* Calculate the eigenvectors of A, storing the result in VR */
  /* nag_ztrevc (f08qxc).
   * Left and right eigenvectors of complex upper triangular matrix
   */
  nag_ztrevc(order, Nag_RightSide, Nag_BackTransform, select, n, h, pdh,
             vl, 1, vr, pdvr, n, &m, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_ztrevc (f08qxc).\n%s\n", fail.message);
    exit_status = 1;
    goto END;
  }

  /* nag_zgebak (f08nwc).
   * Transform eigenvectors of complex balanced matrix to
   * those of original matrix supplied to nag_zgebal (f08nvc)
   */
  nag_zgebak(order, Nag_DoBoth, Nag_RightSide, n, ilo, ihi, scale, m,
             vr, pdvr, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_zgebak (f08nwc).\n%s\n", fail.message);
    exit_status = 1;
    goto END;
  }

  /* Normalize the eigenvectors */
  for (j = 1; j <= m; j++) {
    for (i = n; i >= 1; i--) {
      if (VR(i, j).re != 0 || VR(i, j).im != 0)
        firstnz = i;
    }
    for (i = n; i >= 1; i--)
      VR(i, j) = nag_complex_divide(VR(i, j), VR(firstnz, j));
  }

  /* Print eigenvectors */
  printf("\n");
  /* nag_gen_complx_mat_print_comp (x04dbc).
   * Print complex general matrix (comprehensive)
   */
  fflush(stdout);
  nag_gen_complx_mat_print_comp(order, Nag_GeneralMatrix, Nag_NonUnitDiag,
                                n, m, vr, pdvr, Nag_BracketForm, "%7.4f",
                                "Contents of array VR", Nag_IntegerLabels,
                                0, Nag_IntegerLabels, 0, 80, 0, 0, &fail);
  if (fail.code != NE_NOERROR) {
    printf("Error from nag_gen_complx_mat_print_comp (x04dbc).\n%s\n",
           fail.message);
    exit_status = 1;
    goto END;
  }

END:
  NAG_FREE(a);
  NAG_FREE(h);
  NAG_FREE(scale);
  NAG_FREE(tau);
  NAG_FREE(vl);
  NAG_FREE(vr);
  NAG_FREE(w);
  NAG_FREE(select);
  return exit_status;
```

10.2 Program Data

```
nag_zgebal (f08nvc) Example Program Data
4                                                      :Value of N
( 1.50, -2.75)  ( 0.00,  0.00)  ( 0.00,  0.00)  ( 0.00,  0.00)
(-8.06, -1.24)  (-2.50, -0.50)  ( 0.00,  0.00)  (-0.75,  0.50)
(-2.09,  7.56)  ( 1.39,  3.97)  (-1.25,  0.75)  (-4.82, -5.67)
( 6.18,  9.79)  (-0.92, -0.62)  ( 0.00,  0.00)  (-2.50, -0.50)  :End of matrix A
```

10.3 Program Results

```
nag_zgebal (f08nvc) Example Program Results

 Eigenvalues
 (-1.2500, 0.7500) (-1.5000,-0.4975) (-3.5000,-0.5025) ( 1.5000,-2.7500)
```

Contents of array VR

<table> <thead> <tr> <th></th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>( 0.0000, 0.0000)</td> <td>( 0.0000, 0.0000)</td> <td>( 0.0000, 0.0000)</td> <td>( 1.0000, 0.0000)</td> </tr> <tr> <td>2</td> <td>( 0.0000, 0.0000)</td> <td>( 1.0000,-0.0000)</td> <td>( 1.0000, 0.0000)</td> <td>(-1.4269,-1.6873)</td> </tr> <tr> <td>3</td> <td>( 1.0000, 0.0000)</td> <td>(-9.7405,-0.0846)</td> <td>( 0.6466, 1.5212)</td> <td>( 5.3497, 1.5369)</td> </tr> <tr> <td>4</td> <td>( 0.0000, 0.0000)</td> <td>(-0.9215,-0.6177)</td> <td>( 0.9215, 0.6177)</td> <td>(-0.0819, 3.0107)</td> </tr> </tbody> </table>
{"Source-Url": "https://www.nag.com/numeric/cl/nagdoc_cl25/pdf/f08/f08nvc.pdf", "len_cl100k_base": 4865, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 22638, "total-output-tokens": 5768, "length": "2e12", "weborganizer": {"__label__adult": 0.0003633499145507813, "__label__art_design": 0.0003123283386230469, "__label__crime_law": 0.0003943443298339844, "__label__education_jobs": 0.00036978721618652344, "__label__entertainment": 9.208917617797852e-05, "__label__fashion_beauty": 0.00015676021575927734, "__label__finance_business": 0.0001475811004638672, "__label__food_dining": 0.0005125999450683594, "__label__games": 0.00086212158203125, "__label__hardware": 0.0026073455810546875, "__label__health": 0.0005655288696289062, "__label__history": 0.00022542476654052737, "__label__home_hobbies": 0.00012034177780151369, "__label__industrial": 0.0006437301635742188, "__label__literature": 0.00017201900482177734, "__label__politics": 0.00027871131896972656, "__label__religion": 0.0005865097045898438, "__label__science_tech": 0.0655517578125, "__label__social_life": 8.207559585571289e-05, "__label__software": 0.009033203125, "__label__software_dev": 0.916015625, "__label__sports_fitness": 0.0004010200500488281, "__label__transportation": 0.000461578369140625, "__label__travel": 0.0002084970474243164}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13855, 0.0467]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13855, 0.49695]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13855, 0.60939]], "google_gemma-3-12b-it_contains_pii": [[0, 2244, false], [2244, 4814, null], [4814, 6588, null], [6588, 8610, null], [8610, 9758, null], [9758, 11664, null], [11664, 13142, null], [13142, 13855, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2244, true], [2244, 4814, null], [4814, 6588, null], [6588, 8610, null], [8610, 9758, null], [9758, 11664, null], [11664, 13142, null], [13142, 13855, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13855, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13855, null]], "pdf_page_numbers": [[0, 2244, 1], [2244, 4814, 2], [4814, 6588, 3], [6588, 8610, 4], [8610, 9758, 5], [9758, 11664, 6], [11664, 13142, 7], [13142, 13855, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13855, 0.04153]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
9411d4f10c16f25cc4e5a03f10055e3f5758baf1
Software Engineering Processes in Game Development: a Survey about Brazilian Developers' Experiences

Cristiano Politowski*, Daniel de Vargas†, Lisandra M. Fontoura†, Antônio A. Foletto§
Federal University of Santa Maria, Applied Computing Department (DCOM), Brazil

ABSTRACT

With the increasing participation of digital games in the economy and in our society, the attention given to this subject in the academic field has also increased. However, the software engineering field and, more precisely, game development processes seem to be forgotten by researchers. In addition, game developers and big game companies prefer to keep their processes and methodologies to themselves. Studies and professional reports have shown the "ugly face" behind the game industry: crunch time and heavy pressure during development are treated as normal practices in a game developer's life. In this work, we surveyed 58 Brazilian game developers about the relations between the game development process and problems in a software engineering context. We sought answers based on empirical data collected from the questionnaire. The goal was to understand the area and provide insights to improve game development, pointing a direction for future research. As a result, considering the Brazilian context, this paper presents three main contributions. The first shows that projects that used systematic approaches, regardless of the type, resulted in better products. The second shows that delays, unrealistic scope and lack of documentation are the most common problems faced by game developers. Finally, we describe insights and considerations gathered from developers and literature studies, which may serve as a source of knowledge as well as a characterization of Brazilian game developers.

Keywords: survey, game development process, game developer experience.

1 INTRODUCTION

The digital game industry is a billion-dollar market that has increased its revenue over the years. According to the market research company Newzoo [31], in 2016 this industry will generate about US$99.6 billion, 8.5% more than the previous year, with a prediction of US$118 billion for 2019. Although there is no consensus regarding the nature of digital games (whether or not they are software), game development has particular characteristics and problems which raise its complexity compared to traditional software development [4] [11]. Interviews conducted by Murphy-Hill et al. [29] stated that video game development has significant differences compared to traditional software development, while other authors [3] [11] say that to develop a video game is to develop software. Therefore, due to the higher difficulty of game development, combined with the multidisciplinarity of the professionals involved, some authors recommend the use of a Software Engineering (SE) methodology to manage and develop game projects [6]. Despite its decades of existence, development processes in the game industry, in general, seem not to have evolved as much as those in the traditional software community. Through postmortem analyses, Petrillo et al. [36][37] diagnosed several problems faced by developers worldwide during game projects, the most common being unrealistic scope, feature creep and cutting features. Moreover, Murphy-Hill et al. [29] show that the game industry, by not using systematic processes, lacks maturity.
This is explicit in the following experience report, gathered from one of the interviewees: "We've got so many specialists on the team, so the kind of planning that you usually do in Agile doesn't work quite so well... You know [specialists] are more concerned about the creative process than an engineering process". Likewise, in the IGDA annual report [46], 52% of the interviewees answered "yes" when asked whether crunch time was a necessary practice during game development.

Unlike previous works based on interviews [17] [5] [29] [32], surveys [30], postmortem analyses [38] [36] [37] [35] and general game industry reports focused on purely qualitative results (more about these works in Section 2) [8] [7] [27] [48] [47] [10] [44] [24], our work offers a new approach by surveying Brazilian game developers about the relations between the software engineering processes used, the problems faced and the projects' success rate. To do this, we used three research questions:

1. Is there a relation between the process used and the project's success?
2. Is there a relation between the process used and the problems faced by developers?
3. Is there a relation between the developers' experience and the project's success?

In the absence of a better source, we used empirical data analysis to search for evidence of why the game industry has not evolved, on the managerial side, like the software industry, and why so many game developers accept harmful practices. We believe that, in the context of developing a digital game, even if a game is software, game development may have specific elements and characteristics that favor these problems.

The restricted Brazilian market was chosen for this preliminary research. The next step is to expand the survey internationally, but first we decided to test our hypotheses with a small sample.

This paper is structured as follows. Section 2 presents the related works. Section 3 explains how we elaborated and conducted the survey. Section 4 shows the main and secondary results. Section 5 discusses the findings. Section 6 exposes some limitations of this work. Finally, Section 7 concludes with observations regarding the results of the survey and suggests future work.

2 RELATED WORK

Burger-Helmchen et al. [5] conducted interviews with eight developers, dividing them into three main communities (users, developers and testers), with the purpose of finding out how companies, developers and users' communities interact.

Kasurinen et al. [17] interviewed 27 game developers from Finland, spanning four different departments, investigating their expectations regarding the design and development tools they use. The results stated that they are satisfied with the tools and practices used, such as third-party engines that allow the team to focus on the game's core functionalities. Moreover, prototyping is the most common practice in the design phase. Another work by Kasurinen [16] assessed video game development from the viewpoint of software engineering: he interviewed 11 companies and conducted a survey to understand the differences between video game and software development. As a result, he stated that project management and development tasks are similar, but detailed activities, such as requirements engineering practices, are different. Moreover, he argues that the current SE literature does not offer a basis to improve video game development, although Scrum can be suited to it.
O'Hagan et al. [32] conducted a case study with quantitative interviews, which found that there is no good-practice model for game development and that an approach based on ISO/IEC 29110 could be beneficial for the game industry.

Murphy-Hill et al. [29], together with a Microsoft SE research group, interviewed 14 developers with at least two years of experience in both digital games and traditional software, trying to clarify whether game development differs from traditional software development. The results showed that, due to subjective requirements, developing a game is different from developing traditional software.

Schultz [42] discussed, through bibliographical and documentary research, four topics about video games: the traditional cultural industry and the digital game industry; market segmentation; business models; and video game classification.

Schetinger et al. [41] proposed to extend user stories, an Agile practice, to provide better documentation and communication within the game development team. They came up with a framework ("three Rs") providing a minimal structure to encapsulate common information.

Politowski et al. [38] analyzed 20 video game postmortems, finding that agile (and, to a lesser degree, waterfall) processes are the most used approaches in video game development.

A work similar to ours is that of Musil et al. [30]. They applied an online questionnaire to 13 Austrian companies, asking which software processes and practices are used as well as about the problems faced by developers. They state that agile software processes, like Scrum, are widely used and that the most common problems are crunch time and feature creep.

Our study was influenced by the works mentioned above, but differs by merging the software engineering discipline, more precisely software processes, with game development. There are other empirical, developer-centered studies with data originating from questionnaires. The variables measured are demography, diversity, quality of life, job experiences, structures and practices, trends and other metrics more focused on particular aspects of the game industry [8] [7] [27] [48] [47] [10] [44] [24].

3 METHOD

Our work is based on Grounded Theory [12] [45]. We searched for patterns in empirical data gathered from an online survey answered by game developers. To build the survey we roughly followed the guidelines provided by Kitchenham [20] [22] [23] [18] [21] [19]. The author describes ten steps to conduct a survey, from its conception to the results. We used the steps described below.

3.1 Setting specific, measurable objectives

Initially, we elaborated a set of four main objectives and three secondary ones. These items are based on the research questions and were used to formulate the questions for the questionnaire. The main objectives are the following:

1. Gather a list of process types used by developers, regardless of the period.
2. Gather the success rate of each process type in every project.
3. Gather a list of the most common problems faced by game developers for each process type.
4. Gather information about game developers' experience in years and whether they have ever developed traditional software.

We asked research participants to consider as "successful" a project that had few problems, bugs and reworks, was delivered on time (or nearly so) and without a large budget increase. In this sense, success has nothing to do with sales, critics or user reception; it is related to the development process.
Moving forward, the secondary objectives are the following:

- Gather game developers' opinions about the importance and adoption of Software Engineering in game development.
- Gather game developers' opinions about the differences between building a game and a traditional software product.
- Gather game developers' adoption rate of each type of process.

3.2 Planning and scheduling the survey

Our idea was to gather as many samples as possible in a restricted community. So, we defined the Brazilian game industry as our target and, consequently, Brazilian game developers. As said before, we decided to work with a small scope because expanding to the international community would require more time to apply the questionnaire and also to analyze the data collected. Moreover, to test our hypotheses, a small sample should be sufficient. Despite this limited scope, the data can be reused later by other researchers with similar interests.

We decided to use an online questionnaire, provided by Google Forms, to build the questions and to send and receive the developers' answers. The survey was scheduled to run from May 23 to June 6th.

3.3 Designing the survey

We designed the survey so that it could answer the objectives explained above in Section 3.1. With this in mind, we divided the processes into four categories, according to their nature: Agile, Predictive, Ad-hoc and No-process at all.

A process is Agile if the software is built in an iterative approach with continuous process improvement [9]. Developing with Agile means using small cycles to deliver ready-to-use features in each iteration [26]. Examples of this kind of process are Scrum [43], Extreme Programming (XP) [1], Kanban [33], Adaptive Software Development (ASD) [14] and Feature Driven Development (FDD) [34].

**Table 1:** Most common problems in game development. Adapted from [37].

<table>
<thead>
<tr> <th>Problem</th> <th>Frequency</th> </tr>
</thead>
<tbody>
<tr> <td>Unrealistic scope</td> <td>75%</td> </tr>
<tr> <td>Feature Creep</td> <td>75%</td> </tr>
<tr> <td>Cutting features</td> <td>70%</td> </tr>
<tr> <td>Design problems</td> <td>65%</td> </tr>
<tr> <td>Delays</td> <td>65%</td> </tr>
<tr> <td>Technological problems</td> <td>60%</td> </tr>
<tr> <td>Crunch time</td> <td>45%</td> </tr>
<tr> <td>Lack of Documentation</td> <td>40%</td> </tr>
<tr> <td>Communication problems</td> <td>35%</td> </tr>
<tr> <td>Tool problems</td> <td>35%</td> </tr>
<tr> <td>Test problems</td> <td>35%</td> </tr>
<tr> <td>Team building</td> <td>35%</td> </tr>
<tr> <td>Number of defects</td> <td>30%</td> </tr>
<tr> <td>Loss of Professionals</td> <td>25%</td> </tr>
<tr> <td>Over Budget</td> <td>25%</td> </tr>
</tbody>
</table>

Predictive processes derive from Waterfall. They are composed of a set of sequential phases, each of which must be completely finished before the next one starts. As a consequence, the product value is delivered entirely at the deadline, demanding that requirements be defined up front [40]. Examples of this kind of process are Waterfall [2] and the Rational Unified Process (RUP) [25].

We defined as Ad-hoc those processes that fit neither Agile nor Predictive. Processes that were heavily customized for the company/team needs are considered Ad-hoc too. For those who never used software processes in game projects we defined a last category, called No-process or "code-and-fix approach".

The whole questionnaire is divided into five sections.
To start, in section #1, we asked developers about their academic and technical background, their opinions on the importance of software engineering, and the differences between developing software and a digital game. The remaining four sections (#2, #3, #4, #5) correspond to each process category: Agile, Predictive, Ad-hoc and No-process. First we asked whether the developer had experience developing games with that particular process type. If the answer was "yes", the respondent was redirected to a new section with specific questions regarding that process, like the process types used, number of projects, success rate and problems faced. If the respondent did not have experience with a given process, he/she was redirected to the next category. This flow can be better visualized in Figure 1 and in the complete questionnaire.^5

In order to populate the questionnaire with problems related to game development, we used the list provided by Petrillo *et al.* [37] [36], described in Table 1, in which the authors gathered, from twenty postmortem analyses, the most common problems reported by game developers. In each questionnaire section, the respondent should mark the three most common problems that occurred during all his/her experience as a game developer. The problems are listed in Table 3.

**3.4 Validating the instrument**

The form validation occurred in two stages. First, we asked software engineering professors to analyze the questions' correctness. Second, we sent the questionnaire to game developers to analyze its usability and understandability.

---
^5 The complete questionnaire can be visualized on the project's website: http://polako.github.io/gamedev-process-survey/survey-form-export.pdf
^6 The list of video game associations and companies, in CSV format, can be visualized on the project's website: http://polako.github.io/gamedev-process-survey
---

**Table 2:** Relation between process used and project success

<table>
<thead>
<tr> <th></th> <th>Agile</th> <th>Predictive</th> <th>Ad-hoc</th> <th>No-process</th> </tr>
</thead>
<tbody>
<tr> <td>Failure</td> <td>56</td> <td>22</td> <td>7</td> <td>39</td> </tr>
<tr> <td>Success</td> <td>236</td> <td>86</td> <td>54</td> <td>76</td> </tr>
<tr> <td>Success rate</td> <td>80.82%</td> <td>79.63%</td> <td>88.52%</td> <td>66.09%</td> </tr>
<tr> <td>Failure rate</td> <td>19.18%</td> <td>20.37%</td> <td>11.48%</td> <td>33.91%</td> </tr>
</tbody>
</table>

---

**3.5 Selecting participants**

We started this step by searching for game developer and software associations using the Google search tool. We searched for the following strings: "Associações de desenvolvedores de jogos do <estado>", "APL audiovisual do <estado/região>", "APL software do <estado/região>" and "Festival de desenvolvimento de jogos do Brasil" (i.e., game developer associations, audiovisual/software clusters per state/region, and Brazilian game development festivals). As a result, we tabulated 125 different associations, with name, page URL, summary and observations for each one.

Then, using this list, we searched for game development companies. The result was 347 different companies, tabulated with "name", "page", "url", "state", "city", "contact", "description" and "observations". The companies, grouped by state, can be visualized in Figure 2.

The next step was to verify whether each company was active or inactive. This verification was made through contact with the company by email or social network.^6 A total of 253 firms answered, of which 236 said they were active and 17 inactive. The remaining 94 companies did not reply.
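For reference, the success rates reported in Table 2 above are simple ratios of the projects reported as successful over all projects of that process type (a reconstruction from the table's values, not a formula stated explicitly by the authors):

$$\text{success rate} = \frac{\#\text{success}}{\#\text{success} + \#\text{failure}}, \qquad \text{e.g., Agile: } \frac{236}{236 + 56} \approx 80.82\%.$$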
**3.6 Administering and scoring the instrument**

Afterwards, we sent emails containing the questionnaire to all the active companies. Unfortunately, for technical reasons, 36 emails did not reach the companies. Moreover, we sent the questionnaire to developer groups in social networks, like "Game Developers" on LinkedIn and "Indie Game Developers Brasil" and "Game Experience Brazil" on Facebook.

---

**4 RESULTS**

Despite the high number of Brazilian game companies, we received only 62 replies from developers. From those replies, one sample was noisy (values different from the expected) and in three samples the answers contradicted each other. Even though the final sample size was 58, we obtained very interesting insights, which are described below.

In Figure 3 we can see the success rate of every process type. Projects using Ad-hoc processes, with a success rate of 88.52%, represent the best result, followed by Agile with 80.82%, Predictive with 79.63% and, lastly, No-process with 66.09%. When looking at the number of projects in each process category we noticed a disparity: there are more projects reported as Agile than as the other types, as seen in Table 2.

Another correlation relates the process used to the problems faced by developers. Table 3 lists the 15 problems and their occurrence in each process type; the last column shows the total occurrence of each problem across all the projects analyzed. In Agile the most common problems are Unrealistic scope with 15.76%, Delays with 14.55% and Communication problems with 10.91%. Predictive processes present Delays with 20.00%, and Unrealistic scope, Lack of Documentation, Communication problems and Test problems all with 8.57%. In Ad-hoc processes the most common problems are Delays with 17.14%, Lack of Documentation with 14.29%, and Unrealistic scope and Cutting features together with 11.43%. Lastly, when No-process is used, the three major problems are Unrealistic scope with 14.94%, Lack of Documentation with 11.49%, and Delays and Number of defects with 9.20%. In addition, if we consider all the projects, regardless of the process nature, we have Delays with 60.88%, followed by Unrealistic scope with 50.70% and Lack of Documentation with 44.05%. All the problems, grouped by process type, are shown in Figure 4.

Two questions were asked regarding the developers' experience: one about the time spent developing games and another about experience with traditional software development. Figure 5 shows the relation between the years developing games and the success rate of projects. Figure 6 shows whether experience with traditional applications influences the success rate of game projects. Surprisingly, developers without experience developing traditional software reported greater success using the No-process approach than those who have such experience. Nonetheless, the difference is not big enough to draw conclusions.

Besides the main results, other interesting findings could be highlighted; they are presented after Table 3.
**Table 3:** Relation between process used and the problems faced by developers.

<table>
<thead>
<tr> <th>Problem</th> <th>Agile</th> <th>Predictive</th> <th>Ad-hoc</th> <th>No-process</th> <th>Frequency</th> </tr>
</thead>
<tbody>
<tr> <td>Delays</td> <td>14.55%</td> <td>20.00%</td> <td>17.14%</td> <td>9.20%</td> <td>60.88%</td> </tr>
<tr> <td>Unrealistic scope</td> <td>15.76%</td> <td>8.57%</td> <td>11.43%</td> <td>14.94%</td> <td>50.70%</td> </tr>
<tr> <td>Lack of Doc.</td> <td>9.70%</td> <td>8.57%</td> <td>14.29%</td> <td>11.49%</td> <td>44.05%</td> </tr>
<tr> <td>Cutting features</td> <td>7.27%</td> <td>5.71%</td> <td>11.43%</td> <td>5.75%</td> <td>30.16%</td> </tr>
<tr> <td>Design problems</td> <td>6.06%</td> <td>7.14%</td> <td>8.57%</td> <td>6.90%</td> <td>28.67%</td> </tr>
<tr> <td>Com. problems</td> <td>10.91%</td> <td>8.57%</td> <td>2.86%</td> <td>5.75%</td> <td>44.05%</td> </tr>
<tr> <td>Crunch time</td> <td>6.67%</td> <td>7.14%</td> <td>5.71%</td> <td>8.05%</td> <td>27.57%</td> </tr>
<tr> <td>Feature Creep</td> <td>4.85%</td> <td>5.71%</td> <td>8.57%</td> <td>6.90%</td> <td>26.03%</td> </tr>
<tr> <td>Test problems</td> <td>7.27%</td> <td>5.71%</td> <td>2.86%</td> <td>6.90%</td> <td>25.60%</td> </tr>
<tr> <td>Num. of defects</td> <td>1.21%</td> <td>1.43%</td> <td>8.57%</td> <td>9.20%</td> <td>20.41%</td> </tr>
<tr> <td>Over Budget</td> <td>4.85%</td> <td>2.86%</td> <td>2.86%</td> <td>6.90%</td> <td>17.46%</td> </tr>
<tr> <td>Team building</td> <td>1.82%</td> <td>5.71%</td> <td>2.86%</td> <td>3.45%</td> <td>13.84%</td> </tr>
<tr> <td>Loss of Prof.</td> <td>5.45%</td> <td>5.71%</td> <td>0.00%</td> <td>1.15%</td> <td>12.32%</td> </tr>
<tr> <td>Tech. problems</td> <td>2.42%</td> <td>1.43%</td> <td>0.00%</td> <td>3.45%</td> <td>7.30%</td> </tr>
<tr> <td>Tool problems</td> <td>1.21%</td> <td>2.86%</td> <td>2.86%</td> <td>0.00%</td> <td>6.93%</td> </tr>
</tbody>
</table>

Figure 4: Aggregated data showing the most frequent problems, grouped by process type.

Figure 5: Relation between the developers' experience developing games and the percentage of project success.

Figure 6: Relation between the developers' experience with traditional software and the percentage of project success.

Figure 7: Game developers' experience using Agile, Predictive, Ad-hoc and No-process.

The first finding is that the most common approach for developing games is Agile, with 98.28% (57 samples) of respondents reporting at least one project made using this process type. With more than half of the answers, No-process is the second most used approach, with 51.72% (30 samples). Predictive, with 41.38% (24 samples), comes right after and, lastly, Ad-hoc, with 20.69% (12 samples). The data is shown in Figure 7.
Regarding SE, two questions were asked: the first about the importance of this field for game development; the second about how frequently SE practices are applied during a game project. The alternatives were on a 5-point scale, as Figure 8 shows.

5 DISCUSSION

Game development seems to be best suited to a customized approach, different from traditional software. This is evidenced by the highest success rate being achieved by the Ad-hoc process type. Yet, purely traditional software methods like Agile and Predictive (Waterfall) also had a high success rate, confirming the results gathered by Politowski et al. [38]. A clear result concerns the success rate of the No-process approach, which is the lowest, with roughly three times the failure rate of Ad-hoc. Although we expected a sharper difference among the process types regarding success rate, the data reveals that, even not being a standard in video game development nor a well-established practice, a systematic approach appears to deliver better products.

Comparing the most common problems reported by our respondents with those gathered by Petrillo [37], we may state that issues regarding video game scope are the major source of headaches for game developers. Normally, this problem comes together with feature creep; however, that is not the case here, because feature creep appears only in ninth place, with 26.03%. This goes in the opposite direction of what happens in the Austrian game industry where, as stated by Musil [30], crunch time and feature creep are the most common problems.

The second-place problem, Lack of documentation, has a high occurrence even in predictive methods (8.57%). Yet, it is in the Ad-hoc (14.29%) and No-process (11.49%) approaches that the numbers are greater. This shows that, despite the high adoption of agile approaches in game software development, documentation is an important artifact and should be considered a required step in video game project management.

Even so, Delays are the most common problem reported by the respondents, with a frequency of 60.88%. They can be caused by several factors. The root cause may be related to the requirements phase. This step is not trivial in a video game project, notwithstanding that the only real requirement is that the game must be "fun" [39] [35] [32] [28]. A better prototyping or brainstorming phase, together with special attention to documentation along the project life cycle, may mitigate this problem.

By grouping the most common problems in each process type, we can compute a correlation between them (Table 4). Surprisingly, Agile and Predictive share similar problems, with a correlation of 79.13%. On the other hand, problems are most alike in Ad-hoc and No-process, with a correlation of 71.35%.

Concerning developers' experience and project success rate, there is no clear relation between these two variables. The success rate reported by developers with less than one year of experience (50%) is similar to that of developers with the most experience (65.41%). Still, developers with no experience in traditional software show better results (success) using the No-process approach.

Strengthening the results provided by Politowski et al. [38] and Musil [30], agile appears as (by far) the most used process in video game development.
Since its beginnings in mid-2001, the agile culture has been spreading fast and, at a slower pace, game developers have been adopting its concepts. The unpredictability and multidisciplinarity of the video game development scenario appear to fit better into small cycles of continuous delivery.

Table 4: Correlations between problems in each process type.

<table>
<thead>
<tr> <th>Correlation</th> <th>Agile</th> <th>Predictive</th> <th>Ad-hoc</th> <th>No-process</th> </tr>
</thead>
<tbody>
<tr> <td>Agile</td> <td>100.00%</td> <td>79.13%</td> <td>60.84%</td> <td>67.95%</td> </tr>
<tr> <td>Predictive</td> <td>79.13%</td> <td>100.00%</td> <td>62.84%</td> <td>39.83%</td> </tr>
<tr> <td>Ad-hoc</td> <td>60.84%</td> <td>62.84%</td> <td>100.00%</td> <td>71.35%</td> </tr>
<tr> <td>No-process</td> <td>67.95%</td> <td>39.83%</td> <td>71.35%</td> <td>100.00%</td> </tr>
</tbody>
</table>

Another surprising result was the developers' concern with the software engineering field. The majority of respondents (80%) considered the discipline very important, while 67% frequently use SE practices. We expected a high number of developers unfamiliar with this area but, considering that more than half of the sample has a Computer Science background, the relation becomes clearer.

Some open questions were also included in the questionnaire. One of them concerned game developers' opinions about the differences between building traditional software and a video game. The large majority of respondents said that there are differences, and only 5 developers said otherwise. Among these answers, interesting viewpoints can be highlighted:

- Traditional software has a linear development, while games are more dynamic;
- Traditional software is meant to last, while video games have a short life;
- Unlike traditional software, which uses a "definition of done", a game "working" is just the beginning of the job;
- Multidisciplinarity is stated as the biggest difference between software and games;
- The game creation process involves more user testing than software;
- Game engines restrict the use of some patterns in favor of better productivity;
- There is higher coupling in the game development pipeline compared to traditional software development;
- In software, it is easier to translate a requirement list into tasks, while in game development the search for the fun factor involves several feature combinations (macro-features).

With respect to Ad-hoc processes, this questionnaire section asked developers to describe this kind of process, its steps and practices. The next list shows the most relevant answers (each item summarizes one ad-hoc process described):

- "Full autonomy during development with an experienced team";
- "Initial definition with artistic freedom plus constant changes";
- "Product objective definition, brainstorming, project, prototyping, validation, project, prototyping, validation, tests and bug corrections, postmortem and maintenance";
- "Design project and feature definition";
- "Four production lines: creation, assets, assembly and test. Each one has a defined process";
- "Fixed quality: zero bugs. Minimal scope stipulated (MVP) but, after this point, flexible and validated by final users. Deadline with 25% flexibility";
- "Milestones delivered to users without defined scope".

Lastly, developers were asked why they do not use a systematic approach to develop their video games. The majority of answers pointed to lack of knowledge or experience, followed by short time and small teams.
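As an aside, the paper does not state which coefficient underlies the correlations in Table 4; a standard choice for comparing the per-process problem-frequency vectors of Table 3 would be Pearson's coefficient:

$$r_{AB} = \frac{\sum_i (a_i - \bar{a})(b_i - \bar{b})}{\sqrt{\sum_i (a_i - \bar{a})^2}\,\sqrt{\sum_i (b_i - \bar{b})^2}}$$

where \(a_i\) and \(b_i\) are the frequencies of problem \(i\) under process types \(A\) and \(B\).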
Although a portion of developers appears to consider SE relevant for game development, the answers to this last question evidence that a good number of projects are still being developed with no systematic approach.

6 THREATS TO VALIDITY

There are some limitations in this work. First, the sample analyzed is small and, for this reason, it is hard to generalize from it. Nevertheless, within a restricted context such as "Brazilian game developers", the results presented here are more reliable and closer to reality. Second, the respondents come from different video game groups, genre expertise, team sizes and project sizes, among many other factors. The only criterion was that the developer must have participated in at least one game project; because of this, the target group may seem somewhat broad. Lastly, although the data passed through a noise-removal step, the answers may contain bias, compromising the statistics.

7 CONCLUSIONS

This work presented a survey about video game developers' experiences with software engineering processes. We sought patterns and correlations in empirical data gathered from an online questionnaire sent to Brazilian video game developers.

In this paper we presented three primary contributions, gathered from developers' descriptions of their previous experiences developing video games. The data shows that, in the Brazilian context, projects that used a systematic approach, regardless of the type, resulted in better products. Although not as pronounced as the literature argues, Delays, Unrealistic scope and Lack of documentation are the most common problems faced by Brazilian game developers. Moreover, a correlation greater than 70% was noted between the problems of Agile and Predictive, and between those of Ad-hoc and No-process.

Considering the lack of specialized literature, the results presented here can be a source of knowledge about video game development and SE process adoption. The next steps of this research are to extend this work by expanding the scope, defining a new set of variables, and making use of interviews and other kinds of empirical methods to learn more about video game development processes.

ACKNOWLEDGMENT

We would like to thank all companies and respondents for their participation. We also thank CAPES and CNPq for the financial support. This project was developed in the context of the Brazilian Army Strategic Project ASTROS 2020.

REFERENCES

Wikipedia. Cowboy coding — Wikipedia, the free encyclopedia, 2016. [Online; accessed 11-July-2016].
{"Source-Url": "http://www.sbgames.org/sbgames2016/downloads/anais/157812.pdf", "len_cl100k_base": 7641, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26907, "total-output-tokens": 10396, "length": "2e12", "weborganizer": {"__label__adult": 0.0010080337524414062, "__label__art_design": 0.0004427433013916016, "__label__crime_law": 0.0007495880126953125, "__label__education_jobs": 0.002216339111328125, "__label__entertainment": 0.00012600421905517578, "__label__fashion_beauty": 0.0003998279571533203, "__label__finance_business": 0.0005321502685546875, "__label__food_dining": 0.0007643699645996094, "__label__games": 0.00833892822265625, "__label__hardware": 0.0008568763732910156, "__label__health": 0.0005660057067871094, "__label__history": 0.0003364086151123047, "__label__home_hobbies": 8.386373519897461e-05, "__label__industrial": 0.0005521774291992188, "__label__literature": 0.0004801750183105469, "__label__politics": 0.000560760498046875, "__label__religion": 0.0008029937744140625, "__label__science_tech": 0.003963470458984375, "__label__social_life": 0.0001195669174194336, "__label__software": 0.0032596588134765625, "__label__software_dev": 0.9716796875, "__label__sports_fitness": 0.0008454322814941406, "__label__transportation": 0.0008502006530761719, "__label__travel": 0.00033926963806152344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40452, 0.05173]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40452, 0.1004]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40452, 0.92279]], "google_gemma-3-12b-it_contains_pii": [[0, 5370, false], [5370, 12218, null], [12218, 19161, null], [19161, 20260, null], [20260, 24521, null], [24521, 30535, null], [30535, 38384, null], [38384, 40452, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5370, true], [5370, 12218, null], [12218, 19161, null], [19161, 20260, null], [20260, 24521, null], [24521, 30535, null], [30535, 38384, null], [38384, 40452, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40452, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40452, null]], "pdf_page_numbers": [[0, 5370, 1], [5370, 12218, 2], [12218, 19161, 3], [19161, 20260, 4], [20260, 24521, 5], [24521, 30535, 6], [30535, 38384, 7], [38384, 40452, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40452, 0.20721]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
25a818ced8a83ca4a8453ae88f4f5fb5352ac7d4
Computational Logic
WWW Programming Using LP/CLP Systems

LP/CLP, the Internet, and the WWW

- Logic and Constraint Logic Programming can be an attractive alternative for Internet/WWW programming.
- Shared with other net programming tools:
  - dynamic memory management,
  - well-behaved structure (and pointer!) manipulation,
  - robustness, compilation to architecture-independent bytecode, ...
- In addition:
  - powerful symbolic processing capabilities,
  - dynamic databases,
  - search facilities,
  - grammars,
  - sophisticated meta-programming / higher order,
  - easy code (agent) motion,
  - well understood semantics, ...

Most public-domain and commercial LP/CLP systems:

- either already have Internet connection capabilities (e.g., socket interfaces),
- or it is relatively easy to add them (e.g., through the C interface)
  (e.g., Quintus, LPA, PDC, Amzi!, IF-Prolog, Eclipse, SICStus, BinProlog, SWI, PrologIV, CHIP, Ciao, etc.)

Some additional "glue" is needed to make things really convenient:

- We present several techniques for "filling in these gaps" (many implemented as public domain libraries).
- Some commercial systems also include packages that provide similar high-level functionality.

In doing this we also work towards answering the question:

- What are useful characteristics of particular LP/CLP systems in this context?

Global Outline

- **PART I**: *WWW programming*
  - Writing cgi-scripts.
  - Seeing HTML structured documents as Herbrand terms.
  - Producing HTML *forms*.
  - Writing form handlers.
  - HTML templates.
  - Accessing and parsing WWW documents.
  - Accessing code posted at HTTP addresses.
  - XML, VRML, etc.
- **PART II**: *Distributed/agent programming*

Writing Basic CGI-bin Applications

1. A standard URL is selected in a browser (client); it is the address of the CGI application, e.g.: http://www.xxx.yyy/cgi_bin/hello_world
2. The browser sends it to the corresponding HTTP server.
3. The executable "hello_world" (in directory cgi_bin) is started by the HTTP server.
4. The executable's output (stdout), which has to be in HTML (or MIME) format, is taken by the HTTP server and passed on to the client browser, which displays it.

An example: UNIX csh

See http://www.clip.dia.fi.upm.es/demo/pillow/hw_csh.cgi

```bash
#!/bin/tcsh
echo "Content-type: text/html"
echo ""
echo "<HTML>"
echo "Hello <B>world</B>."
echo "</HTML>"
echo ""
```

- Similarly with DOS/Windows .bat files, etc.
- The CGI application often has to be:
  - in a special directory (e.g., `/usr/local/etc/httpd/cgi-bin`),
  - or it must have a ".cgi" ending.

Writing Basic CGI-bin Applications in LP/CLP

See http://www.clip.dia.fi.upm.es/demo/pillow/hw_prolog.cgi

- A first approach:

```prolog
main(_) :-
    write('Content-type: text/html'), nl, nl,
    write('<HTML>'),
    write('Hello <B>world</B>.'),
    write('</HTML>').
```

- And the executable can be generated, e.g., by:

```bash
ciaoc -o hw_prolog.cgi hw_prolog
```

Scripting Languages

- "Scripting" languages (perl, csh, ...) are popular for writing CGI apps:
  - CGIs: often non-numerical, small- to medium-sized apps.
  - Strong support for symbol manipulation.
  - No compilation necessary.
  - The network is slow anyway.
  - Small "executable" size (source file!).
- A role for LP/CLP?
  - LP/CLP languages can be great as scripting languages: built-in grammars, databases, interpreter available, fast compilation, ...
  - But some shortcomings: awkward executable creation, large executables, ...
Effective LP/CLP scripts

See http://www.clip.dia.fi.upm.es/demo/pillow/hw_pshell.cgi

- LP/CLP systems can easily be used as scripting languages (e.g., for unix):

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

main(_) :-
    write('Content-type: text/html'), nl, nl,
    write('<HTML>'),
    write('Hello <B>world</B>.'),
    write('</HTML>').
```

where ciao-shell is an executable which:

- skips the first line(s),
- loads (consults or compiles) the rest of the file, and
- starts at main/1.

Effective LP/CLP scripts (Contd.)

- Can easily be made to "cache" compilations using bytecode files.
- Available in the Ciao Prolog distribution.
- Can also be done in several other ways (.sh files, .bat files, etc.).
- The above solution is also available for SICStus from ftp:clip.dia.fi.upm.es
- (Very useful also for writing "filters", e.g., for unix pipes, etc.)

Relating HTML code and Prolog Terms

- HTML is structured: it is possible to reflect this structure as Prolog terms.
- This allows viewing any WWW page as a Herbrand term and manipulating it easily.
- Ideally, provide bidirectional conversion between a string representing the HTML code and its term representation.
- This can be easily done, for example, with DCGs.
- E.g., predicates provided for this purpose in *PiLLoW*:
  - `html2terms(ASCII, Terms)` (and `xml2terms(ASCII, Terms)`)
    Relates a list of HTML terms and a list of ASCII characters (reversible).
  - `output_html(F)`
    Sends to the standard output the text corresponding to the HTML term `F` (calls `html2terms/2` and then makes the necessary calls to `write/1`).

*PiLLoW*: public domain WWW/LP interface library (a Ciao library, but versions are available for several popular LP/CLP systems).

Relating HTML code and Prolog Terms

See http://www.clip.dia.fi.upm.es/demo/pillow/hw_pillow.cgi

- Example:

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

:- include(library(pillow)).

main(_) :-
    T = ['Content-type: text/html',
         html(['Hello ', b(world)])],
    output_html(T).
```

[Figure: the browser sends a standard document request to the HTTP server at www.xxx.yyy (steps 1-2); the server runs hello_world (step 3) and its output is returned to the browser (step 4).]

Relating HTML code and Prolog Terms

- **PiLLoW** general HTML structures (can represent any HTML code):
  - `Name$Atts` (`$/2` is defined as an infix, binary operator):
    `img$[src='images/map.gif',alt='A map',ismap]` ⇒
    `<img src="images/map.gif" alt="A map" ismap>`
  - `name(Text)` (term with functor `name/1`):
    `address('clip@dia.fi.upm.es')` ⇒
    `<address>clip@dia.fi.upm.es</address>`
  - `name(Atts, Text)` (term with functor `name/2`):
    `a([href='http://www.xx.y/'],'XX home')` ⇒
    `<a href="http://www.xx.y/">XX home</a>`
  - `env(Name, Atts, Text)`:
    `env(a,[href='http://www.xx.y/'],'XX home')` ⇒
    `<a href="http://www.xx.y/">XX home</a>`
- Also, specific structures to simplify HTML creation.
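As a quick illustration of this term representation, here is a minimal sketch (the exact whitespace of the generated HTML may vary between PiLLoW versions) of producing HTML text from a term via the reversible `html2terms/2`:

```prolog
:- include(library(pillow)).

% Build the HTML text corresponding to a PiLLoW term and print it.
demo :-
    html2terms(Chars, [env(a, [href='http://www.xx.y/'], ['XX home'])]),
    % Chars now holds the character codes of:
    %   <a href="http://www.xx.y/">XX home</a>
    atom_codes(Text, Chars),
    write(Text), nl.
```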
Forms

- A form is a standard HTML document (supported by all browsers) with:
  - "fields" which can be filled in (each has a name),
  - a "submit" button,
  - the URL of the CGI application that will handle the input (the "handler").

[Figure: the browser sends the form data to the HTTP server, which runs handler.cgi and passes the form reply back to the browser.]

Responding to Input: Forms

- **Operation (when hitting the "submit" button):**
  1. We assume the URL of the handler is http://www.xxx.yyy/handler.cgi.
  2. The handler URL and the form data (input) are passed to the server.
  3. The server starts the handler and passes it the form data (via stdin or a standard file name), associating field names with the entered values.
  4. The handler produces the appropriate reply,
  5. which is passed back to the browser.

Writing Form Handlers in LP/CLP

- Use the same techniques as with standard CGI apps.
- The only complication is parsing the form data (names/values).
- A good solution: implement a parser (easy in LP/CLP) and produce an attribute-value pair list or dictionary.
- This enables the symbolic treatment of form data, hiding the low-level protocol. E.g., predicates provided in PiLLoW:
  - `get_form_input(Dic)`
    Translates the input from the form to a dictionary `Dic` of `attribute=value` pairs, e.g. `[name='Anna', age='23']`. It is implemented using a simple DCG parser (a toy sketch of such a parser appears after the examples below).
  - `get_form_value(Dic, Var, Val)`
    Gets the value `Val` for attribute `Var` in dictionary `Dic`.
  - `form_empty_value(V)`
    Useful to check that a value `V` from a text area is empty.
  - `my_url(URL)`
    Returns the Uniform Resource Locator (WWW address) of the form.
- A browser can be used as a graphical interface!

Writing Form Handlers in LP/CLP: Example

See http://www.clip.dia.fi.upm.es/demo/pillow/simple_form.html

- A simple form:

```html
<html>
<hr>
<h2>Please enter input (person_name):</h2>
<form method="POST"
      action="http://localhost/~clip/demo/pillow/simple_handler.cgi">
  <input type="text" name="person_name" size="40">
  <input type="submit" value="Submit">
</form>
<hr>
</html>
```

Writing Form Handlers in LP/CLP: Example

See http://www.clip.dia.fi.upm.es/demo/pillow/simple_handler.cgi

- A simple form handler (simple_handler.cgi):

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

:- include(library(pillow)).

main(_) :-
    get_form_input(Input),
    get_form_value(Input, person_name, Name),
    Answer = [hr$[],
              h2('You submitted the name: '),
              em(Name),
              hr$[]],
    output_html(['Content-type: text/html', html(Answer)]).
```

Producing Forms from Programs: Example

See http://www.clip.dia.fi.upm.es/demo/pillow/simple_form_pillow.cgi

- The form itself can be the result of running a program:

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

:- include(library(pillow)).

main(_) :-
    Form = [hr$[],
            h2('Please enter input (person_name):'),
            form([method=post,
                  action='http://localhost/~clip/demo/pillow/simple_handler.cgi'],
                 [input$[type=text, name=person_name, size=40],
                  input$[type=submit, value='Submit']]),
            hr$[]],
    output_html(['Content-type: text/html', html(Form)]).
```

Producing Forms from Programs: Example

See: http://www.clip.dia.fi.upm.es/demo/pillow/simple_form_pillow_sugar.cgi

- Or using some minor syntactic sugar (really, deprecated):

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

:- include(library(pillow)).

main(_) :-
    Form = [--,
            h2('Please enter input (person_name):'),
            start_form('http://localhost/~clip/demo/pillow/simple_handler.cgi'),
            input(text, [name=person_name, size=40]),
            input(submit, [value='Submit']),
            end_form,
            --],
    output_html([cgi_reply, html(Form)]).
```
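To make the DCG-based parsing behind `get_form_input/1` concrete, here is a toy sketch of a parser for the raw `name=value&name=value` form encoding (a deliberate simplification: it ignores `%XX` unescaping and `+`-to-space decoding, which a real implementation must handle, and it assumes double-quoted strings denote code lists):

```prolog
% Parse "a=1&b=2" (as a code list) into [a='1', b='2'].
form_pairs([Name=Value|Pairs]) -->
    token(NCs, 0'=), "=", token(VCs, 0'&),
    { atom_codes(Name, NCs), atom_codes(Value, VCs) },
    rest_pairs(Pairs).

rest_pairs(Pairs) --> "&", !, form_pairs(Pairs).
rest_pairs([]) --> [].

% Greedily take characters up to (but not including) the stop character.
token([C|Cs], Stop) --> [C], { C =\= Stop, C =\= 0'& }, !, token(Cs, Stop).
token([], _) --> [].
```

A sample use: `?- phrase(form_pairs(D), "a=1&b=2").` yields `D = [a='1', b='2']`.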
Combining the Form Producer and Handler

See http://www.clip.dia.fi.upm.es/demo/pillow/combined_form.cgi

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

:- include(library(pillow)).

main(_) :-
    get_form_input(Input),
    get_form_value(Input, person_name, Name),
    output_html([cgi_reply,
                 html([--,
                       h2('You submitted the name: '), em(Name), --,
                       h2('Please enter input (person_name):'),
                       start_form,   %%% Refers to self!
                       input(text, [name=person_name, size=40]),
                       input(submit, [value='Submit']),
                       end_form,
                       --])]).
```

A Phones Database

```prolog
response(Name, []) :-
    form_empty_value(Name), !.
response(Name, ['Phone number for ', bf(Name), ' is ', Info, --]) :-
    phone(Name, Info), !.
response(Name, ['No phone number available for ', bf(Name), '.', --]).

%% Database
phone('CLIP',   '336-7448').
phone('Paco',   '554-5225').
phone('Daniel', '460-0569').
```

A Phones Database (Contd.)

```prolog
main(_) :-
    get_form_input(Input),
    get_form_value(Input, person_name, Name),
    response(Name, Response),
    output_html([cgi_reply,
                 begin(html),
                 title('Simple CLIP telephone database'),
                 begin(body, [background='/demo/images/Clip_bg.gif']),
                 center([image('/demo/images/clip.gif'),
                         heading(2, 'Simple CLIP telephone database'),
                         --,
                         Response,
                         start_form,
                         'Click here, enter name of clip member, and press return:', \,
                         input(text, [name=person_name, size=20]),
                         --,
                         end_form,
                         image('/demo/images/pillow_d.gif')]),
                 end(body),
                 end(html)]).
```

HTML/XML Templates

- In the previous examples the layout is hard-coded.
- Sometimes it is desirable to have the layout be an input.
- One solution is to use *templates*:
  - a file with standard HTML code,
  - containing "slots",
  - which are given an identifier by means of a special tag.
- Support predicates in *PiLLoW*:
  - `html_template(Chars, Terms, Dict)`
    - `Chars` is the HTML/XML code with the slots.
    - `Terms` is the PiLLoW term with variables (holes) in place of the slots.
    - `Dict` is a list of name=Variable pairs relating the holes and the slot identifiers.
- The template can be created with a standard WYSIWYG HTML editor.

Example of a template for the phones db

```html
<HTML>
<HEAD>
<TITLE>Simple CLIP telephone database</TITLE>
</HEAD>
<BODY background="/demo/images/Clip_bg.gif">
<CENTER>
<IMG src="/demo/images/clip.gif">
<H2>Simple CLIP telephone database</H2>
<HR>
<V>response</V>
<form method="POST">
Click here, enter name of clip member, and press return:<br>
<input type="text" name="person_name" size="20">
<HR>
</form>
<IMG src="/demo/images/pillow_d.gif">
</CENTER></BODY></HTML>
```

Phones db with template

See http://www.clip.dia.fi.upm.es/demo/pillow/phone_db_template.cgi

```bash
#!/bin/sh
exec /home/clip/bin/ciao-shell $0 $*
# -*- mode: ciao; -*-

:- include(library(pillow)).
:- use_module(library(file_utils), [file_to_string/2]).

main(_) :-
    get_form_input(Input),
    get_form_value(Input, person_name, Name),
    response(Name, Response),
    file_to_string('html_template.html', Contents),
    html_template(Contents, HTML_terms, [response = Response]),
    output_html([cgi_reply|HTML_terms]).

% response/2 and phone/2 as before.
```

Accessing WWW/Internet documents from LP/CLP

- The HTTP, FTP, etc. protocols are ASCII protocols which can be added relatively easily to an LP/CLP system (provided it has a socket interface or equivalent).
- Applications: search tools, content analyzers, reading HTML templates, etc. Also access to remote modules via WWW:

```prolog
:- use_module('http://www.xx.y/prolog/p.pl').
```

- E.g., PiLLoW protocol support:
  - `fetch_url(URL, Request, Response)`
- Some Request options:
  - `head`: only interested in the header.
  - `timeout(Time)`: specifies the number of seconds before timing out (failing).
  - `if_modified_since(Date)`
  - `authorization(Scheme, Param)`

PiLLoW protocol support (Contd.)

- Some possible elements of Response:
  - `content(Content)`: the document, as a list of characters.
  - `status(Type, Code, Phrase)`.
  - `last_modified(Date)`.
  - `expires(Date)`.
  - `location(URL)` (the document has moved).
- `url_info(URL, Info)`: parses a URL.
- Example:

```prolog
?- url_info('http://www.xx/foo.html', UI),
   fetch_url(UI, [], R),
   member(content(C), R),
   html2terms(C, Terms).
```

```prolog
check_links(URL, BadLinks) :-
    url_info(URL, URLInfo),
    fetch_url(URLInfo, [], Response),
    member(content_type(text, html, _), Response),
    member(content(Content), Response),
    html2terms(Content, Terms),
    check_source_links(Terms, URLInfo, [], BadLinks).

check_source_links([], _, BL, BL).
check_source_links([E|Es], BaseUrl, BL0, BL) :-
    check_source_links1(E, BaseUrl, BL0, BL1),
    check_source_links(Es, BaseUrl, BL1, BL).

check_source_links1(env(a, AnchorAtts, _), BaseUrl, BL0, BL) :-
    member((href=URL), AnchorAtts), !,
    check_link(URL, BaseUrl, BL0, BL).
check_source_links1(env(_, _, Env_html), BaseUrl, BL0, BL) :- !,
    check_source_links(Env_html, BaseUrl, BL0, BL).
check_source_links1(_, _, BL, BL).

check_link(URL, BaseUrl, BL0, BL) :-
    url_info_relative(URL, BaseUrl, URLInfo), !,
    fetch_url_status(URLInfo, Status, Phrase),
    ( Status \== success ->
        name(P, Phrase),
        name(U, URL),
        BL = [badlink(U, P)|BL0]
    ; BL = BL0
    ).
check_link(_, _, BL, BL).

fetch_url_status(URL, Status, Phrase) :-
    fetch_url(URL, [head, timeout(20)], Response), !,
    member(status(Status, _, Phrase), Response).
fetch_url_status(_, timeout, timeout).
```

Limitations of CGI

- The cgi-bin interface dictates that the handler of a form starts and terminates for each interaction.
- Thus, form handlers in principle do not have state.
- State can in fact be passed through the form interface (using info in *hidden fields*).
- However, for a large application, starting and stopping can be very inefficient.

Solving the Limitations of CGI

- Standard solution: make the application a permanently running process. A small CGI script (often written in perl) connects to it and disconnects from it for every interaction.

Solving the Limitations of CGI

- **In LP/CLP:**
  - The running process is a standard application.
  - The CGI executable can be an LP/CLP script or a C/perl/... program (e.g., ALS includes PiLLoW + such a solution).
  - Communication is done by standard means: sockets, blackboards, etc.
  - Several solutions have been proposed for dealing with several running sessions at the same time (in essence, concurrency is needed).
  - "**Active modules**" (active objects) can be used well for this purpose.

Active Modules / Active Objects

- Modules to which computational resources are attached.
- A high-level model of client-server interaction.
- An active module is a network-wide server for the predicates it exports.
- Any module or application can be converted into an "active module" (active object) by compiling it in a special way (this creates an executable with a top-level listener).
- Procedures can be imported from remote "active modules" via a simple declaration, e.g.:
  `:- use_active_module(Name, [P1/N1, P2/N2, ...]).`
- Calls to such imported procedures are executed remotely in a transparent way.
- Typical application: client-server. The client imports the module which exports the functionality provided by the server. Access is transparent from then on.
- Can be built as an abstraction on top of ports/sockets (see our free library for Ciao, SICStus and other systems).

Using Active Modules: An Example

- Server code (active module), file database.pl:

```prolog
:- module(database, [stock/2]).

stock(p1, 23).
stock(p2, 45).
stock(p3, 12).
```
- Compilation: `ciaoc -a <address publishing method> database`, or:

```prolog
?- make_actmod('/home/clip/public_html/demo/pillow/database.pl',
               'actmods/filebased_publish').
```

  This produces an executable called database.
- The active module is started as a process, e.g., in unix: `database &`

Using Active Modules: An Example

- **Client (file sales.pl):**

```prolog
:- module(sales, [need_to_order/1], [actmods]).

:- use_active_module(database, [stock/2]).
:- use_module(library('actmods/filebased_locate')).

need_to_order(P) :- stock(P, S), S < 20.
```

- **Usage:**

```prolog
?- use_module(sales).
?- need_to_order(X).
```

Application: Active Modules as Form Servers

[Figure: the browser submits the form data to the HTTP server at www.xxx.yyy, which runs am_inter.cgi; the CGI forwards the request to the active module as a call predicate(Arg1, Arg2, ..., ArgN), and the form reply travels back to the browser.]

Phone DB Using Active Modules: Server

- Server (the active module):

```prolog
:- module(_, [process_form/2], [pillow]).

:- use_module(library(file_utils), [file_to_string/2]).

process_form(Input, Output) :-
    get_form_value(Input, person_name, Name),
    response(Name, Response),
    file_to_string('html_template.html', Contents),
    html_template(Contents, HTML_terms, [response = Response]),
    Output = [cgi_reply|HTML_terms].

response(Name, []) :-
    form_empty_value(Name), !.
response(Name, ['Phone number for ', bf(Name), ' is ', Info, --]) :-
    phone(Name, Info), !.
response(Name, ['No phone number available for ', bf(Name), '.', --]).

%% Database
phone('CLIP',   '336-7448').
phone('Paco',   '554-5225').
phone('Daniel', '460-0569').
```

Phone DB Using Active Modules: Client

See http://www.clip.dia.fi.upm.es/demo/pillow/phone_db_client.cgi

- Client (the .cgi using the active module):

```prolog
:- module(_, [main/1], [actmods, pillow]).

:- use_active_module(phone_db_server, [process_form/2]).
:- use_module(library('actmods/filebased_locate')).

main(_) :-
    get_form_input(Input),
    process_form(Input, Output),
    output_html(Output).
```

Phone DB Using Active Modules: Adding Phones

See http://www.clip.dia.fi.upm.es/demo/pillow/phone_db_client2.cgi

- Server active module (the client is as before):

```prolog
:- module(_, [process_form/2], [pillow]).

:- use_module(library(file_utils), [file_to_string/2]).
:- use_module(library(dynamic)).

process_form(Input, [cgi_reply|HTML_terms]) :-
    ( get_form_value(Input, input_name, IName),
      \+ form_empty_value(IName) ->
        get_form_value(Input, phone_name, PName),
        assert(phone(IName, PName)),
        Response = ['Added ', b(IName), ' / ', b(PName), --]
    ; get_form_value(Input, person_name, Name),
      response(Name, Response)
    ),
    file_to_string('html_template2.html', Contents),
    html_template(Contents, HTML_terms, [response = Response]).

response(Name, []) :-
    form_empty_value(Name), !.
response(Name, ['Phone number for ', b(Name), ' is ', Info, --]) :-
    phone(Name, Info), !.
response(Name, ['No phone number available for ', b(Name), '.', --]).

:- dynamic phone/2.

phone('CLIP',   '336-7448').
phone('Paco',   '554-5225').
phone('Daniel', '460-0569').
```

Phone DB Using Active Modules: Adding Phones w/Persistence

See http://www.clip.dia.fi.upm.es/demo/pillow/phone_db_client2pers.cgi

- Server active module (the client is as before):
```prolog
:- module(_, [process_form/2], [pillow, persdb]).

:- use_module(library(file_utils), [file_to_string/2]).
:- use_module(library(dynamic)).

process_form(Input, [cgi_reply|HTML_terms]) :-
    ( get_form_value(Input, input_name, IName),
      \+ form_empty_value(IName) ->
        get_form_value(Input, phone_name, PName),
        passertz_fact(phone(IName, PName)),
        Response = ['Added ', b(IName), ' / ', b(PName), hr$[]]
    ; get_form_value(Input, person_name, Name),
      response(Name, Response)
    ),
    file_to_string('html_template2.html', Contents),
    html_template(Contents, HTML_terms, [response = Response]).

response(Name, []) :-
    form_empty_value(Name), !.
response(Name, ['Phone number for ', b(Name), ' is ', Info, --]) :-
    phone(Name, Info), !.
response(Name, ['No phone number available for ', b(Name), '.', --]).
```

Phone DB Using Active Modules: Adding Phones w/Pers (Contd.)

```prolog
:- initialization(init_persdb).

:- multifile persistent_dir/2.
:- data persistent_dir/2.

persistent_dir(db, '/').

:- persistent(phone/2, db).

%%% Database
phone('CLIP',   '336-7448').
phone('Paco',   '554-5225').
phone('Daniel', '460-0569').
```

Achieving Client-side ("Java-like") Functionality

- Automatic code downloading (client-side processing):
  - Can easily be done for a particular browser (e.g., as a Netscape "plug-in", or using Mosaic's API, as in "LogicWeb").
  - Can actually be done independently of the browser! (see later)
- Supporting complex user interfaces (beyond forms):
  - Can be done, e.g., using available tcl/tk "plug-ins".
  - Alternative: generate Java code from Prolog / use Java's graphical library.
- Also, use a Prolog-to-Java compiler: Bart Demoen's, Minerva, etc. (execution speed?).

Automatic Code Downloading for Local Execution

- Using only the facilities presented, automatic LP/CLP code downloading for local execution is possible, using generic browsers.
- By simply clicking on a WWW pointer, and transparently to the user, remote code is automatically downloaded and locally queried via forms.
- Prerequisites:
  - The HTTP server on the server machine is configured to give a MIME type of application/x-prolog to the files with WWW-downloadable LP code.
  - The browser is configured to start the helper wpl_handler when receiving data of type application/x-prolog (this application starts the LP engine as an active module).
  - There is a local cgi-bin executable wpl_questioner.cgi (which uses that active module).

Automatic Code Downloading Procedure

1. A click on a link of the query form starts the downloading of the code (alternatively, this can also be done on page load, using the multipart/mixed MIME type).
2. The browser starts a `wpl_handler`, as the document has type `application/x-prolog`.
3. The `wpl_handler` process starts a Prolog engine (configured as an active module) if necessary.
4. The handler asks the active module to read the code (through a `loadcode(File)` call).
5. The active module reads the code and compiles it.
6. `wpl_handler` waits for the active module to complete compilation, then writes "done" to the browser.
7. The browser receives the "done" message.
8. Pressing the "submit" button in the form now makes the browser start a `wpl_questioner` as the form handler.
9. The `wpl_questioner` process translates the form data to a dictionary `FormData`, passing it to the active module through a call `answerform(FormData, FormReply)`.
10. The active module processes the request and returns in `FormReply` a WWW page (an HTML term) containing the answer.
11. The `wpl_questioner` process translates `FormReply` to raw HTML and gives it back to the browser, dying afterwards. Subsequent queries proceed at step 8.
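For illustration, `wpl_questioner` can be written with the same active-module machinery shown for the phone database. The following is a minimal sketch, assuming the engine is published as a hypothetical active module named wpl_engine; `answerform/2` is the call named in the procedure above:

```prolog
:- module(_, [main/1], [actmods, pillow]).

% 'wpl_engine' and the file-based location method are assumptions of
% this sketch, not names fixed by the slides.
:- use_active_module(wpl_engine, [answerform/2]).
:- use_module(library('actmods/filebased_locate')).

main(_) :-
    get_form_input(FormData),         % step 9: form data -> dictionary
    answerform(FormData, FormReply),  % remote call to the active module
    output_html(FormReply).           % step 11: reply back to the browser
```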
Automatic Code Downloading Procedure – Figure

[Figure: the browser downloads www.xxx.yyy/app.wpl into a temporary file (e.g., /tmp/x1349); wpl_handler issues loadcode('/tmp/x1349') to the active module and reports "loaded"; the query form (<form action="http://localhost/wpl_questioner.cgi">) sends the form data to wpl_questioner, which exchanges answerform(FormData, FormReply) with the active module and returns the form reply to the browser.]

Higher-Level Models

Several models can be defined which provide a higher level of abstraction (e.g., higher level than PiLLoW):

- **LogicWeb (Loke and Davison):**
  - HTML pages can include Prolog code.
  - Any WWW page is seen by the Prolog code as a module. The module contains the page's Prolog code plus some relations related to the HTML code.
  - Powerful module management.
  - Interesting applications shown.
- **ALP ProWeb:** provides persistence, also has templates, ...
- **Other higher-level interfaces:**
  - Generation of interfaces from database schemata [A. Porto]/RadioWeb,
  - WebDB: full database system with WWW interface (written in PiLLoW).

Some Other Work on LP/CLP + WWW

- *PiLLoW* includes previous work in *html.pl*, F. Bueno's WWW Chat version, and L. Naish's NU-Prolog forms.
- Also, K. Bowen's port of *html.pl* to ALS Prolog, which provides group processing of forms and an alternative to our use of active modules.
- Szeredi's multiple request handling through or-parallelism.
- ECLiPSe HTTP support library (by replacing the HTTP server).
- Many LP/CLP Internet applications shown in recent workshops (many using PiLLoW).

Some Conclusions / Other Issues

- LP/CLP concepts/technology are well suited for Internet applications – and exciting progress:
  - Many applications already developed (*WebChat*, Rent Advisor, and others in the JICSLP'96 Workshop, many more now...).
  - Commercial systems are already providing interesting high-level functionalities.
- The PiLLoW library has been designed to provide basic help in these tasks. It is available from:
  http://www.clip.dia.fi.upm.es/miscdocs/pillow/pillow.html
- Many pointers can be found in the "CLIP/Compulog-Net LP/CLP and the WWW" pages:
  http://www.clip.dia.fi.upm.es/lpnet/index.html
- Underlying support for concurrency and distribution (e.g., &-Prolog/Ciao, BinProlog/μ²-Prolog, ...) has many advantages: e.g., overlapping requests.

Some Conclusions / Other Issues (Contd.)

- Other interesting Internet/distributed programming issues not covered:
  - VRML interfaces (e.g., ProVrml).
  - Blackboards and shared-variable based communication.
  - Agent programming.
{"Source-Url": "http://www.cliplab.org/logalg/slides/C_pillow.pdf", "len_cl100k_base": 7399, "olmocr-version": "0.1.53", "pdf-total-pages": 51, "total-fallback-pages": 0, "total-input-tokens": 75259, "total-output-tokens": 9907, "length": "2e12", "weborganizer": {"__label__adult": 0.00023055076599121096, "__label__art_design": 0.00023651123046875, "__label__crime_law": 0.0001863241195678711, "__label__education_jobs": 0.0003376007080078125, "__label__entertainment": 4.7147274017333984e-05, "__label__fashion_beauty": 7.408857345581055e-05, "__label__finance_business": 0.00012218952178955078, "__label__food_dining": 0.00018477439880371096, "__label__games": 0.0002601146697998047, "__label__hardware": 0.00045561790466308594, "__label__health": 0.0001590251922607422, "__label__history": 0.00011092424392700197, "__label__home_hobbies": 4.953145980834961e-05, "__label__industrial": 0.0002269744873046875, "__label__literature": 0.0001131296157836914, "__label__politics": 0.0001245737075805664, "__label__religion": 0.0002830028533935547, "__label__science_tech": 0.0035076141357421875, "__label__social_life": 5.513429641723633e-05, "__label__software": 0.00682830810546875, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.00013113021850585938, "__label__transportation": 0.0002491474151611328, "__label__travel": 0.00013184547424316406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28298, 0.01733]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28298, 0.62556]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28298, 0.62889]], "google_gemma-3-12b-it_contains_pii": [[0, 58, false], [58, 637, null], [637, 1363, null], [1363, 1721, null], [1721, 2201, null], [2201, 2652, null], [2652, 3087, null], [3087, 3624, null], [3624, 4207, null], [4207, 4566, null], [4566, 5482, null], [5482, 5989, null], [5989, 6881, null], [6881, 7179, null], [7179, 7611, null], [7611, 8522, null], [8522, 8972, null], [8972, 9561, null], [9561, 10290, null], [10290, 10868, null], [10868, 11466, null], [11466, 11772, null], [11772, 12336, null], [12336, 13032, null], [13032, 13632, null], [13632, 14267, null], [14267, 14945, null], [14945, 15355, null], [15355, 16059, null], [16059, 16509, null], [16509, 16858, null], [16858, 17065, null], [17065, 17549, null], [17549, 18415, null], [18415, 18805, null], [18805, 18872, null], [18872, 19214, null], [19214, 19405, null], [19405, 20162, null], [20162, 20643, null], [20643, 21812, null], [21812, 22808, null], [22808, 23109, null], [23109, 23680, null], [23680, 24423, null], [24423, 25593, null], [25593, 26012, null], [26012, 26689, null], [26689, 27180, null], [27180, 28067, null], [28067, 28298, null]], "google_gemma-3-12b-it_is_public_document": [[0, 58, true], [58, 637, null], [637, 1363, null], [1363, 1721, null], [1721, 2201, null], [2201, 2652, null], [2652, 3087, null], [3087, 3624, null], [3624, 4207, null], [4207, 4566, null], [4566, 5482, null], [5482, 5989, null], [5989, 6881, null], [6881, 7179, null], [7179, 7611, null], [7611, 8522, null], [8522, 8972, null], [8972, 9561, null], [9561, 10290, null], [10290, 10868, null], [10868, 11466, null], [11466, 11772, null], [11772, 12336, null], [12336, 13032, null], [13032, 13632, null], [13632, 14267, null], [14267, 14945, null], [14945, 15355, null], [15355, 16059, null], [16059, 16509, null], [16509, 16858, null], [16858, 17065, null], [17065, 17549, null], [17549, 18415, 
null], [18415, 18805, null], [18805, 18872, null], [18872, 19214, null], [19214, 19405, null], [19405, 20162, null], [20162, 20643, null], [20643, 21812, null], [21812, 22808, null], [22808, 23109, null], [23109, 23680, null], [23680, 24423, null], [24423, 25593, null], [25593, 26012, null], [26012, 26689, null], [26689, 27180, null], [27180, 28067, null], [28067, 28298, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28298, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28298, null]], "pdf_page_numbers": [[0, 58, 1], [58, 637, 2], [637, 1363, 3], [1363, 1721, 4], [1721, 2201, 5], [2201, 2652, 6], [2652, 3087, 7], [3087, 3624, 8], [3624, 4207, 9], [4207, 4566, 10], [4566, 5482, 11], [5482, 5989, 12], [5989, 6881, 13], [6881, 7179, 14], [7179, 7611, 15], [7611, 8522, 16], [8522, 8972, 17], [8972, 9561, 18], [9561, 10290, 19], [10290, 10868, 20], [10868, 11466, 21], [11466, 11772, 22], [11772, 12336, 23], [12336, 13032, 24], [13032, 13632, 25], [13632, 14267, 26], [14267, 14945, 27], [14945, 15355, 28], [15355, 16059, 29], [16059, 16509, 30], [16509, 16858, 31], [16858, 17065, 32], [17065, 17549, 33], [17549, 18415, 34], [18415, 18805, 35], [18805, 18872, 36], [18872, 19214, 37], [19214, 19405, 38], [19405, 20162, 39], [20162, 20643, 40], [20643, 21812, 41], [21812, 22808, 42], [22808, 23109, 43], [23109, 23680, 44], [23680, 24423, 45], [24423, 25593, 46], [25593, 26012, 47], [26012, 26689, 48], [26689, 27180, 49], [27180, 28067, 50], [28067, 28298, 51]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28298, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
0290dd9c80f595b1c88734529be64df07d358784
Weak Conformance of Process Models with respect to Data Objects

Andreas Meyer, Artem Polyvyanyy, and Mathias Weske
Business Process Technology Group, Hasso Plattner Institute at the University of Potsdam
Prof.-Dr.-Helmert-Str. 2-3, D-14482 Potsdam, Germany
{Andreas.Meyer, Artem.Polyvyanyy, Mathias.Weske}@hpi.uni-potsdam.de

Abstract. Process models specify behavioral aspects by describing ordering constraints between tasks which must be accomplished to achieve envisioned goals. Tasks usually exchange information by means of data objects, i.e., by writing information to and reading information from data objects. A data object can be characterized by its states and allowed state transitions. In this paper, we propose a notion for checking conformance of a process model with respect to the data objects that its tasks access. This new notion can be used to tell whether, in every execution of a process model, each time a task needs to access a data object in a particular state, the data object is guaranteed to be in the expected state or to be able to reach the expected state and, hence, whether the process model can achieve its goals.

1 Introduction

Process modeling usually comprises two aspects: the control flow perspective and the data flow perspective [1]. Control flow defines possible execution sequences of tasks, whereas data flow provides means for exchanging information between the tasks. Information gets passed between tasks of a process model by writing to and reading from data objects. A data object can be formalized as a set of data states and transitions between the data states, i.e., as a labeled transition system, which is usually referred to as an object life cycle. An object life cycle can be used to identify the current data state of the data object and the set of data states reachable from the current one [2]. Similarly, the execution semantics of process models is often defined by employing the notion of a process state that defines a set of tasks which can be performed. A process state changes once a task gets accomplished. A process state together with all data states (one for each data object) collectively defines a state of a process instance. It is usually accepted that control flow drives the execution of process models, i.e., a change in the state of a process instance is triggered by a change of the process state, which in turn may activate changes of data states.

In order to achieve safe execution of a process model, it must be ensured that every time a task attempts to access a data object, the data object is in a certain expected data state or is able to reach the expected data state from the current one, i.e., the object life cycles of the data objects must conform to the process model; otherwise, the execution of the process model may deadlock, i.e., terminate prior to reaching the goal state. In this paper, we propose a notion of weak conformance which allows for a precise characterization of the above intuition, where "weak" reflects the fact that data states are only required to be reachable via an arbitrary number of data state transitions and not necessarily via a single one. In a process model which satisfies weak conformance with respect to its data objects, it is assumed that implicit data state transitions get realized by an external entity or by detailed implementations of process model tasks.
The relevance of the new notion stems from the need to check conformance of underspecified process models, where, e.g., external events not captured in the process model change the states of data objects. Events and tasks which are part of the process but not modeled in the process model may also change the states of data objects. Such modeling artifacts are, for instance, hidden in subprocess structures, so that process models and object life cycles are specified at different levels of detail. Practically, process models still conform to their data objects if the hidden state changes do not contradict the data object life cycles.

The remainder of the paper proceeds as follows: The next section describes process scenarios – a formalism which integrates the control flow and data flow aspects of process modeling. In Section 3, we define the notion of weak conformance of the process model from a process scenario with respect to the data objects it operates with. Section 4 is devoted to related work. Finally, Section 5 draws conclusions.

2 Process Scenarios

In this section, we propose process scenarios – a formalism for designing concurrent systems which integrates the control flow and data flow perspectives. A process scenario consists of two parts: (i) a process model which orchestrates the execution of tasks, and (ii) data objects which describe what information tasks require in order to be executed and/or what information tasks produce. We start the discussion with the definition of the first part – a process model.

Definition 1 (Process model). A process model is a tuple $M = (A, G, D, R, C, F, \text{type}, \mathcal{A}, \mu)$, where $A$ is a finite set of tasks, $G$ is a finite set of gateways, $D$ is a finite set of data objects, $R$ is a finite set of data states, $C \subseteq (A \cup G) \times (A \cup G)$ is the control flow relation, $F \subseteq (A \times (D \times R)) \cup ((D \times R) \times A)$ is the data flow relation, $\text{type}: G \rightarrow \{\text{xor, and}\}$ assigns to each gateway a type, $\mathcal{A}$ is a finite set of names such that $\tau \in \mathcal{A}$ ($A$, $G$, $D$, $R$, and $\mathcal{A}$ are pairwise disjoint), and $\mu: A \rightarrow \mathcal{A}$ assigns to each task a name.

We use subscripts, e.g., $A_M$, $G_M$, and $\mu_M$, to denote the relation of the sets and functions to process model $M$, and omit subscripts where the context is clear. We refer to the set $A \cup G$ as the nodes of process model $M$. If $\mu(a) \neq \tau$, $a \in A$, then $a$ is observable in $M$; otherwise $a$ is silent in $M$.

We expect that every process model $M$ fulfills basic structural correctness requirements: (i) every task of $M$ has at most one incoming and at most one outgoing control flow edge, (ii) every gateway has at least three incident control flow edges, and (iii) $M$ has exactly one source task and at least one sink task (the source has exactly one outgoing and no incoming control flow edges, while each sink has exactly one incoming and no outgoing control flow edges).

An observable task is drawn as a rectangle with rounded corners and its name inside. Source and sink tasks are visualized as start and end BPMN events, respectively. Gateways are drawn as diamonds. We call a gateway \( g \in G_M \) of \( M \) an xor (an and) gateway if \( \text{type}_M(g) = \text{xor} \) (\( \text{type}_M(g) = \text{and} \)). An xor (an and) gateway uses a marker shaped like "\( \times \)" ("\( + \)") inside the diamond shape. A data object (in a particular data state) is visualized as a BPMN data object. A data object \( d \in D_M \) can appear multiple times in the visualization of the process model (also when in a particular data state \( r \in R_M \)). Control flow and data flow edges are drawn as solid and dashed directed edges, respectively.
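As a reading aid, Definition 1 transcribes almost mechanically into code. The following minimal Java sketch shows one possible encoding; every class and field name here is ours and purely illustrative (not from the paper), and the later sketches in this paper reuse these types.

```java
import java.util.*;

// One possible (illustrative) encoding of Definition 1:
// M = (A, G, D, R, C, F, type, Names, mu).
final class ProcessModel {
    enum GatewayType { XOR, AND }

    Set<String> tasks = new HashSet<>();        // A
    Set<String> gateways = new HashSet<>();     // G
    Set<String> dataObjects = new HashSet<>();  // D
    Set<String> dataStates = new HashSet<>();   // R

    // C: control flow edges between nodes (tasks and gateways).
    Set<Map.Entry<String, String>> controlFlow = new HashSet<>();

    // F: a data flow edge either reads (object, state) into a task (input edge)
    // or writes (object, state) from a task (output edge).
    record DataFlowEdge(String task, String object, String state, boolean isInput) {}
    Set<DataFlowEdge> dataFlow = new HashSet<>();

    Map<String, GatewayType> type = new HashMap<>(); // type: G -> {xor, and}
    Map<String, String> mu = new HashMap<>();        // mu: A -> names; "tau" marks silent tasks
}
```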
The semantics of a process model is defined as a token game. A marking of a process model is represented by tokens on its control flow edges. Given a process model \(M\), a marking (or a process state) of \(M\) is a mapping \(m : C_M \rightarrow \mathbb{N}_0\) (\(\mathbb{N}_0\) is the set of natural numbers including zero). Fig. 1 shows a process model in its initial process state – the process state which puts one token on the only outgoing control flow edge of the source and no tokens elsewhere.

Every node of a process model (except silent tasks) can be executed. The execution of an observable task removes one token from its only incoming control flow edge and adds one token on its only outgoing control flow edge. The execution of an and gateway removes one token from each of its incoming control flow edges and then adds one token on each of its outgoing control flow edges. The execution of an xor gateway removes one token from one of its incoming control flow edges and afterwards adds one token on one of its outgoing control flow edges. The choice of the incoming edge, as well as of the outgoing edge, is made nondeterministically. Observe that we abstract from the data-based decisions that are usually used to control the semantics of xor gateways.

Let \(m\) and \(m'\) be two markings of \(M\). We write \(m \xrightarrow{x} m'\) to denote that \(m\) changes to \(m'\) by executing node \(x\) of \(M\). If \(\sigma = a_1 a_2 \ldots a_n\), \(n \in \mathbb{N}_0\), is a sequence of nodes of \(M\), then \(m \xrightarrow{\sigma} m'\) denotes the fact that there exists a sequence of process states \(m_1 m_2 \ldots m_{n-1}\) such that \(m \xrightarrow{a_1} m_1 \xrightarrow{a_2} \ldots m_{n-1} \xrightarrow{a_n} m'\). We call \(\sigma\) an execution sequence of \(M\) which starts with \(m\). Let \(a\) and \(a'\) be two nodes of \(M\). With \(a \Rightarrow_M a'\) we denote the predicate which evaluates to true if \(a = a'\) or there exists an execution sequence of \(M\) which starts with the initial marking and executes \(a\) before \(a'\); otherwise \(a \Rightarrow_M a'\) evaluates to false.

[Fig. 2. Object life cycles of (a) "Order" and (b) "Product" data objects]

Next, we proceed with the definition of an object life cycle.

Definition 2 (Object life cycle). An object life cycle is a tuple \(L = (S, \Sigma, \Rightarrow, i)\), where \(S\) is a finite set of data states, \(\Sigma\) is a finite set of actions (\(S\) and \(\Sigma\) are disjoint), \(\Rightarrow \subseteq S \times \Sigma \times S\) is the data state transition relation, and \(i \in S\) is the initial data state.

We use subscripts \(S_L\), \(\Sigma_L\), \(\Rightarrow_L\), and \(i_L\) to denote the relation of the elements to the object life cycle \(L\), and omit subscripts where the context is clear. For \(s, s' \in S\) and \(a \in \Sigma\) we denote by \(s \stackrel{a}{\Rightarrow}_L s'\) the fact that \((s, a, s') \in \Rightarrow\). If \(\sigma = a_1 a_2 \ldots a_n\), \(n \in \mathbb{N}_0\), is a sequence of actions, then \(s \stackrel{\sigma}{\Rightarrow}_L s'\) denotes the fact that there exists a sequence of data states \(s_1 s_2 \ldots s_{n-1}\) such that \(s \stackrel{a_1}{\Rightarrow}_L s_1 \stackrel{a_2}{\Rightarrow}_L \ldots s_{n-1} \stackrel{a_n}{\Rightarrow}_L s'\). We call \(\sigma\) an execution sequence of \(L\) which starts with \(s\), and \(s'\) is a reachable data state from data state \(s\) via \(\sigma\). With \(s \Rightarrow_L s'\) we denote the predicate which evaluates to true if \(s = s'\) or there exists an execution sequence of \(L\) which starts with \(i_L\) and reaches \(s\) before \(s'\); otherwise \(s \Rightarrow_L s'\) evaluates to false.
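Definition 2, under the same illustrative encoding, together with the reachability test that weak conformance will rely on: the breadth-first search below decides whether some (possibly empty) sequence of data state transitions leads from one data state to another, which coincides with the paper's predicate \(s \Rightarrow_L s'\) whenever the first state is itself reachable from the initial state.

```java
import java.util.*;

// One possible (illustrative) encoding of Definition 2: L = (S, Sigma, =>, i).
final class ObjectLifeCycle {
    record Transition(String from, String action, String to) {}

    Set<String> states = new HashSet<>();          // S
    Set<String> actions = new HashSet<>();         // Sigma
    Set<Transition> transitions = new HashSet<>(); // data state transition relation
    String initial;                                // i

    // True iff some (possibly empty) sequence of transitions leads from -> to.
    boolean reachable(String from, String to) {
        Deque<String> queue = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>(List.of(from));
        while (!queue.isEmpty()) {
            String s = queue.poll();
            if (s.equals(to)) return true;
            for (Transition t : transitions)
                if (t.from().equals(s) && seen.add(t.to()))
                    queue.add(t.to());
        }
        return false;
    }
}
```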
Finally, a process scenario is defined as follows.

Definition 3 (Process scenario). A process scenario is a tuple \(H = (M, \mathcal{L}, \omega)\), where \(M\) is a process model, \(\mathcal{L}\) is a finite set of object life cycles, and \(\omega : D_M \rightarrow \mathcal{L}\) assigns to each data object of \(M\) an object life cycle.

Note that we assume that for a process scenario \(H = (M, \mathcal{L}, \omega)\) it holds that \(\omega\) is injective and \(\bigcup_{d \in D_M} S_{\omega(d)} \subseteq R_M\). Fig. 1 and Fig. 2 visualize a process scenario. The process model of the scenario is given in Fig. 1. It contains two data objects: "Order" and "Product". The life cycles of these data objects are shown in Fig. 2(a) and Fig. 2(b), respectively.

3 Weak Conformance

Prior to proceeding with the definition of weak conformance, we define several notions for convenience. Let \(f \in F_M\) be a data flow edge of process model \(M\). With \(f_A\), \(f_D\), and \(f_R\) we denote the task, data object, and data state component of \(f\), respectively. For instance, if \(f\) is equal to \((a, (d, r))\) or to \(((d, r), a)\), then (in both cases) \(f_A = a\), \(f_D = d\), and \(f_R = r\). We call \(f\) an input data flow edge if \(f \in ((D \times R) \times A)\), and an output data flow edge if \(f \in (A \times (D \times R))\).

Definition 4 (Weak data object conformance). Given a process scenario \(H = (M, \mathcal{L}, \omega)\), \(M = (A, G, D, R, C, F, \text{type}, \mathcal{A}, \mu)\), \(M\) satisfies weak conformance with respect to data object \(d \in D\) if for all \(f, f' \in F\) such that \(f_D = d = f'_D\) it holds that \(f_A \Rightarrow_M f'_A\) implies \(f_R \Rightarrow_{\omega(d)} f'_R\), and \(f_A = f'_A\) implies that \(f\) is an input edge and \(f'\) is an output edge.

Given a process scenario, we say that the process model satisfies weak conformance if it satisfies weak conformance with respect to each of its data objects.

Weak data object conformance is satisfied if for each two succeeding data states of a data object there exists an execution sequence from the first to the second data state in the corresponding object life cycle. Two data states are succeeding in the process model if either (i) they are accessed by the same task, with one being part of an input and one being part of an output data flow edge, or (ii) there exists an execution sequence in the process model in which two different tasks access the same data object in the two data states.

The process model in Fig. 1 satisfies weak conformance with respect to data object "Product" and does not satisfy weak conformance with respect to data object "Order". Indeed, there exists an execution sequence which visits task "Analyze order" before task "Send bill", and these tasks access data object "Order" in data states "confirmed" and "accepted", respectively. However, data state "accepted" is not reachable in the object life cycle in Fig. 2(a) via data state "confirmed". One can fix this flaw, for instance, by changing the data state of the only input data flow edge of the "Send bill" task from "accepted" to "confirmed", which modifies the process model so that it satisfies weak conformance.
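Definition 4 then translates almost literally into a check over pairs of data flow edges, on top of the two illustrative classes above. Two points in the sketch are our own assumptions rather than the paper's: we read the quantification as ranging over pairs of distinct edges, and we delegate the predicate \(a \Rightarrow_M a'\) to a caller-supplied weaklyPrecedes, which is not implemented here.

```java
import java.util.function.BiPredicate;

// Illustrative check of Definition 4 for a single data object d.
final class WeakConformance {
    // 'weaklyPrecedes' must decide a =>_M a' on the process model.
    static boolean check(ProcessModel m, String d, ObjectLifeCycle life,
                         BiPredicate<String, String> weaklyPrecedes) {
        for (ProcessModel.DataFlowEdge f : m.dataFlow) {
            if (!f.object().equals(d)) continue;
            for (ProcessModel.DataFlowEdge g : m.dataFlow) {
                if (f == g || !g.object().equals(d)) continue;
                boolean sameTask = f.task().equals(g.task());
                // For one and the same task, only a (read, write) pair constrains d.
                if (sameTask && !(f.isInput() && !g.isInput())) continue;
                // If f's access can precede g's, g's state must be reachable from f's.
                if ((sameTask || weaklyPrecedes.test(f.task(), g.task()))
                        && !life.reachable(f.state(), g.state()))
                    return false;
            }
        }
        return true;
    }
}
```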
Comparing the proposed notion of weak conformance to the one introduced in [4], the given process model would not satisfy conformance with respect to data object "Product". In [4], the authors rely on process models with fully specified data information. For instance, task "Ship products" reads data object "Product" in data state "in stock". However, in [4] it is required that there exists a preceding task which writes "Product" in that state. As such a task does not exist, conformance is not satisfied.

4 Related Work

Process models which follow the imperative design paradigm have been studied extensively [1]. The increasing interest in the development of process models for execution has shifted the focus from the control flow to the data flow perspective. A first step in this direction are artifact-centric processes, introduced in [5]. Artifact-centric processes connect data objects with the control flow of process models by specifying object life cycles which represent data dependencies and, based thereon, the order of task execution. In [6,4], the authors present an approach which connects object life cycles with process models by determining commonalities between both representations and transforming one into the other. In [7], a rule-based approach is described; it allows control flow to be connected with data flow and, thus, data-driven executable process models to be created automatically. In terms of data-driven execution, case handling [8] plays a major role, as in case handling data dependencies alone determine the order of task execution. In this paper, we also concentrate on integrated scenarios which incorporate process models and object life cycles. However, we remove the assumption, followed by all the approaches mentioned above, that both representations must completely correspond to each other. Instead, we set the object life cycles of data objects as references that describe what can be utilized by process models.

Compliance, or correctness, of process models mostly refers to checks of the process model with respect to a defined rule set containing, for instance, business policies. The field of compliance is well researched [9,10,11,12] and has already been tackled for artifact-centric processes, e.g., [13]. A different type of compliance is introduced in [4]. There, compliance between a process model and the object life cycle of one data object used in the process model is defined as the combination of object life cycle conformance (all data state transitions induced in the process model must occur in the object life cycle) and coverage (the opposite containment relation). In this paper, we proposed a similar type of compliance. As we set object life cycles to be the reference, we assume them to be correct and can therefore restrict the compliance check to conformance only. For conformance, instead of working with direct data state transitions, we rely on data state reachability.

5 Conclusion

In this paper, we proposed a notion to check for weak conformance between a process model and the object life cycles of its utilized data objects. A process model satisfies weak conformance if, every time it is allowed to access states of a data object in a specific order (in the process model), these data states can also be reached in the object life cycle of the data object in the very same order. In future work, we plan to propose an algorithm to perform analysis checks based on the notion of weak conformance introduced in this paper. For process models which do not satisfy weak conformance, one can suggest, whenever applicable, changes to the process model so that the resulting model conforms to its data objects. Process model modifications may also be applicable to already conforming process models in order to simplify their structure while preserving the conformance property. Furthermore, in process scenarios with "large" object life cycles, a conforming process model can determine the relevant aspects, so that the object life cycles get tailored towards the specific needs of the process scenarios and, in this way, become better understandable.
{"Source-Url": "https://bpt.hpi.uni-potsdam.de/pub/Public/AndreasMeyer/Weak_Conformance_of_Process_Models_with_respect_to_Data_Objects.pdf", "len_cl100k_base": 4108, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 22750, "total-output-tokens": 5040, "length": "2e12", "weborganizer": {"__label__adult": 0.0003867149353027344, "__label__art_design": 0.0007476806640625, "__label__crime_law": 0.0006632804870605469, "__label__education_jobs": 0.0032176971435546875, "__label__entertainment": 0.00011581182479858398, "__label__fashion_beauty": 0.00021541118621826172, "__label__finance_business": 0.002887725830078125, "__label__food_dining": 0.0005545616149902344, "__label__games": 0.0005822181701660156, "__label__hardware": 0.0007891654968261719, "__label__health": 0.0009660720825195312, "__label__history": 0.00040221214294433594, "__label__home_hobbies": 0.00016498565673828125, "__label__industrial": 0.001026153564453125, "__label__literature": 0.0006875991821289062, "__label__politics": 0.00042891502380371094, "__label__religion": 0.0004367828369140625, "__label__science_tech": 0.192626953125, "__label__social_life": 0.00016367435455322266, "__label__software": 0.02105712890625, "__label__software_dev": 0.7705078125, "__label__sports_fitness": 0.0002722740173339844, "__label__transportation": 0.0008306503295898438, "__label__travel": 0.00021827220916748047}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19501, 0.02607]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19501, 0.62862]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19501, 0.88331]], "google_gemma-3-12b-it_contains_pii": [[0, 2815, false], [2815, 6360, null], [6360, 8949, null], [8949, 11559, null], [11559, 14887, null], [14887, 17948, null], [17948, 19501, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2815, true], [2815, 6360, null], [6360, 8949, null], [8949, 11559, null], [11559, 14887, null], [14887, 17948, null], [17948, 19501, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19501, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19501, null]], "pdf_page_numbers": [[0, 2815, 1], [2815, 6360, 2], [6360, 8949, 3], [8949, 11559, 4], [11559, 14887, 5], [14887, 17948, 6], [17948, 19501, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19501, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
a583adfa23f76d06c0a269b1099b06f183cf3f78
Symmetric Binary B-Trees: Data Structure and Algorithms for Random and Sequential Information Processing

Rudolf Bayer
Computer Sciences, Purdue University, Lafayette, Indiana 47907
CSD TR 54 (Report Number 71-054), November 1971

ABSTRACT

A class of binary trees is described for maintaining ordered sets of data. Random insertions, deletions, and retrievals of keys can be done in time proportional to \( \log N \), where \( N \) is the cardinality of the data set. Symmetric B-trees are a modification of the B-trees described previously by Bayer and McCreight. This class of trees properly contains the balanced trees.

* This work was partially supported by an NSF grant.

This paper will describe a further solution to the following well-known problem in information processing: Organize and maintain an index, i.e. an ordered set of keys or virtual addresses used to access the elements in a set of data, in such a way that random and sequential insertions, deletions, and retrievals can be performed efficiently. Other solutions to this problem have been described for a one-level store in [1], [3], [4], [5] and for a two-level store with a pseudo-random access backup store in [2]. The following technique is suitable for a one-level store.

Readers familiar with [2] and [3] will recognize the technique as a further modification of the B-trees introduced in [2]. In [3], binary B-trees were considered as a special case and a subsequent modification of the B-trees of [2]. Binary B-trees are derived in a straightforward way from B-trees; they do exhibit, however, a surprising asymmetry: the left arcs in a binary B-tree must be δ-arcs (downward), whereas the right arcs can be either δ-arcs or ρ-arcs (horizontal). Removing this somewhat artificial distinction between left and right arcs naturally leads to the symmetric binary B-trees described here. After this brief digression on the relationship of this paper to earlier work, we now proceed with a self-contained presentation of symmetric binary B-trees.

Definition: Symmetric binary B-trees (henceforth simply called B-trees) are directed binary trees with two kinds of arcs (pointers), namely δ-arcs (downward or vertical pointers) and ρ-arcs (horizontal pointers), such that:

i) All leaves are at the same δ-level.
ii) All nodes except those at the lowest δ-level have 2 sons.
iii) Some of the arcs may be ρ-arcs, but there may be no successive ρ-arcs.

In addition, the keys shall be stored at the nodes of a B-tree in such a way that postorder traversal [6] of the tree yields the keys in increasing order, where postorder traversal is defined recursively as follows:

1) If the tree is empty, do nothing.
2) Traverse the left subtree.
3) Visit the root.
4) Traverse the right subtree.

Fig. 1 shows a B-tree. Readers familiar with balanced trees [1], [4], [5] should observe that B-trees are not always balanced trees, as shown by the B-tree in Fig. 1.

Number of Nodes and Height of a B-tree: Let the height h of a B-tree be the maximal number of nodes in any path from the root to a leaf.
Then an example of a B-tree \(T_{\min}(h)\) of even height \(h\) with the smallest number of nodes has the following form (diagram omitted): \(x_0\) and \(x_2\) are the roots of completely balanced binary trees of height \(\frac{h}{2} - 1\), and \(x_4\) is the root of a tree \(T_{\min}(h-2)\); \(T_{\min}(2)\) consists of two nodes connected by a ρ-arc (in either direction).

Let \(N(T)\) be the number of nodes in tree \(T\), and let \(T_{\text{bal}}(k)\) be a completely balanced binary tree of height \(k\). Then we have:

\[ N(T_{\min}(h)) = 2\,N(T_{\text{bal}}(\tfrac{h}{2} - 1)) + 2 + N(T_{\min}(h-2)). \]

Since

\[ N(T_{\text{bal}}(k)) = 2^0 + 2^1 + \cdots + 2^{k-1} = 2^k - 1, \]

we obtain:

\[ N(T_{\min}(h)) = 2\,(2^{h/2 - 1} - 1) + 2 + N(T_{\min}(h-2)) = 2^{h/2} + N(T_{\min}(h-2)) = 2^{h/2} + 2^{h/2 - 1} + \cdots + 2^1 = 2^{h/2 + 1} - 2. \]

For a B-tree of odd height \(h\) we obtain:

\[ N(T_{\min}(h)) = 1 + N(T_{\text{bal}}(\tfrac{h-1}{2})) + N(T_{\min}(h-1)) = 2^{(h-1)/2} + 2^{(h+1)/2} - 2 = 3 \cdot 2^{(h-1)/2} - 2. \]

This bound is better than the bound obtained for even \(h\). Using the weaker bound obtained for even \(h\), if \(N\) is the number of nodes in a B-tree of height \(h\), we obtain as bounds for \(N\):

\[ 2^{h/2 + 1} - 2 \le N(T_{\min}(h)) \le N \le N(T_{\text{bal}}(h)) = 2^h - 1. \]

[Fig. 1: Example of a symmetric binary B-tree]

Taking logarithms we obtain \(\tfrac{h}{2} + 1 \le \log_2(N+2)\) and \(\log_2(N+1) \le h\), and consequently, as sharp bounds for the height \(h\) of a B-tree with \(N\) nodes:

\[ \log_2(N+1) \le h \le 2\log_2(N+2) - 2. \tag{1} \]

B-trees and balanced trees:

**Theorem:** The class of B-trees properly contains the class of balanced trees.

**Proof:** Let the δ-height of a B-tree be defined as the number of "levels" in the B-tree, i.e. as the number of δ-arcs plus one in any path from the root to a leaf. Then a balanced tree of height \(h\) can be transformed into a B-tree of δ-height \(\lceil h/2 \rceil\) by simply labelling the arcs as δ-arcs or ρ-arcs. The proof is by induction on \(h\). For \(h = 1, 2\) the labelling is immediate (diagrams omitted). In general, letting \(A_h\) stand for the root of a balanced tree of height \(h\) and \(B_h\) for the root of the corresponding B-tree, the induction step distinguishes cases according to the heights of the two subtrees, e.g. whether \(\lfloor h/2 \rfloor = \lfloor (h-1)/2 \rfloor\) or \(\lfloor h/2 \rfloor - 1 = \lfloor (h-1)/2 \rfloor\), and similarly for \(\lfloor (h-2)/2 \rfloor\) (labelling diagrams omitted). The proper containment can be seen from the B-tree in Fig. 1, which is not a balanced tree. This completes the proof.

Figures 2 and 3 show a balanced tree and the B-tree obtained by labelling the arcs according to the algorithm implied in the proof.

[Fig. 2: A balanced tree. Fig. 3: The balanced tree of Fig. 2 considered as a B-tree]
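To put bound (1) into numbers (this worked example is ours, not the paper's): for \(N = 10^6\) keys,

\[ \log_2(10^6 + 1) \approx 19.93 \qquad \text{and} \qquad 2\log_2(10^6 + 2) - 2 \approx 37.86, \]

so every symmetric binary B-tree with a million keys has height between 20 and 37, while a completely balanced tree with that many nodes has height exactly 20.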
The upper bound on the height of a B-tree obtained in (1) is approximately \(2\log_2 N\), instead of \(1.5\log_2 N\) for the height of a balanced tree. This means that the upper bound for the retrieval time is better for balanced trees than for B-trees. On the other hand, these same bounds, and the fact that balanced trees are a proper subclass of B-trees, also suggest that less work should be required to update B-trees than to update balanced trees.

**Maintenance Algorithms:** We now consider the algorithms for maintaining B-trees when keys are inserted and deleted randomly. The algorithm to retrieve keys is straightforward and will not be described here.

**Insertion Algorithm:** A new key \(x\) is inserted into the tree by attaching it with a new ρ-arc at the lowest δ-level. \(x\) is attached exactly at that place where the retrieval algorithm for \(x\) tries to proceed along a non-existing arc or "falls out of the tree" (diagrams of the possible cases omitted).

Whenever two successive ρ-arcs arise, the tree must be modified according to one of the following four cases, which are named like the Algol procedures performing those modifications in our implementation: SPLITRR, SPLITRL, SPLITLL, and SPLITLR (transformation diagrams omitted; in each case the middle node of the two successive ρ-arcs is raised by one δ-level and its two arcs become δ-arcs). It is assumed that \(x_i < x_{i+1}\) for all keys. An arc drawn with a rectangular shape can be a ρ-arc or a δ-arc.

In all four cases \(P\) is a left or a right δ-arc (pointer) and must then become a left or right ρ-arc, respectively. This may, of course, give rise to successive ρ-arcs at the next δ-level closer to the root, requiring again one of the modifications just described to remove successive ρ-arcs from the tree. These modifications may recursively propagate along the retrieval path all the way up to the root of the tree. It is crucial to observe, however, that these modifications can propagate only along the retrieval path and will not affect any other parts of the tree. Thus the total work required to modify the tree in order to remove successive ρ-arcs is at worst proportional to the length of the retrieval path for \(x\), i.e. to \(2\log_2(N+2) - 2\).

**Deletion Algorithm:** To delete an element \(x\) from the tree, we first have to locate it and replace it by the next smallest (next largest) key, say \(y\), in the tree. \(y\) is found easily by proceeding from \(x\) one step along the left (right) pointer and then along the right (left) pointers as long as possible. The node originally containing \(y\) is then replaced by a dummy node \(d\), which we will delete from the tree in one of the following ways. Note that at any one time \(d\) has at most one successor. We use \(d\) only as a conceptual device for illustrative purposes. In the implementation the dummy node \(d\) is not physically represented; instead, the pointer \(P\) simply points "through" to the successor of \(d\).
Case A1 (at the lowest δ-level): \(d\) has a ρ-successor \(z\); \(P\) points through to \(z\) (diagram omitted). Terminate.

Case A2 (at the lowest δ-level, if \(d\) is a leaf): set \(P := 0\) and proceed according to one of the following cases (diagram omitted).

Case B1: Continue the recursive deletion (diagram omitted).
Case B2: Apply SPLITLR and terminate.
Case B3: Apply SPLITLL and terminate.

Case C: If necessary apply SPLITLR at \(x_4\) (Case C1), or if necessary apply SPLITLL at \(x_4\) (Case C2), or do nothing further (Case C3). In all three cases terminate.

Case D: This case is left-right symmetric with Case B. Apply SPLITRL at \(x_1\) if possible and terminate (Case D2), or apply SPLITRR at \(x_1\) if possible and terminate (Case D3), or continue the recursive deletion process (Case D1).

Case E: This case is left-right symmetric with Case C. Apply SPLITRL at \(x_2\) if possible (Case E1). Apply SPLITRR at \(x_2\) if possible (Case E2). Do nothing (Case E3). In all three cases terminate.

Case F: Clear the ρ-bit of the arc (diagram omitted). Terminate.
Case G: This case is left-right symmetric with Case F. Terminate.

Note that the recursive deletion process terminates in all cases except A2, B1, and D1. If \(d\) was moved all the way up to the root of the tree, then it is deleted and the successor of \(d\) becomes the new root of the tree. Also, a single retrieval, insertion, or deletion requires inspection and modification of the tree only along a single path from the root to a leaf. As a consequence of this observation and of the bounds for the height of a B-tree obtained in (1), the following main result of this paper is obtained:

**Main Result:** The work that must be performed for random retrievals, insertions, and deletions is, even in the worst case, proportional to the height of the B-tree, i.e. to \(\log_2(N+2)\), where \(N\) is the number of keys in the tree.

**Generalization:** From the insertion and deletion algorithms discussed in this paper it is quite clear that the class of binary B-trees could be enlarged by allowing up to \(n\) successive ρ-pointers for \(n = 2, 3, 4, \ldots\) before requiring any modification or "rebalancing" of the tree. This would require less rebalancing, but performance in time \(\log(N)\) would still be guaranteed.

IMPLEMENTATION OF INSERTION AND DELETION ALGORITHMS FOR B-TREES

For the Algol 60 implementation to be considered here, a node in a B-tree consists of five fields, namely:

LBIT: a Boolean variable indicating whether the left arc is a ρ-arc (true) or a δ-arc (false)
LP: the left downward pointer, an integer
KEY: the key in the node, a real
RP: the right pointer, downward or horizontal, also an integer
RBIT: a Boolean variable indicating whether the right pointer is a ρ-arc (true) or a δ-arc (false)

The absence of a pointer is represented by the value 0. Thus the insertion and deletion procedures have array parameters LBIT, LP, KEY, RP, RBIT to store the nodes of the tree. The parameter x is the key to be inserted into or deleted from the tree to whose root the parameter ROOT is pointing (ROOT = 0 for an empty tree). The Boolean ROOTBIT indicates whether ROOT is a ρ-arc or a δ-arc. There are two procedure parameters to maintain a list of free nodes, namely ADDQ for the deletion procedure to enter a freed node into the free list, and GETQ for the insertion procedure to obtain a free node from the free list.
Both ADDQ and GETQ have one integer parameter pointing to the node added to or obtained from the free list. If the key to be inserted is already in the tree, control is transferred to the label parameter FOUNDX. If the key to be deleted is not in the tree, control is transferred to the label parameter XNOTINTREE by the deletion procedure. The parameter P in SYMINSERT and SYMDELETE is the pointer to the root of the subtree in which the insertion or deletion must be performed. The parameter BIT in SYMINSERT indicates whether P is a ρ-arc or a δ-arc.

The four procedures SPLITRR, SPLITRL, SPLITLL, and SPLITLR modify the B-tree in order to remove successive ρ-pointers. They are used both in the insertion procedure SYMINS and in the deletion procedure SYMDEL. Other local quantities in the procedures are:

**AUXP**: an auxiliary integer variable used as a temporary store for pointers.
**DONE**: a label to which control is transferred after completing an insertion in order to shortcut the full recursion of SYMINSERT.
**AUXX**: an auxiliary integer variable pointing to the key x after it has been found in the tree; AUXX = 0 otherwise.
**QUIT**: a label to which control is transferred after completing the deletion of the dummy node d in order to shortcut the full recursion of SYMDELETE.
**AUXD**: an auxiliary integer variable used as a temporary store for pointers.
**SL**: a label from where deletion of the key from the left (smaller) subtree is continued.
**GL**: a label from where deletion of the key from the right (greater) subtree is continued.

The insertion (deletion) algorithm has been written as two procedures: a non-recursive outer procedure SYMINS (SYMDEL) and a recursive inner procedure SYMINSERT (SYMDELETE). The outer procedure SYMINS (SYMDEL) allows shortcutting the full recursion of SYMINSERT (SYMDELETE) via the label DONE (QUIT). The inner procedure SYMINSERT (SYMDELETE) performs insertions (deletions) in a B-tree recursively. It is assumed that the six procedures SPLITRR, SPLITRL, SPLITLL, SPLITLR, SYMINS, and SYMDEL are all declared in the same block, or in such a way that SPLITRR, SPLITRL, SPLITLL, and SPLITLR can be used both in SYMINS and in SYMDEL.

**Note**: The tree in Fig. 1 is a suitable tree for testing. Inserting the keys in the order 8, 9, 11, 15, 19, 20, 21, 7, 3, 2, 1, 5, 6, 4, 13, 14, 10, 12, 17, 16, 18 will build up the tree. Deleting the keys in the order 1, 6, 2, 21, 16, 20, 8, 14, 11, 9, 5, 10, 12, 13, 3, 4, 7, 15, 17, 18, 19 will exercise all the cases which can arise in any deletion process.
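The retrieval algorithm, which the paper leaves to the reader, ignores the distinction between δ- and ρ-arcs entirely: it is ordinary binary search tree lookup over the five fields just described. The following small transcription (ours, in Java rather than Algol; the Algol listings below use parallel arrays exactly like this, but the class wrapper and names are illustrative) may help when reading those listings.

```java
// Array-based node representation described above; index 0 means "no node".
final class SBBTree {
    boolean[] lbit, rbit; // true: rho-arc (horizontal), false: delta-arc (vertical)
    int[] lp, rp;         // left / right pointers (0 = absent)
    double[] key;         // KEY field ('real' in the Algol code)
    int root;             // 0 for the empty tree

    SBBTree(int capacity) {
        lbit = new boolean[capacity];
        rbit = new boolean[capacity];
        lp = new int[capacity];
        rp = new int[capacity];
        key = new double[capacity];
    }

    // Retrieval never inspects lbit/rbit: plain binary search tree lookup.
    boolean contains(double x) {
        int p = root;
        while (p != 0) {
            if (x == key[p]) return true;
            p = (x < key[p]) ? lp[p] : rp[p];
        }
        return false;
    }
}
```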
```
procedure SPLITRR(P, LP, RP, RBIT);
  integer P; integer array LP, RP; Boolean array RBIT;
begin integer AUXP;
  comment RAISE THE MIDDLE NODE OF TWO SUCCESSIVE RIGHT RHO-ARCS;
  AUXP := RP[P];
  RP[P] := LP[AUXP]; RBIT[P] := false;
  RBIT[AUXP] := false;
  LP[AUXP] := P;
  P := AUXP
end OF SPLITRR;

procedure SPLITRL(P, LP, RP, LBIT, RBIT);
  integer P; integer array LP, RP; Boolean array LBIT, RBIT;
begin integer AUXP;
  comment RIGHT RHO-ARC FOLLOWED BY LEFT RHO-ARC;
  AUXP := LP[RP[P]];
  LP[RP[P]] := RP[AUXP]; LBIT[RP[P]] := false;
  RP[AUXP] := RP[P];
  RP[P] := LP[AUXP]; RBIT[P] := false;
  LP[AUXP] := P;
  P := AUXP
end OF SPLITRL;

procedure SPLITLL(P, LP, RP, LBIT);
  integer P; integer array LP, RP; Boolean array LBIT;
begin integer AUXP;
  comment RAISE THE MIDDLE NODE OF TWO SUCCESSIVE LEFT RHO-ARCS;
  AUXP := LP[P];
  LP[P] := RP[AUXP]; LBIT[P] := false;
  LBIT[AUXP] := false;
  RP[AUXP] := P;
  P := AUXP
end OF SPLITLL;

procedure SPLITLR(P, LP, RP, LBIT, RBIT);
  integer P; integer array LP, RP; Boolean array LBIT, RBIT;
begin integer AUXP;
  comment LEFT RHO-ARC FOLLOWED BY RIGHT RHO-ARC;
  AUXP := RP[LP[P]];
  RP[LP[P]] := LP[AUXP]; RBIT[LP[P]] := false;
  LP[AUXP] := LP[P];
  LP[P] := RP[AUXP]; LBIT[P] := false;
  RP[AUXP] := P;
  P := AUXP
end OF SPLITLR;

procedure SYMINS(X, ROOT, ROOTBIT, FOUNDX, LP, RP, KEY, LBIT, RBIT, GETQ);
  value X; real X; integer ROOT; Boolean ROOTBIT; label FOUNDX;
  integer array LP, RP; array KEY; Boolean array LBIT, RBIT; procedure GETQ;
begin
  procedure SYMINSERT(P, BIT);
    integer P; Boolean BIT;
  if P = 0 then
  begin comment INSERT X AS NEW LEAF;
    GETQ(P); KEY[P] := X;
    LP[P] := 0; RP[P] := 0;
    LBIT[P] := false; RBIT[P] := false;
    BIT := true
  end
  else if X = KEY[P] then goto FOUNDX
  else if X less KEY[P] then
  begin comment INSERT X IN LEFT SUBTREE;
    SYMINSERT(LP[P], LBIT[P]);
    if LBIT[P] then
    begin
      if LBIT[LP[P]] then
      begin SPLITLL(P, LP, RP, LBIT); BIT := true end
      else if RBIT[LP[P]] then
      begin SPLITLR(P, LP, RP, LBIT, RBIT); BIT := true end
      else goto DONE
    end
  end
  else
  begin comment INSERT X IN RIGHT SUBTREE;
    SYMINSERT(RP[P], RBIT[P]);
    if RBIT[P] then
    begin
      if RBIT[RP[P]] then
      begin SPLITRR(P, LP, RP, RBIT); BIT := true end
      else if LBIT[RP[P]] then
      begin SPLITRL(P, LP, RP, LBIT, RBIT); BIT := true end
      else goto DONE
    end
  end OF SYMINSERT;
  SYMINSERT(ROOT, ROOTBIT);
DONE:
end OF SYMINS;

procedure SYMDEL(X, ROOT, XNOTINTREE, LP, RP, KEY, LBIT, RBIT, ADDQ);
  value X; real X; integer ROOT; label XNOTINTREE;
  integer array LP, RP; array KEY; Boolean array LBIT, RBIT; procedure ADDQ;
begin integer AUXX, AUXD;
  comment RECURSIVE B-TREE DELETION ALGORITHM;
  procedure SYMDELETE(P);
    integer P;
  begin
    comment DID WE FIND THE KEY TO BE DELETED;
    if X = KEY[P] then AUXX := P;
    if X notgreater KEY[P] and LP[P] notequal 0 then
SL: begin
      SYMDELETE(LP[P]);
      comment CASES D, E, G;
      if LBIT[P] then
      begin comment CASE G;
        LBIT[P] := false; goto QUIT
      end OF CASE G
      else if RBIT[P] then
      begin comment CASE E;
        AUXD := RP[P]; RP[P] := LP[AUXD];
        LP[AUXD] := P; P := AUXD;
        if LBIT[RP[LP[P]]] then
        begin SPLITRL(LP[P], LP, RP, LBIT, RBIT); LBIT[P] := true end
        else if RBIT[RP[LP[P]]] then
        begin SPLITRR(LP[P], LP, RP, RBIT); LBIT[P] := true end;
        goto QUIT
      end OF CASE E
      else
      begin comment CASE D;
        RBIT[P] := true;
        if LBIT[RP[P]] then
        begin SPLITRL(P, LP, RP, LBIT, RBIT); goto QUIT end
        else if RBIT[RP[P]] then
        begin SPLITRR(P, LP, RP, RBIT); goto QUIT end
      end OF CASE D
    end OF SL AND CASES D, E, G
    else if X notless KEY[P] and RP[P] notequal 0 then
GL: begin
      SYMDELETE(RP[P]);
      comment CASES B, C, F;
      if RBIT[P] then
      begin comment CASE F;
        RBIT[P] := false; goto QUIT
      end OF CASE F
      else if LBIT[P] then
      begin comment CASE C;
        AUXD := LP[P]; LP[P] := RP[AUXD];
        RP[AUXD] := P; P := AUXD;
        if RBIT[LP[RP[P]]] then
        begin SPLITLR(RP[P], LP, RP, LBIT, RBIT); RBIT[P] := true end
        else if LBIT[LP[RP[P]]] then
        begin SPLITLL(RP[P], LP, RP, LBIT); RBIT[P] := true end;
        goto QUIT
      end OF CASE C
      else
      begin comment CASE B;
        LBIT[P] := true;
        if RBIT[LP[P]] then
        begin SPLITLR(P, LP, RP, LBIT, RBIT); goto QUIT end
        else if LBIT[LP[P]] then
        begin SPLITLL(P, LP, RP, LBIT); goto QUIT end
      end OF CASE B
    end OF GL AND CASES B, C, F
    else
    begin comment ARRIVED AT LEAF OR NEXT TO ONE, CASE A;
      if AUXX = 0 then goto XNOTINTREE;
      KEY[AUXX] := KEY[P];
      AUXD := if LBIT[P] then LP[P] else RP[P];
      ADDQ(P); P := AUXD;
      if P notequal 0 then goto QUIT
    end OF CASE A
  end OF SYMDELETE;
  AUXX := 0;
  if ROOT = 0 then goto XNOTINTREE else SYMDELETE(ROOT);
QUIT:
end OF SYMDEL;
```
{"Source-Url": "http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1457&context=cstech", "len_cl100k_base": 6468, "olmocr-version": "0.1.48", "pdf-total-pages": 25, "total-fallback-pages": 0, "total-input-tokens": 134480, "total-output-tokens": 8076, "length": "2e12", "weborganizer": {"__label__adult": 0.00035834312438964844, "__label__art_design": 0.0004017353057861328, "__label__crime_law": 0.0004050731658935547, "__label__education_jobs": 0.0010585784912109375, "__label__entertainment": 0.00010889768600463869, "__label__fashion_beauty": 0.00017249584197998047, "__label__finance_business": 0.0004148483276367187, "__label__food_dining": 0.0004744529724121094, "__label__games": 0.0006809234619140625, "__label__hardware": 0.0021457672119140625, "__label__health": 0.000835418701171875, "__label__history": 0.00045180320739746094, "__label__home_hobbies": 0.00016415119171142578, "__label__industrial": 0.000782012939453125, "__label__literature": 0.0003769397735595703, "__label__politics": 0.00028324127197265625, "__label__religion": 0.0006561279296875, "__label__science_tech": 0.2171630859375, "__label__social_life": 0.00011551380157470704, "__label__software": 0.01076507568359375, "__label__software_dev": 0.7607421875, "__label__sports_fitness": 0.0003180503845214844, "__label__transportation": 0.0007767677307128906, "__label__travel": 0.00024259090423583984}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21907, 0.02165]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21907, 0.43477]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21907, 0.77826]], "google_gemma-3-12b-it_contains_pii": [[0, 148, false], [148, 796, null], [796, 2270, null], [2270, 3579, null], [3579, 4811, null], [4811, 4856, null], [4856, 5778, null], [5778, 6517, null], [6517, 6765, null], [6765, 6789, null], [6789, 6823, null], [6823, 6848, null], [6848, 7922, null], [7922, 8692, null], [8692, 10283, null], [10283, 10931, null], [10931, 11507, null], [11507, 12661, null], [12661, 14555, null], [14555, 16359, null], [16359, 17416, null], [17416, 18536, null], [18536, 19938, null], [19938, 20945, null], [20945, 21907, null]], "google_gemma-3-12b-it_is_public_document": [[0, 148, true], [148, 796, null], [796, 2270, null], [2270, 3579, null], [3579, 4811, null], [4811, 4856, null], [4856, 5778, null], [5778, 6517, null], [6517, 6765, null], [6765, 6789, null], [6789, 6823, null], [6823, 6848, null], [6848, 7922, null], [7922, 8692, null], [8692, 10283, null], [10283, 10931, null], [10931, 11507, null], [11507, 12661, null], [12661, 14555, null], [14555, 16359, null], [16359, 17416, null], [17416, 18536, null], [18536, 19938, null], [19938, 20945, null], [20945, 21907, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21907, null]], 
"google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21907, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21907, null]], "pdf_page_numbers": [[0, 148, 1], [148, 796, 2], [796, 2270, 3], [2270, 3579, 4], [3579, 4811, 5], [4811, 4856, 6], [4856, 5778, 7], [5778, 6517, 8], [6517, 6765, 9], [6765, 6789, 10], [6789, 6823, 11], [6823, 6848, 12], [6848, 7922, 13], [7922, 8692, 14], [8692, 10283, 15], [10283, 10931, 16], [10931, 11507, 17], [11507, 12661, 18], [12661, 14555, 19], [14555, 16359, 20], [16359, 17416, 21], [17416, 18536, 22], [18536, 19938, 23], [19938, 20945, 24], [20945, 21907, 25]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21907, 0.00857]]}
olmocr_science_pdfs
2024-11-23
2024-11-23
6009073bcc5d42251c29e6ae78a951e3f73d4aea
“We kept thinking and brainstorming with our TAs, and then boom, a great 15-418 project idea just popped up out of nowhere.” - Romy Madley Croft

Raising the level of abstraction for synchronization

- **Machine-level atomic operations:**
  - Fetch-and-op, test-and-set, compare-and-swap, load-linked/store-conditional
- **We used these atomic operations to construct higher-level synchronization primitives in software:**
  - Locks, barriers
- We've seen how challenging it can be to produce correct programs using these primitives (it is easy to create bugs that violate atomicity, create deadlock, etc.)
- **Today: raising the level of abstraction for synchronization even further:**
  - Idea: transactional memory

What you should know

- What a transaction is
- The difference (in semantics) between an atomic code block and lock/unlock primitives
- The basic design space of transactional memory implementations
  - Data versioning policy
  - Conflict detection policy
  - Granularity of detection
- The basics of a hardware implementation of transactional memory (consider how it relates to the cache coherence protocol implementations we've discussed previously in the course)

Review: ensuring atomicity via locks

```
void deposit(Acct account, int amount)
{
    lock(account.lock);
    int tmp = bank.get(account);
    tmp += amount;
    bank.put(account, tmp);
    unlock(account.lock);
}
```

- Deposit is a read-modify-write operation: we want "deposit" to be atomic with respect to other bank operations on this account
- Locks are one mechanism to synchronize threads to ensure atomicity of the update (by ensuring mutual exclusion on the account)

Programming with transactions

Lock-based version:

```
void deposit(Acct account, int amount)
{
    lock(account.lock);
    int tmp = bank.get(account);
    tmp += amount;
    bank.put(account, tmp);
    unlock(account.lock);
}
```

Transactional version:

```
void deposit(Acct account, int amount)
{
    atomic {
        int tmp = bank.get(account);
        tmp += amount;
        bank.put(account, tmp);
    }
}
```

- **The atomic construct is declarative**
  - The programmer states what to do (maintain atomicity of this code), not how to do it
  - No explicit creation or management of locks
- **The system implements synchronization as necessary to ensure atomicity**
  - The implementation discussed today uses optimistic concurrency: serialization occurs only in situations of true contention (R-W or W-W conflicts)

Declarative vs. imperative abstractions

- **Declarative:** the programmer defines what should be done
  - Execute all these independent 1000 tasks
  - Perform this set of operations atomically
- **Imperative:** the programmer states how it should be done
Transactional Memory (TM)

- **Memory transaction**
  - An atomic and isolated sequence of memory accesses
  - Inspired by database transactions
- **Atomicity (all or nothing)**
  - Upon transaction commit, all memory writes in the transaction take effect at once
  - On transaction abort, none of the writes appear to take effect (as if the transaction never happened)
- **Isolation**
  - No other processor can observe writes before commit
- **Serializability**
  - Transactions appear to commit in a single serial order
  - But the exact order of commits is not guaranteed by the semantics of transactions

Motivating transactional memory

Another example: Java HashMap

Map: Key → Value

- Implemented as a hash table with a linked list per bucket

```java
class HashEntry {
    Object key;
    Object value;
    HashEntry next;   // next entry in this bucket's chain
}

public class HashMap {
    private int size;
    private int bucketCount;
    private HashEntry[] buckets;

    public Object get(Object key) {
        int idx = hash(key);          // compute hash
        HashEntry e = buckets[idx];   // find bucket
        while (e != null) {           // find element in bucket
            if (equals(key, e.key))
                return e.value;
            e = e.next;
        }
        return null;
    }
}
```

Bad: not thread-safe (when synchronization is needed)
Good: no lock overhead when synchronization is not needed

Synchronized HashMap

- Java 1.4 solution: synchronized layer
  - Convert any map to a thread-safe variant
  - Uses explicit, coarse-grained locking specified by the programmer

```java
public Object get(Object key) {
    synchronized (myHashMap) {   // guards all accesses to myHashMap
        return myHashMap.get(key);
    }
}
```

- Coarse-grain synchronized HashMap
  - Good: thread-safe, easy to program
  - Bad: limits concurrency, poor scalability

Review from earlier fine-grained sync lecture

What are better solutions for making the hashmap object thread-safe?

```java
public Object get(Object key) {
    int idx = hash(key);          // compute hash
    HashEntry e = buckets[idx];   // find bucket
    while (e != null) {           // find element in bucket
        if (equals(key, e.key))
            return e.value;
        e = e.next;
    }
    return null;
}
```

- Use finer-grained synchronization: e.g., a lock per bucket (sketched below)
- Now thread-safe: but incurs lock overhead even if synchronization is not needed
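One way to realize the lock-per-bucket idea (a minimal sketch, not from the slides; it assumes a `locks[]` array of lock objects allocated alongside `buckets[]`, one per bucket):

```java
public Object get(Object key) {
    int idx = hash(key);
    synchronized (locks[idx]) {       // lock only this bucket's chain
        HashEntry e = buckets[idx];
        while (e != null) {
            if (equals(key, e.key))
                return e.value;
            e = e.next;
        }
        return null;
    }
}
```

Threads touching different buckets now proceed concurrently, but every access still pays the cost of a lock acquisition even when no other thread is using the map.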
Review: performance of fine-grained locking

[Graphs: execution time vs. number of processors for the hash-table and balanced-tree workloads, comparing coarse locks and fine locks; reduced contention leads to better performance]

Transactional HashMap

- Simply enclose all operations in an atomic block
- Semantics of the atomic block: the system ensures atomicity of the logic within the block

```java
public Object get(Object key) {
    atomic {   // System guarantees atomicity
        return m.get(key);
    }
}
```

- Transactional HashMap
  - Good: thread-safe, easy to program
  - What about performance and scalability?
  - Depends on the workload and implementation (to be discussed)

Another example: tree update by two threads

Goal: modify nodes 3 and 4 in a thread-safe way

Slide credit: Austen McDonald

Fine-grained locking example: hand-over-hand locking

Goal: modify nodes 3 and 4 in a thread-safe way, acquiring and releasing locks hand-over-hand down the tree

Locking can prevent concurrency (here: locks on nodes 1 and 2 during the update to node 3 could delay the update to node 4)

Transactions example

Figure highlights data touched as part of each transaction:

- Transaction A: READ: 1, 2, 3; WRITE: 3
- Transaction B: READ: 1, 2, 4; WRITE: 4

NO READ-WRITE or WRITE-WRITE conflicts! (no transaction writes to data that is accessed by other transactions)

Slide credit: Austen McDonald

Transactions example #2 (both transactions modify node 3)

- Transaction A: READ: 1, 2, 3; WRITE: 3
- Transaction B: READ: 1, 2, 3; WRITE: 3

Conflicts exist: transactions must be serialized (both transactions write to node 3)

Performance: locks vs. transactions

“TCC” is a HW-based TM system

[Graphs showing execution time for different synchronization schemes (coarse locks, fine locks, and TCC) across varying numbers of processors for the HashMap and balanced-tree data structures.]

Failure atomicity: locks

```java
void transfer(A, B, amount) {
    synchronized(bank) {
        try {
            withdraw(A, amount);
            deposit(B, amount);
        }
        catch(exception1) { /* undo code 1 */ }
        catch(exception2) { /* undo code 2 */ }
        ...
    }
}
```

- Complexity of manually catching exceptions
  - Programmer provides “undo” code on a case-by-case basis
  - Complexity: must track what to undo and how...
- Some side effects may become visible to other threads
  - E.g., an uncaught case can deadlock the system...
Failure atomicity: transactions

```c
void transfer(A, B, amount) {
    atomic {
        withdraw(A, amount);
        deposit(B, amount);
    }
}
```

- System is now responsible for processing exceptions
  - All exceptions but those explicitly managed by the programmer
  - Transaction is aborted and updates are undone
  - No partial updates are visible to other threads
  - E.g., no locks held by a failing thread…

Composability: locks

```java
void transfer(A, B, amount) {
    synchronized(A) {
        synchronized(B) {
            withdraw(A, amount);
            deposit(B, amount);
        }
    }
}
```

Composing lock-based code can be tricky

- Requires system-wide policies to get correct
- Breaks software modularity

Programmer caught between an extra lock and a hard place

- Coarse-grain locks: low performance
- Fine-grain locking: good for performance, but can lead to deadlock

Thread 0: transfer(x, y, 100)
Thread 1: transfer(y, x, 100)

The two functions below acquire the same pair of locks in opposite orders, so concurrent calls can deadlock:

```java
class Example {
    void transfer(A, B, amount) {
        synchronized(A) {
            synchronized(B) {
                withdraw(A, amount);
                deposit(B, amount);
            }
        }
    }
    void transfer2(A, B, amount) {
        synchronized(B) {
            synchronized(A) {
                withdraw(A, 2*amount);
                deposit(B, 2*amount);
            }
        }
    }
}
```

```java
void transfer(A, B, amount) {
    atomic {
        withdraw(A, amount);
        deposit(B, amount);
    }
}
```

Transactions compose gracefully

- Programmer declares global intent (atomic execution of transfer)
  - No need to know about the global implementation strategy
- Transaction in transfer subsumes any defined in withdraw and deposit
  - Outermost transaction defines the atomicity boundary

System manages concurrency as well as possible serialization

- Serialization for transfer(A, B, 100) and transfer(B, A, 200)
- Concurrency for transfer(A, B, 100) and transfer(C, D, 200)

Advantages (promise) of transactional memory

- Easy-to-use synchronization construct
  - As easy to use as coarse-grain locks
  - Programmer declares the need for atomicity, system implements it
- Often performs as well as fine-grained locks
  - Automatic read-read concurrency and fine-grained concurrency
- Failure atomicity and recovery
  - No lost locks when a thread fails
  - Failure recovery = transaction abort + restart
- Composability
  - Safe and scalable composition of software modules

Example integration with OpenMP

- **Example: OpenTM = OpenMP + TM**
  - OpenMP: master-slave parallel model
    - Easy to specify parallel loops and tasks
  - TM: atomic and isolated execution
    - Easy to specify synchronization and speculation
- **OpenTM features**
  - Transactions, transactional loops, and transactional sections
  - Data directives for TM (e.g., thread-private data)
  - Runtime system hints for TM
- **Code example:**

```c
#pragma omp target schedule (static, chunk=50)
for (int i=0; i<N; i++) {
    bin[A[i]]++;
}
```

Atomic `{ }` ≠ lock() + unlock()

- The difference
  - Atomic: high-level declaration of atomicity
    - Does not specify implementation/blocking behavior
  - Lock: low-level blocking primitive
    - Does not provide atomicity or isolation on its own
- Keep in mind
  - Locks can be used to implement an atomic block but...
  - Locks can be used for purposes beyond atomicity
  - Cannot replace all uses of locks with atomic regions
  - Atomic eliminates many data races, but programming with atomic blocks can still suffer from atomicity violations: e.g., the programmer erroneously splits a sequence that should be atomic into two atomic blocks

Make sure you understand this difference in semantics!

What is the problem with replacing synchronized with atomic in this example?

```java
// Thread 1
synchronized(lock1) {
    ...
    flagA = true;
    while (flagB == 0);
    ...
}

// Thread 2
synchronized(lock2) {
    ...
    flagB = true;
    while (flagA == 0);
    ...
}
```

Example: atomicity violation due to programmer error

- Programmer mistake: a logically atomic code sequence (in thread 1) is erroneously separated into two atomic blocks (allowing another thread to set a pointer to NULL in between)

**Transactional memory: summary + benefits**

- **TM = declarative synchronization**
  - User specifies the requirement (atomicity and isolation)
  - System implements the semantics in the best possible way
- **Motivation for TM**
  - Difficult for programmers to get explicit synchronization right
    - Correctness vs. performance vs. complexity
  - Explicit synchronization is difficult to scale
    - A locking scheme for four CPUs is often not the best scheme for 64 CPUs
  - Explicit synchronization can break composability of software
    - Need a globally adhered-to locking policy
  - Other advantages: fault atomicity, ...
- **Productivity argument for transactional memory:**
  - System support for transactions can achieve 90% of the benefit of programming with fine-grained locks, with 10% of the development time

Implementing transactional memory

Recall: transactional memory

- Atomicity (all or nothing)
  - At commit, all memory writes take effect at once
  - In the event of abort, none of the writes appear to take effect
- Isolation
  - No other code can observe writes before commit
- Serializability
  - Transactions seem to commit in a single serial order
  - The exact order is not guaranteed though

TM implementation basics

- TM systems must provide atomicity and isolation
  - Without sacrificing concurrency
- Basic implementation requirements
  - Data versioning (ALLOWS abort)
  - Conflict detection and resolution (WHEN to abort)
- Implementation options
  - Hardware transactional memory (HTM)
  - Software transactional memory (STM)
  - Hybrid transactional memory
    - e.g., hardware-accelerated STMs

Data versioning

Manage uncommitted (new) and previously committed (old) versions of data for concurrent transactions:

1. Eager versioning (undo-log based)
2. Lazy versioning (write-buffer based)
Eager versioning

Update memory immediately, maintain an “undo log” in case of abort:

- Begin transaction: memory X: 10; undo log empty
- Write X ← 15: memory X: 15; undo log records X: 10
- Commit transaction: memory X: 15; undo log discarded
- Abort transaction: memory restored to X: 10 from the undo log

Lazy versioning

Log memory updates in a transaction write buffer, flush the buffer on commit:

- Begin transaction: memory X: 10; write buffer empty
- Write X ← 15: memory X: 10; write buffer holds X: 15
- Commit transaction: write buffer flushed; memory X: 15
- Abort transaction: write buffer discarded; memory still X: 10

Data versioning

- Manage uncommitted (new) and committed (old) versions of data for concurrent transactions
- **Eager versioning (undo-log based)**
  - Update the memory location directly on write
  - Maintain undo information in a log (incurs per-store overhead)
  - Good: faster commit (data is already in memory)
  - Bad: slower aborts, fault tolerance issues (consider: crash in the middle of a transaction)
- **Lazy versioning (write-buffer based)**
  - Buffer data in a write buffer until commit
  - Update the actual memory location on commit
  - Good: faster abort (just clear the log), no fault tolerance issues
  - Bad: slower commits

Eager versioning philosophy: “write to memory immediately, hoping the transaction won’t abort” (but deal with aborts when you have to)

Lazy versioning philosophy: “only write to memory when you have to.”
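To make the two policies concrete, here is a minimal sketch (not from the slides) of what a single transactional store does under each policy; `UndoLog`, `WriteBuffer`, and their helper functions are hypothetical:

```c
#include <stdbool.h>

/* Hypothetical helper structures and functions (not a real library). */
typedef struct UndoLog UndoLog;
typedef struct WriteBuffer WriteBuffer;
void undo_log_append(UndoLog *log, int *addr, int old_val);
void write_buffer_put(WriteBuffer *wb, const int *addr, int new_val);
bool write_buffer_lookup(const WriteBuffer *wb, const int *addr, int *out);

/* Eager versioning: update memory now, remember the old value so an
   abort can roll it back. Commit discards the log; abort replays it
   in reverse to restore memory. */
void tx_store_eager(int *addr, int new_val, UndoLog *log) {
    undo_log_append(log, addr, *addr);   /* per-store logging overhead */
    *addr = new_val;                     /* memory holds uncommitted data */
}

/* Lazy versioning: buffer the update; memory is untouched until commit.
   Commit flushes the buffer; abort simply discards it. */
void tx_store_lazy(int *addr, int new_val, WriteBuffer *wb) {
    write_buffer_put(wb, addr, new_val);
}

/* A transactional load under lazy versioning must first check the
   write buffer so the transaction sees its own earlier stores. */
int tx_load_lazy(const int *addr, const WriteBuffer *wb) {
    int val;
    if (write_buffer_lookup(wb, addr, &val))
        return val;
    return *addr;
}
```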
Conflict detection

- **Must detect and handle conflicts between transactions**
  - Read-write conflict: transaction A reads address X, which was written to by pending transaction B
  - Write-write conflict: transactions A and B are both pending, and both write to address X
- **System must track each transaction’s read set and write set**
  - Read set: addresses read within the transaction
  - Write set: addresses written within the transaction

Pessimistic detection

- Check for conflicts during loads or stores
  - A HW implementation will check for conflicts through coherence actions (discussed further later)
- Philosophy: “I suspect conflicts might happen, so let’s always check to see if one has occurred after each memory operation... if I’m going to have to roll back, let’s do it now to avoid wasted work.”
- A “contention manager” decides whether to stall or abort a transaction when a conflict is detected
  - Various priority policies to handle the common case fast

Pessimistic detection examples

(Note: diagrams assume an “aggressive” contention manager on writes: writer wins)

- Case 1 (success): rd A, wr B, wr C each trigger a check; no conflict found; commit
- Case 2 (early detect, then stall): one transaction’s wr A is detected when the other checks its rd A; the reader stalls until the writer commits
- Case 3 (abort): conflicting rd A / wr A checks force a restart; the restarted transaction re-executes rd A and eventually commits
- Case 4 (no progress): two transactions repeatedly write A, each check aborting the other: restart, restart, restart...

Optimistic detection

Detect conflicts when a transaction attempts to commit

- HW: validate the write set using coherence actions
  - Get exclusive access for cache lines in the write set
- Intuition: “Let’s hope for the best and sort out all the conflicts only when the transaction tries to commit.”

On a conflict, give priority to the committing transaction

- Other transactions may abort later on
- On conflicts between committing transactions, use a contention manager to decide priority

Note: can use optimistic and pessimistic schemes together

- Several STM systems use optimistic for reads and pessimistic for writes

Optimistic detection examples

- Case 1 (success): T0: rd A, wr B, wr C; T1 commits; no overlap
- Case 2 (abort): T0: rd A, wr A; T1 commits a conflicting write; T0 aborts
- Case 3 (success): T0: rd A; T1: wr A, commit
- Case 4 (forward progress): T0: rd A; T1: wr A, commit; one of the conflicting transactions always commits

Conflict detection trade-offs

- **Pessimistic conflict detection (a.k.a. “eager”)**
  - Good: detects conflicts early (undo less work, turn some aborts into stalls)
  - Bad: no forward progress guarantees, more aborts in some cases
  - Bad: fine-grained communication (check on each load/store)
  - Bad: detection on the critical path
- **Optimistic conflict detection (a.k.a. “commit” or “lazy”)**
  - Good: forward progress guarantees
  - Good: potentially fewer conflicts, bulk communication
  - Bad: detects conflicts late, can still have fairness problems
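As a concrete illustration of the read-set/write-set tracking described above (a minimal sketch, not from the slides; `AddrSet` and its helper are hypothetical):

```c
#include <stdbool.h>

/* Hypothetical set of memory addresses touched by a transaction. */
typedef struct AddrSet AddrSet;
bool addr_set_intersects(const AddrSet *a, const AddrSet *b);

typedef struct {
    AddrSet *read_set;    /* addresses read within the transaction */
    AddrSet *write_set;   /* addresses written within the transaction */
} Txn;

/* Two pending transactions conflict if one has written an address the
   other has read (R-W) or written (W-W). */
bool txns_conflict(const Txn *a, const Txn *b) {
    return addr_set_intersects(a->read_set,  b->write_set) ||   /* R-W */
           addr_set_intersects(a->write_set, b->read_set)  ||   /* W-R */
           addr_set_intersects(a->write_set, b->write_set);     /* W-W */
}
```

A pessimistic system effectively runs such a check on every load or store; an optimistic one runs it once at commit, against the write set being committed.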
Conflict detection granularity

- Object granularity (SW-based techniques)
  - Good: reduced overhead (time/space)
  - Good: close to the programmer’s reasoning
  - Bad: false sharing on large objects (e.g., arrays)
- Machine-word granularity
  - Good: minimizes false sharing
  - Bad: increased overhead (time/space)
- Cache-line granularity
  - Good: compromise between object and word
- Can mix and match to get the best of both worlds
  - Word-level for arrays, object-level for other data, …

TM implementation space (examples)

- **Hardware TM systems**
  - Lazy + optimistic: Stanford TCC
  - Lazy + pessimistic: MIT LTM, Intel VTM
  - Eager + pessimistic: Wisconsin LogTM
  - Eager + optimistic: not practical
- **Software TM systems**
  - Lazy + optimistic (rd/wr): Sun TL2
  - Lazy + optimistic (rd)/pessimistic (wr): MS OSTM
  - Eager + optimistic (rd)/pessimistic (wr): Intel STM
  - Eager + pessimistic (rd/wr): Intel STM
- **Optimal design remains an open question**
  - May be different for HW, SW, and hybrid

Hardware transactional memory (HTM)

- Data versioning is implemented in caches
  - Cache the write buffer or the undo log
  - Add new cache line metadata to track the transaction read set and write set
- Conflict detection through the cache coherence protocol
  - Coherence lookups detect conflicts between transactions
  - Works with snooping and directory coherence
- Note:
  - A register checkpoint must also be taken at transaction begin (to restore execution context state on abort)

HTM design

- Cache lines are annotated to track the read set and write set
  - R bit: indicates data read by the transaction (set on loads)
  - W bit: indicates data written by the transaction (set on stores)
  - R/W bits can be at word or cache-line granularity
  - R/W bits are gang-cleared on transaction commit or abort
  - For eager versioning, need a 2nd cache write for the undo log
- Coherence requests check R/W bits to detect conflicts
  - Shared request to a W-word is a read-write conflict
  - Exclusive request to an R-word is a write-read conflict
  - Exclusive request to a W-word is a write-write conflict

Example HTM implementation: lazy-optimistic

- **CPU changes**
  - Ability to checkpoint register state (available in many CPUs)
  - TM state registers (status, pointers to handlers, …)
- **Cache changes**
  - R bit indicates membership in the read set
  - W bit indicates membership in the write set

HTM transaction execution

```
Xbegin
Load A
Load B
Store C ← 5
Xcommit
```

- **Transaction begin**
  - Initialize CPU and cache state
  - Take a register checkpoint
- **Load operation**
  - Serve the cache miss if needed
  - Mark the data as part of the read set
- **Store operation**
  - Service the cache miss if needed
  - Mark the data as part of the write set (note: this is not a load into exclusive state. Why?)
HTM transaction execution: commit

```
Xbegin
Load A
Load B
Store C ← 5
Xcommit
```

Cache state after commit (R and W bits reset):

| R | W | V | Tag | Data |
|---|---|---|-----|------|
| 0 | 0 | 1 | B   |      |
| 0 | 0 | 1 | A   |      |
| 0 | 0 | 1 | C   |      |

**Fast two-phase commit**

- **Validate**: request exclusive access to write set lines (if needed)
- **Commit**: gang-reset R and W bits, which turns the write set data into valid (dirty) data

**upgradeX C** (result: C is now in exclusive, dirty state)

HTM transaction execution: detect/abort

Assume a remote processor commits a transaction with writes to A and D

```
Xbegin
Load A
Load B
Store C ← 5
Xcommit
```

**Fast conflict detection and abort**

- Check: look up exclusive requests (coherence requests from another core’s commit) in the read set and write set
- Abort: invalidate the write set, gang-reset R and W bits, restore to the register checkpoint

(The remote core’s write of A conflicts with the local read of A: this triggers an abort of the pending local transaction)

Hardware transactional memory support in the Intel Haswell architecture *

- New instructions for “restricted transactional memory” (RTM)
  - `xbegin`: takes a pointer to a “fallback address” in case of abort
    - e.g., fall back to a code path with a spin lock
  - `xend`
  - `xabort`
- Implementation: tracks the read set and write set in the L1 cache
- The processor makes sure all memory operations commit atomically
  - But the processor may automatically abort a transaction for many reasons (e.g., eviction of a line in the read or write set will cause a transaction abort)
  - The implementation does not guarantee progress (see fallback address)
  - The Intel optimization guide (ch. 12) gives guidelines for increasing the probability that transactions will not abort

\* Shipped with a bug that caused Intel to disable it when discovered in 2014; supposedly fixed in Broadwell-architecture chips

Summary: transactional memory

- Atomic construct: declaration of atomic behavior
  - Motivating idea: increase the simplicity of synchronization, without (significantly) sacrificing performance
- Transactional memory implementation
  - Many variants have been proposed: SW, HW, SW+HW
  - Implementations differ in:
    - Versioning policy (eager vs. lazy)
    - Conflict detection policy (pessimistic vs. optimistic)
    - Detection granularity
- Hardware transactional memory
  - Versioned data is kept in caches
  - Conflict detection mechanisms are built upon the coherence protocol
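As an illustration of the RTM fallback idiom described above (a minimal sketch, not from the slides; it uses the `_xbegin`/`_xend` compiler intrinsics from `<immintrin.h>` and compiles with `-mrtm`):

```c
#include <immintrin.h>
#include <pthread.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;

void deposit(int *balance, int amount) {
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        /* Transactional path. A production version would also read the
           fallback lock here, so the transaction aborts if another
           thread currently holds it. */
        *balance += amount;
        _xend();                       /* attempt to commit */
    } else {
        /* Transaction aborted (conflict, capacity, interrupt, ...):
           fall back to a conventional lock, which guarantees progress. */
        pthread_mutex_lock(&fallback_lock);
        *balance += amount;
        pthread_mutex_unlock(&fallback_lock);
    }
}
```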
{"Source-Url": "http://15418.courses.cs.cmu.edu/spring2015content/lectures/19_transactionalmem/19_transactionalmem_slides.pdf", "len_cl100k_base": 5993, "olmocr-version": "0.1.53", "pdf-total-pages": 64, "total-fallback-pages": 0, "total-input-tokens": 96565, "total-output-tokens": 8482, "length": "2e12", "weborganizer": {"__label__adult": 0.0003333091735839844, "__label__art_design": 0.0002199411392211914, "__label__crime_law": 0.0003037452697753906, "__label__education_jobs": 0.0005555152893066406, "__label__entertainment": 4.404783248901367e-05, "__label__fashion_beauty": 0.00011748075485229492, "__label__finance_business": 0.0001437664031982422, "__label__food_dining": 0.00029969215393066406, "__label__games": 0.0005559921264648438, "__label__hardware": 0.0017261505126953125, "__label__health": 0.000331878662109375, "__label__history": 0.00019240379333496096, "__label__home_hobbies": 0.0001036524772644043, "__label__industrial": 0.00047206878662109375, "__label__literature": 0.00015163421630859375, "__label__politics": 0.0002053976058959961, "__label__religion": 0.0004405975341796875, "__label__science_tech": 0.009674072265625, "__label__social_life": 7.385015487670898e-05, "__label__software": 0.0035381317138671875, "__label__software_dev": 0.9794921875, "__label__sports_fitness": 0.0003628730773925781, "__label__transportation": 0.0006771087646484375, "__label__travel": 0.00018393993377685547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24663, 0.01566]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24663, 0.46674]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24663, 0.79287]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 146, false], [146, 708, null], [708, 1177, null], [1177, 1641, null], [1641, 2385, null], [2385, 2782, null], [2782, 3381, null], [3381, 3413, null], [3413, 4119, null], [4119, 4573, null], [4573, 5114, null], [5114, 5288, null], [5288, 5743, null], [5743, 5866, null], [5866, 5967, null], [5967, 6068, null], [6068, 6169, null], [6169, 6270, null], [6270, 6371, null], [6371, 6583, null], [6583, 6688, null], [6688, 6802, null], [6802, 7097, null], [7097, 7317, null], [7317, 7550, null], [7550, 8105, null], [8105, 8529, null], [8529, 9056, null], [9056, 9782, null], [9782, 10359, null], [10359, 10855, null], [10855, 11411, null], [11411, 12109, null], [12109, 12376, null], [12376, 12605, null], [12605, 13421, null], [13421, 13455, null], [13455, 13816, null], [13816, 14230, null], [14230, 14427, null], [14427, 14860, null], [14860, 15258, null], [15258, 16090, null], [16090, 16534, null], [16534, 17061, null], [17061, 17642, null], [17642, 18256, null], [18256, 18484, null], [18484, 19039, null], [19039, 19527, null], [19527, 20056, null], [20056, 20537, null], [20537, 21124, null], [21124, 21306, null], [21306, 21458, null], [21458, 21660, null], [21660, 21836, null], [21836, 21991, null], [21991, 22227, null], [22227, 22719, null], [22719, 23239, null], [23239, 24087, null], [24087, 24663, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 146, true], [146, 708, null], [708, 1177, null], [1177, 1641, null], [1641, 2385, null], [2385, 2782, null], [2782, 3381, null], [3381, 3413, null], [3413, 4119, null], [4119, 4573, null], [4573, 5114, null], [5114, 5288, null], [5288, 5743, null], [5743, 5866, null], [5866, 5967, null], [5967, 6068, null], [6068, 6169, null], [6169, 6270, null], [6270, 6371, 
null], [6371, 6583, null], [6583, 6688, null], [6688, 6802, null], [6802, 7097, null], [7097, 7317, null], [7317, 7550, null], [7550, 8105, null], [8105, 8529, null], [8529, 9056, null], [9056, 9782, null], [9782, 10359, null], [10359, 10855, null], [10855, 11411, null], [11411, 12109, null], [12109, 12376, null], [12376, 12605, null], [12605, 13421, null], [13421, 13455, null], [13455, 13816, null], [13816, 14230, null], [14230, 14427, null], [14427, 14860, null], [14860, 15258, null], [15258, 16090, null], [16090, 16534, null], [16534, 17061, null], [17061, 17642, null], [17642, 18256, null], [18256, 18484, null], [18484, 19039, null], [19039, 19527, null], [19527, 20056, null], [20056, 20537, null], [20537, 21124, null], [21124, 21306, null], [21306, 21458, null], [21458, 21660, null], [21660, 21836, null], [21836, 21991, null], [21991, 22227, null], [22227, 22719, null], [22719, 23239, null], [23239, 24087, null], [24087, 24663, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24663, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24663, null]], "pdf_page_numbers": [[0, 0, 1], [0, 146, 2], [146, 708, 3], [708, 1177, 4], [1177, 1641, 5], [1641, 2385, 6], [2385, 2782, 7], [2782, 3381, 8], [3381, 3413, 9], [3413, 4119, 10], [4119, 4573, 11], [4573, 5114, 12], [5114, 5288, 13], [5288, 5743, 14], [5743, 5866, 15], [5866, 5967, 16], [5967, 6068, 17], [6068, 6169, 18], [6169, 6270, 19], [6270, 6371, 20], [6371, 6583, 21], [6583, 6688, 22], [6688, 6802, 23], [6802, 7097, 24], [7097, 7317, 25], [7317, 7550, 26], [7550, 8105, 27], [8105, 8529, 28], [8529, 9056, 29], [9056, 9782, 30], [9782, 10359, 31], [10359, 10855, 32], [10855, 11411, 33], [11411, 12109, 34], [12109, 12376, 35], [12376, 12605, 36], [12605, 13421, 37], [13421, 13455, 38], [13455, 13816, 39], [13816, 14230, 40], [14230, 14427, 41], [14427, 14860, 42], [14860, 15258, 43], [15258, 16090, 44], [16090, 16534, 45], [16534, 17061, 46], [17061, 17642, 47], [17642, 18256, 48], [18256, 18484, 49], [18484, 19039, 50], [19039, 19527, 51], [19527, 20056, 52], [20056, 20537, 53], [20537, 21124, 54], [21124, 21306, 55], [21306, 21458, 56], [21458, 21660, 57], [21660, 21836, 58], [21836, 21991, 59], [21991, 22227, 60], [22227, 22719, 61], [22719, 23239, 62], [23239, 24087, 63], [24087, 24663, 64]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24663, 0.00728]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
6a26bc1ce7aa55c5b770a621fedaada72109e745
The PFILTER Firewall Compiler for Clusters

Neil Gorsuch, NCSA, University of Illinois, Urbana, Illinois

Linux clusters range from tiny to huge, but all sizes can benefit from effective firewalls. Stateful packet filtering firewalls can provide excellent security from network attacks, but are difficult at best to set up and maintain. When packet filtering is combined with packet forwarding, NATTING, and pseudo interfaces, a single machine can provide firewall protection for a private network of machines, while allowing the protected machines to have complete access to the general networks, and can allow the protected machines to be visible at general network addresses while maintaining firewall protection for them. This paper will discuss an open source, easily configurable packet filtering compiler system for clusters that can provide all these benefits.

Computer clusters of all sizes benefit from a firewall. Stateful firewalls offer an excellent level of security against network attacks, but they are not easy to configure and maintain afterwards. By combining packet filtering, packet forwarding, network address translation, and virtual network interfacing, a single firewall machine can protect a private network of several computers, which retain complete access to the external network. This article presents an open-source application that simplifies the compilation of packet filtering rules for the network protection of a computer cluster.

1 Introduction

Packet filtering is the process of examining each network packet as it comes into or through a device, and either allowing, dropping or rejecting the packet based on various factors such as the source and destination addresses and port numbers, and whether the packet is the start of a new connection attempt. Packet filtering can block all incoming network connection attempts except for the ones that are explicitly allowed. This prevents computer configuration mistakes from allowing an insecure network service to be accessed by mistake. Stateful packet filtering keeps track of each network communication's sequence of packets, and provides for simpler, easier to set up and maintain, and more secure firewalls. All cluster machines that can receive packets from outside the cluster should either have their own packet filtering installed, or a common firewall machine should filter all packets going in and out of the cluster.

Using packet filtering to set up a firewall on a computer system is difficult at best, being similar to writing programs in assembly language. A method is needed for system administrators to set up and maintain good packet filtering firewalls without having to learn the intricacies of packet filtering commands.

2 Security Layers

To provide effective cluster security, multiple security layers should be provided. Cluster network security layers should ideally consist of: router filtering, network stack protections, IP masquerading and NATTING, packet filtering, disabling unused network services, TCPwrappers, and configuring applications.
2.1 Router Filtering

If possible, routers should be set up to block various types of unwanted packets, including: all source-spoofed packets, all packets claiming to be coming from an RFC 1918 “private” address that arrive from a public network, all packets coming into the cluster going to most privileged ports except for a few explicitly allowed ports, and all packets destined for various “problem” high ports such as X or NFS. Redirect, echo request, timestamp request, timestamp reply, address mask request, and address mask reply ICMP packets coming into the cluster should be dropped. Some routers lack these filtering capabilities, while others are difficult to program. Because routers cannot typically track connections, outside attackers can sometimes craft special network packets to bypass router filtering.

2.2 Network Stack Protections

The Linux kernel has a number of attributes that help with network security, which can be turned on or off through entries in the /proc directory tree. Some of the more useful features are: source spoofing prevention, disabling of ICMP redirect packets, and SYN cookie attack protection.

2.3 IP Masquerading and NATTING

If some of a cluster’s nodes are only connected to a private cluster network, and it is desired for the nodes to be able to access outside network resources, a machine needs to forward network packets between the private cluster network and other networks. To maintain the privacy of nodes with no direct outside network connection, they should be assigned IP addresses that are not directly accessible outside the cluster. RFC 1918 reserves the private address ranges 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16, which are not routed across the Internet.

To allow hidden cluster nodes to access outside network resources, the machine doing packet forwarding needs to provide IP masquerading. This alters all packets being forwarded from a hidden cluster node to an address outside the cluster to appear as if they came from the forwarding machine. Packets coming back in as part of a masqueraded connection are translated and forwarded to the appropriate hidden cluster node. NAT (Network Address Translation) is this process of modifying the addresses within the packets as they are being forwarded. Setting up IP masquerading requires special packet filtering rules.

2.4 Packet Filtering

Packet filtering is one of the most effective security layers, because it affects all network communication, intercepting network packets before they are passed up to application programs. Linux kernels from version 2.4 on can track active network connections packet by packet. This is called stateful connection tracking. In addition to filtering which network connections are allowed to start and proceed, stateful packet filters can block all packets that are not part of a valid ongoing network connection. This prevents attackers from passing specially crafted packets into high network ports.

The main drawback of packet filtering is that it is difficult to configure. BSD-derived operating systems use the ipfw command to control packet filtering, while Linux systems use the ipfwadm, ipchains, or iptables commands (the latter built on the kernel's netfilter framework). Without special tools, scripts consisting of many commands have to be hand-coded and maintained. It is difficult for non-experts to find adequate information to allow them to produce scripts that provide secure firewalls.
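For illustration (a sketch, not from the paper), even a minimal hand-written stateful ruleset already involves several non-obvious iptables commands:

```
# Default-deny incoming traffic; allow loopback, established
# connections, and new inbound SSH only.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
```

A realistic cluster firewall multiplies rules like these across interfaces, services, and NAT chains, which is the complexity a firewall compiler is meant to hide.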
2.5 Disabling Unused Network Services

The most straightforward method of reducing network vulnerabilities is to disable incoming network services. Any unused or unnecessary incoming network services should be disabled. Most of them can be disabled using the linuxconf program. A few of them need to have their entries commented out in the /etc/inetd.conf file. Unfortunately, earlier versions of Linux, and some network services in current Linux versions, do not follow this model. They can be inadvertently turned on, and it is not always obvious that this has happened.

2.6 TCPwrappers

Newer versions of Linux support TCPwrappers, which adds another layer of application-level filtering. This is accomplished through inetd, with the files /etc/hosts.deny and /etc/hosts.allow controlling access for each network service. Unfortunately, some network services cannot be filtered this way.

2.7 Application Layer Filtering

Some server applications can be set to block access from certain addresses, or to allow access only to certain addresses. This feature should be used wherever possible. For example, the exporting of NFS shares can be restricted to a limited list of machines and addresses, with or without write access.

3 Packet Re-Writing

Linux and other operating systems can modify network packets as they pass through a system. This is a more sophisticated type of NAT and can yield some very useful results. Since this is controlled through the packet filtering rules, a packet filtering compiler needs to control this.

3.1 Host Aliasing

Consider a firewall machine that has multiple network interfaces and that is acting as an IP masquerading firewall for one of the interfaces. In some cases it is advantageous to be able to access one or more of the machines that are on the private network through a public address. However, if the access method desired for more than one of the machines is through the same type of service, a method is needed to have those machines appear as if they were also on the public network, with only outside connections to the desired services being passed on the public address of the machines. This allows one firewall machine to act as a gateway for a number of hidden machines behind it, with all of the firewalling and packet filtering protection only needing to be set up on one machine.

A classic example of this is a development laboratory. There might be a dynamic mix of machines or clusters that are constantly being rebuilt and that don’t need the administrative overhead of setting up firewalls on each one every time they are rebuilt. But they all need to be accessible from the public networks through a secure communication channel such as SSH. With host aliasing, this needs to be set up only once on the laboratory’s firewall machine.

3.2 Packet Redirection or Forwarding

Sometimes packets need to be re-written and sent to a different destination than intended. One example of this would be having all outgoing packets that are destined for web servers be redirected to a proxy web server on a local machine.
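On Linux, such a redirect can be expressed with a single NAT rule (an illustrative sketch, not from the paper; it assumes a local proxy listening on port 3128):

```
# Transparently rewrite outbound web traffic to a local proxy.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3128
```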
4 Packet Filtering Details

Linux kernels use “chains” of packet filtering rules to filter packets. Each filtering rule has a set of matching conditions and an action to take if the packet is matched. Some of the conditions that packets can be matched by are source address, destination address, source port, destination port, TCP flags, type, MAC address, length, throttling limits, specific byte patterns, and the user id of locally generated packets.

Some chains are pre-defined and always exist; others can be defined or deleted by the user. Rules can be added to any chain, and any chain can have all its rules flushed from it. Packets can traverse more than one chain.

Linux version 2.4 and later kernels support integrated connection tracking and stateful inspection. This adds more ways to match packets, based on whether they are new packets, packets that are part of an existing connection, or packets logically related to an existing connection. In addition to greatly increasing security, this also vastly simplifies filtering rulesets.

A script that controls packet filtering is difficult to set up. The cluster administrator would like to specify something like “block every incoming connection attempt except for SSH to machines 1 and 2, and allow all cluster machines to access the outside network.” To do this safely requires a script that is dozens of lines long with many commands, and setting up various network stack parameters by writing into the /proc directory tree. Packet filtering scripts are akin to an assembly language. For things to run correctly, “glue” code has to be added. What system administrators need is a method to “compile” packet filtering firewall scripts that requires no knowledge of packet filtering configuration commands. Given this need, it was decided to implement a packet filtering compiler.

5 Firewall Compiler Requirements

The following requirements were determined for a packet filtering firewall compiler.

Firewall packet filtering rules should be specified in a high-level language. The filtering language should be independent of iptables, ipchains, netfilter or whatever other packet filtering commands are actually used to implement the firewall. The compiler shall be designed so that any Unix variant that has commands to configure packet filtering can be supported.

The firewall compiler shall convert a text configuration file consisting of high-level filtering language directives into a complete firewall implementation that includes all commands to modify appropriate kernel network stack parameters to provide added security.

A GUI shall be provided that modifies the same firewall text configuration file. An administrator may go back and forth between editing the firewall text configuration file and using the GUI, with changes done by one method being reflected in both.

The compiler defaults should allow no incoming network connections other than those that are specifically enabled. The compiler defaults should allow all outgoing network connections.

On Linux, the compiler should be installed as a system service so that a firewall can be enabled or disabled with the /sbin/chkconfig command, and started, stopped, and restarted with the /sbin/service command. Every time the compiler is “started” or “restarted”, the firewall configuration file should be re-compiled. When the compiler is “stopped” it should turn off all network packet filtering, letting all packets be accepted. The /sbin/service command shall also support a “status” command, and a “chains” command to list the packet filtering chains currently in effect.

The compiler should be available as binary and source rpms, and as a source tarball that a “make” then “make install” will install correctly. The compiler should be implemented using an interpretive language. The compiler itself should never need to be compiled and linked as a binary image. The compiler rpm shall be of the “noarch” type, and not tied to any particular hardware platform.
The firewall filtering language should be reasonably free-form, and support comment lines and comments at the end of lines. The firewall filtering language should allow for sequences of network packet matching, with each successful match allowing the packet to be accepted, dropped, or rejected. The firewall filtering language should support defined constants whose values cannot be changed once they are set, and symbolic variables that can be redefined and changed. Constant and variable values should consist of strings. The firewall filtering language should support macros that can be expanded with parameter substitution. The firewall filtering language should support conditional blocks.

Multiple network interfaces have to be supported per machine. Each network interface shall be able to be marked separately as being attached to either a trusted network, where no filtering is done for connection attempts from that network, or marked as being on an untrusted network, where all connection attempts to the host machine shall pass through the filtering rules.

One or more network interfaces can be marked to have packets forwarded by using IP masquerading NAT. All machines attached to interfaces so marked shall have access to public networks through the firewall machine.

One or more machines attached to a network interface that is marked as being firewalled by IP masquerading NAT shall be able to be aliased onto an address on another network. For each aliased machine, a pseudo interface shall be created with its own separate address. Each pseudo interface shall be able to have its own filtering rules that are specific to its address, and connection attempts from the outside that are allowed will be forwarded/translated onto the real address of the aliased machine on the private network. Outgoing connections from aliased machines will be translated so that they appear to be originating from the machine’s pseudo address.

The compiler shall support connection-tracked flexible packet forwarding. The “glue” code parts of the compiled output should be user-configurable through text files. Network services to be allowed and/or blocked should be able to be defined in user-modifiable text files. Network services should allow for more complex allowing and/or blocking, including embedded shell script fragments.

At present, all of the requirements except the GUI have been met. The GUI is in development.

6 PFILTER – a Firewall Compiler

The firewall compiler that was developed is called PFILTER (Packet FILTER). It was decided to implement PFILTER in the Perl language. There are better interpretive languages available, but none are so widespread as Perl. The main PFILTER program is installed as a Perl executable program at /usr/sbin/pfilter. The remainder of the executable program is stored as included Perl files in the /usr/lib/pfilter directory. Included files were chosen instead of the more traditional Perl modules to ensure that PFILTER functions are placed in a more secure directory, and because the functions are very specific to PFILTER and generally not of use to other programs. PFILTER is an open source project hosted on SourceForge; see http://sourceforge.net/projects/pfilter/

6.1 Ruleset Files

The PFILTER executable program is designed to do as little as possible. Instead, PFILTER uses built-in files called ruleset files to do most of the work of compiling output scripts. The ruleset files are editable text files that can be modified or added to by system administrators.
Heavy use is made of conditional text blocks and macros. This design was chosen for the following benefits:

PFILTER determines which packet filtering system is in use, such as iptables or ipchains or netfilter, and sets a constant value to indicate which type of system it is; conditional blocks and macros in the ruleset files then generate compiled code for all the different types of systems. This allows for easily adding complete new types of packet filtering systems by simply adding a few macros and some glue code text.

Almost all of the “glue” code that is put in the compiled output is specified in the ruleset files. This allows for a vast amount of flexibility, and support for many types of Unix variants.

New types of network services to be filtered on or off are defined in ruleset files. This allows system administrators to add new types of network services to be supported. Because network services to be filtered on or off are defined by macros, they can embed shell script fragments in the compiled output code, allowing for filtering services such as NFS which dynamically allocate port numbers.

Because of the built-in constants, variables, conditions, and macros, the main firewall configuration file can be quite sophisticated, and will allow the same configuration file to be used throughout a cluster.

Compiled output scripts shall be optimized, with redundant or impossible combinations of packet filtering sources and destinations not being written to the output scripts. The compiled output scripts shall include verbose comments explaining what each script line does. The configuration source lines that cause generation of output script lines shall be included as comments immediately above their generated output.

6.2 The PFILTER Firewall Language

The PFILTER firewall configuration is defined in the /etc/pfilter.conf file. Comments start with either the # or % characters and can be complete lines. Comments that start with the % character are not copied to the compiled output script, while comments starting with the # character are copied to the output script when appropriate. All directives and keywords are case-insensitive.

6.2.1 Constants and Variables

Named constants are defined like this:

```
%constant% constant_name strings ...
```

A constant’s value will be set to everything after the name, but not including any comments at the end of the line. Trying to redefine a constant with a new value produces an error.

Variables are defined like this:

```
%variable% variable_name strings ...
```

Variables can be re-defined any number of times. To substitute a constant or variable value, simply insert it with % characters on each side. For example, these lines:

```
%variable% var b c
%constant% const d e f
a %var% %const% g
```

will expand into this:

```
a b c d e f g
```

There are a number of constants that are defined by PFILTER during each compilation.

6.2.2 Macros

PFILTER supports macros with named parameters. When a macro is expanded, temporary variables that match the macro’s parameter names are created just for that expansion block. Macros are heavily used when generating the compiled firewall output script.
As an example, these lines:

```
%macro% compute-node node management-node
# compute node %node%
open ssh from %domain%
open tcp 1024:65535 from %cluster%
open tcp 0:1023 from %management-node%
%endmacro
%compute-node% node17 mgmt12
```

would expand to this:

```
# compute node node17
open ssh from %domain%
open tcp 1024:65535 from %cluster%
open tcp 0:1023 from mgmt12
```

Macro definitions and invocations can be nested.

6.2.3 Conditional Blocks

Blocks of text or single lines can be conditionally included in the output based on any of the following types of conditional expressions:

```
%ifdef name             is true if constant/variable is defined
%ifndef name            is true if constant/variable is undefined
%if string = string     is true if strings match
%if string != string    is true if strings do not match
%if string              is true if the string is non-blank
```

Conditionals can either surround a block of lines, or only affect one line. If the conditional expression is at the beginning of a line, the lines following it, up until a line that starts with %endif, are included in the output if the condition is true. If the end of a line is a conditional expression, that line will be included in the output if the condition is true. The conditional expressions and any possible %endif lines are not included in the output.

6.2.4 Protocols, Ports, and Network Services

Some directives include lists of protocols, ports, or network services. These lists can include any of the following, separated by spaces or tab characters: a protocol followed by one or more port numbers, port ranges, and network service names; or a network service name without a preceding protocol.

Network service names are defined in four ways. First, a matrix of protocols, ports, and matching service names is parsed from the /etc/services system file if it exists. Second, some symbolic names are defined by the iptables and ipchains commands, but it is not advisable to use those, since newer versions of those commands might change or remove symbolic names. Third, network services can be defined in either the main firewall configuration file or in one of the firewall ruleset files as a simple constant. For example, scattered in the ruleset files are these lines:

```
%define service-x-protocols-ports tcp udp/6000:6063
%define service-ping-protocols-ports icmp/8
%define service-ssh-protocols-ports tcp/22
```

This defines the network service x as responding to both TCP and UDP ports ranging from 6000 to 6063, defines the network service ping as responding to ICMP packets of type 8, and defines the network service ssh to respond to TCP port 22. A service name defined in this way overrides any definitions in the /etc/services file. In the case of the SSH ruleset entry, the /etc/services lines that define the SSH service as being both TCP and UDP port 22 are overridden, so that when the configuration file says:

```
open ssh
```

it will only open TCP port 22 and not open UDP port 22.

The last way a network service can be defined is with a pair of PFILTER macro definitions. This allows the opening and closing of network services to involve shell script fragments, or to do anything else that isn’t just a list of TCP and/or UDP ports or ICMP types. For example, one of the ruleset files includes these segments:
```
%macro service-multicast-open source destination
# Let all multicast packets through from %source%.
# The destination is always 224.0.0.0-239.255.255.255.
# This method is used because multicast packets are
# identified by their destination address.
%open_protocol_port% %source% 224.0.0.0/4 ANY ANY
%endmacro

%macro service-multicast-close source destination
# Block all multicast packets from %source%.
# The destination is always 224.0.0.0-239.255.255.255.
# This method is used because multicast packets are
# identified by their destination address.
%close_protocol_port% %source% 224.0.0.0/4 ANY ANY
%endmacro
```

This segment defines how to open or close multicast packets going through the firewall. If someone puts this line in their /etc/pfilter.conf configuration file:

```
open multicast from mydomain.com
```

then the compiled output script will include directives that will accept packets from mydomain.com going to anywhere in the address range 224.0.0.0/4.

6.2.5 Network Addresses

Network addresses can be specified as simple IP addresses, IP address ranges, DNS host names, or address ranges that include DNS host names.

6.2.6 Directives to Open/Close Access

To allow or block incoming network connections, these directives are used:

- **OPEN** protocols-ports-services [from source(s)] [to destination(s)]
- **CLOSE** protocols-ports-services [from source(s)] [to destination(s)]

If no source addresses are specified, incoming connections to the specified protocols-ports and/or services are allowed (or blocked) from any address. When no destination addresses are supplied, the connections are allowed or blocked going to the firewall machine. Destination addresses can be specified for other machines if the firewall machine is receiving and forwarding packets to the other machines.

6.2.7 Directives for Interface Attributes

The following directives specify attributes of network interfaces. They all take one or more network attribute directives, followed by one or more network interface names, as arguments.

- **FILTERED** or **UNTRUSTED** – all network connection attempts coming from these interfaces are packet filtered to determine if the connections should be allowed.
- **UNFILTERED** or **TRUSTED** – all network connection attempts coming from these interfaces are allowed, without any packet filtering.
- **PRIVATE** or **PROTECTED** – interfaces marked this way are presumed to be on a private network that is being protected by the firewall machine. IP masquerading and NAT are applied to all outgoing connections coming from these networks.
- **PUBLIC** or **UNPROTECTED** – interfaces marked this way are not being protected behind the firewall machine.

6.2.8 Directives to Set Up Host Aliasing

To set up host aliasing, where a machine on a protected network is made to be partially accessible on a public network, this type of directive is used:

```
ALIAS pseudo real [ports-protocols-services] [from source(s)]
```

For example, if a machine named private1 on a protected network needed to have its SSH server accessible from machines in mydomain.com, which has a 16-bit mask, appearing to be at public address public1, a line like this could be used:

```
ALIAS public1 private1 ssh from mydomain.com/16
```

Ports/protocols and/or services to be passed through do not have to be listed on the line that defines the ALIAS mapping; they can be listed elsewhere. The line above could also have been done like this:

```
ALIAS public1 private1
OPEN ssh from mydomain.com/16
```
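Putting the directives above together, a small cluster head node might be configured as follows (an illustrative sketch assembled from the directives described in this section, not an example from the paper; interface names and addresses are hypothetical, and the exact syntax may differ):

```
% /etc/pfilter.conf for a small cluster head node (illustrative)
%constant% admin-net 10.1.0.0/16

filtered eth0            % public side: filter all connection attempts
trusted private eth1     % cluster side: no filtering, NAT outgoing

# services offered by the firewall machine itself
open ssh from %admin-net%

# expose the SSH server of a hidden node at a public address
ALIAS public1 private1 ssh from %admin-net%
```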
6.2.9 Directives for Packet Forwarding

To set up packet forwarding/re-writing, a line like this needs to be included:

```
FORWARD protocols-ports-services FROM source(s) TO destination(s)
    ONTO forwarded-destination-address [forwarded-protocols-ports-services]
```

A list of protocols/ports and/or services, possibly matched by where they are coming from and/or going to, will be translated to instead go to the forwarded-destination-address, possibly being translated to a different group of protocols/ports/services.

7 Conclusion

The first method of network security to be installed should be packet filtering. This will block access from outside the cluster to all but needed services while allowing the cluster to access the outside freely. With tools such as PFILTER, it is straightforward to generate packet filtering firewalls with ipchains/iptables rule sets.
{"Source-Url": "http://hpcs2003.ccs.usherbrooke.ca/papers/Gorsuch.pdf", "len_cl100k_base": 5703, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 17594, "total-output-tokens": 6152, "length": "2e12", "weborganizer": {"__label__adult": 0.0003101825714111328, "__label__art_design": 0.00032782554626464844, "__label__crime_law": 0.0004744529724121094, "__label__education_jobs": 0.0003750324249267578, "__label__entertainment": 6.711483001708984e-05, "__label__fashion_beauty": 0.00011473894119262697, "__label__finance_business": 0.00028228759765625, "__label__food_dining": 0.00023221969604492188, "__label__games": 0.000640869140625, "__label__hardware": 0.004596710205078125, "__label__health": 0.00028777122497558594, "__label__history": 0.00017213821411132812, "__label__home_hobbies": 0.00010186433792114258, "__label__industrial": 0.0006208419799804688, "__label__literature": 0.00012409687042236328, "__label__politics": 0.0001951456069946289, "__label__religion": 0.0003590583801269531, "__label__science_tech": 0.064453125, "__label__social_life": 6.377696990966797e-05, "__label__software": 0.046051025390625, "__label__software_dev": 0.87939453125, "__label__sports_fitness": 0.0002243518829345703, "__label__transportation": 0.0003218650817871094, "__label__travel": 0.00014388561248779297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28255, 0.02413]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28255, 0.64997]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28255, 0.89369]], "google_gemma-3-12b-it_contains_pii": [[0, 4264, false], [4264, 9459, null], [9459, 14615, null], [14615, 19690, null], [19690, 24461, null], [24461, 28255, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4264, true], [4264, 9459, null], [9459, 14615, null], [14615, 19690, null], [19690, 24461, null], [24461, 28255, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28255, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28255, null]], "pdf_page_numbers": [[0, 4264, 1], [4264, 9459, 2], [9459, 14615, 3], [14615, 19690, 4], [19690, 24461, 5], [24461, 28255, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28255, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
1f272d7a143e1937c79f6841410378ac7c9e858d
Semantic Web Basics (cont.)

CS 431 - March 26, 2008

Carl Lagoze - Cornell University

Acknowledgements for various slides and ideas:
- Ian Horrocks (Manchester U.K.)
- Eric Miller (W3C)
- Dieter Fensel (Berlin)
- Volker Haarslev (Montreal)

We started with the Presentation Web. Atom/RSS gave us data extraction from the Web, but we want to do more:
- beyond just web pages
- compound/complex relationships

Motivating the problem: integrating Web resources in new ways, and the standards/mechanisms for doing this.

• **Stuff we’ve learned so far**
  - URIs - keys for unique identity and joining distributed information
  - XML - markup for serialization of knowledge bases
  - Namespaces - URIs for vocabulary terms

• **Stuff we’ll learn from here**
  - RDF - basic model for representing knowledge via binary relationships
  - Ontologies - definitions of vocabulary terms and their relationships
  - OWL - RDF-based model for expressing ontologies
  - Description logic - formal way to represent ontologies and reason with them

Assertions are statements:
• Resource1 “is about” Resource2
• Resource1 “annotates” Resource2
• Resource1 “illustrates” Resource2
• Organization1 “owns” Resource2
• Person1 “recommends” Resource2

• RDF is a model for making assertions
  - Subject → Predicate → Object

RDF Data Model
- Directed graph expressing typed binary relations between typed resources
- Relations are \( P(S,O) \), written as triples \( (:s\ :p\ :o) \)
- Primitives: resource, property, literal, statement
- Other constructs: container, reification, collection
- URIs for everything except literals
  - “bnodes” are a special case, but more about that later
- Common serialization is RDF/XML

Why URIs?
• Purpose of RDF is integrating information from multiple sources
  - Existing web
  - Introduced entities (people, organizations, taxonomies)
• URIs form the basis of joins of the graph
• Instance data combines into larger graphs
• Inferences can be made based on:
  - RDF primitives
  - Ontology definitions (RDFS, OWL)

RDF Model Primitives: Resource → Property → Resource (a statement)

RDF Model Example #2:

```xml
<?xml version="1.0"?>
<!-- the bib and oa namespace URIs below are illustrative placeholders -->
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:bib="http://example.org/bib#"
         xmlns:oa="http://example.org/oa#"
         xmlns:gss="http://www.w3.org/2001/11/IsaViz/graphstylesheets#"
         xml:base="file:/C:/IsaViz/tmp/tmp41406.rdf">
  <rdf:Description rdf:about="info:uri2">
    <bib:Affiliation rdf:resource="http://www.oclc.org"/>
    <bib:EMail>emiller@w3.org</bib:EMail>
    <bib:Name>Eric Miller</bib:Name>
  </rdf:Description>
  <rdf:Description rdf:about="info:uri1">
    <oa:Creator rdf:resource="info:uri2"/>
    <dc:Title>RDF Presentation</dc:Title>
  </rdf:Description>
</rdf:RDF>
```

Typed Literals:

```xml
<?xml version="1.0" ?>
<rdf:RDF xmlns:gss="http://www.w3.org/2001/11/IsaViz/graphstylesheets#"
         xmlns:core="http://www.example.org/terms/"
         xmlns:s="http://example.org/students/vocab#"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:ex="http://example.org/terms/"
         xml:base="file://C:/cygwin/tmp/tmp2978.rdf">
  <rdf:Description rdf:about="http://www.example.org/staffid/85740">
    <core:age rdf:datatype="http://www.w3.org/2001/XMLSchema#integer">27</core:age>
  </rdf:Description>
</rdf:RDF>
```

Beyond binary relations
• Note the mapping of RDF statements to binary relations that could be stored in a database:
  - (:s :p :o) maps to P(S,O)
  - e.g., Title(R, “War & Peace”)
• But the world is more complex and statements are arbitrary n-tuples
  - Carl Lagoze has his office at 301 College Ave., Ithaca, NY 14850
  - (“Carl Lagoze” “hasOffice” “301 College Ave, Ithaca, NY 14850”)
  - (“Carl Lagoze” “address” “301 College Ave” “Ithaca” “NY” “14850”)
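To make the triple model concrete, here is a minimal sketch in Python using the rdflib library (a toolkit choice of ours, not one the slides prescribe; the vocabulary namespace and book URI are hypothetical):

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Hypothetical vocabulary namespace and resource URI, for illustration only.
EX = Namespace("http://example.org/terms/")
book = URIRef("http://example.org/books/war-and-peace")

g = Graph()
# One statement: subject -> predicate -> object, i.e. Title(R, "War & Peace").
g.add((book, EX.title, Literal("War & Peace")))

print(g.serialize(format="turtle"))
```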
Expressing n-ary relations with blank nodes:

[Figure: a graph in which URI₁ has an `address` arc to a “blank node” (think of it as a local variable); the blank node in turn has `street` → “301 College Ave”, `city` → “Ithaca”, `state` → “NY”, and `zip` → “14850” arcs.]

Another n-ary relation example:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:exterms="http://example.org/stuff/1.0/">
  <rdf:Description rdf:about="http://www.w3.org/TR/rdf-syntax-grammar">
    <dc:title>RDF/XML Syntax Specification (Revised)</dc:title>
    <exterms:editor rdf:nodeID="abc"/>
  </rdf:Description>
  <rdf:Description rdf:nodeID="abc">
    <exterms:fullName>Dave Beckett</exterms:fullName>
    <exterms:homePage rdf:resource="http://purl.org/net/dajobe/"/>
  </rdf:Description>
</rdf:RDF>
```

RDF Containers
• Permit the aggregation of several values for a property
• Express multiple aggregation semantics:
  - unordered
  - sequential or priority order
  - alternative

RDF Containers
• **Bag** - unordered grouping
• **Sequence** - ordered grouping
• **Alternatives** - alternate values from which at least one value must be chosen; the first value is the default or preferred value

Expressing container primitives in binary relations: Jon Doe and Karin Mustermann join their forces to create a gadget with the title *Healthy Meat*.

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:dcTerms="http://purl.org/dc/terms/">
  <rdf:Description dc:title="Healthy Meat">
    <dc:creator>
      <rdf:Bag>
        <rdf:li>Jon Doe</rdf:li>
        <rdf:li>Karin Mustermann</rdf:li>
      </rdf:Bag>
    </dc:creator>
  </rdf:Description>
</rdf:RDF>
```

RDF Collections
- Containers are not closed - an open world assumption applies to all of them
- Collections use lisp-like primitives (first, rest, nil) to express a closed list

RDF Collections example: the students in course 6.001 are Amy, Mohamed, and Johann.

Looking behind the curtain: the RDF meta-model

The RDF meta-model provides the base level for inferences:
• Given a set of facts...
• ...derive additional facts
• Some facts:
  - Sam has a Prius
  - A Prius is a car
  - A car is a type of vehicle
  - Sam has a bicycle
  - A bicycle is a type of vehicle
• Inference by subsumption: Sam has two vehicles
• Inference by human judgment: Sam is an environmentalist
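The blank-node address example can be written the same way. The sketch below (again rdflib, with hypothetical URIs and property names) attaches the four address parts to a bnode rather than flattening them into one n-tuple:

```python
from rdflib import BNode, Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")  # hypothetical vocabulary
person = URIRef("http://example.org/people/carl-lagoze")  # hypothetical URI

g = Graph()
addr = BNode()  # the "blank node" acting as a local variable for the address

g.add((person, EX.address, addr))
g.add((addr, EX.street, Literal("301 College Ave")))
g.add((addr, EX.city, Literal("Ithaca")))
g.add((addr, EX.state, Literal("NY")))
g.add((addr, EX.zip, Literal("14850")))
```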
RDF meta-model basic elements
- All defined in the rdf namespace: http://www.w3.org/1999/02/22-rdf-syntax-ns#
- Types (or classes):
  - rdf:Resource - everything that can be identified (with a URI)
  - rdf:Property - specialization of a resource expressing a binary relation between two resources
  - rdf:Statement - a triple with properties rdf:subject, rdf:predicate, rdf:object
- Properties:
  - rdf:type - the subject is an instance of the category or class given by the value

Use of rdf:type
• “Resource named http://foo.org/inst is a member of class http://foo.org/classes/cl1”
• <http://foo.org/inst> <rdf:type> <http://foo.org/classes/cl1>

Typing the Resources in Statements:

```xml
<?xml version="1.0" ?>
<rdf:RDF xmlns:gss="http://www.w3.org/2001/11/IsaViz/graphstylesheets#"
         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:ex="http://example.org/terms#">
  <ex:person rdf:about="info:123">
  </ex:person>
</rdf:RDF>
```

Formalizing a statement
• An RDF statement is a triple consisting of:
  - subject → rdf:type resource
  - property → rdf:type property
  - object → rdf:type resource | literal
  - Examples: <mailto:lagoze@cs.cornell>
• Expressible as a triple (ns1:s ns2:p ns3:o)

RDF statements and basic types:
- Subject (WYA)
- Predicate (creator)
- Object (Digital Libraries)

## Simple type inferencing

<table>
  <thead>
    <tr><th>If M contains (explicit triple)</th><th>Then add (allowed inference)</th></tr>
  </thead>
  <tbody>
    <tr><td rowspan="3">(:s :p :o)</td><td>(:s rdf:type rdf:Resource)</td></tr>
    <tr><td>(:p rdf:type rdf:Property)</td></tr>
    <tr><td>(:o rdf:type rdf:Resource)</td></tr>
  </tbody>
</table>

Reification - statements about statements: “CL says ‘WYA wrote Digital Libraries’”

Reification structure: staff member 85740 said the weight of item 10245 is 2.4 units.

Reification XML:

```xml
<?xml version="1.0"?>
<!DOCTYPE rdf:RDF [<!ENTITY xsd "http://www.w3.org/2001/XMLSchema#">]>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/"
         xmlns:exterms="http://www.example.com/terms/"
         xml:base="http://www.example.com/2002/04/products">
  <rdf:Description rdf:ID="item10245">
    <exterms:weight rdf:datatype="&xsd;decimal">2.4</exterms:weight>
  </rdf:Description>
  <rdf:Statement rdf:about="#triple12345">
    <rdf:subject rdf:resource="http://www.example.com/2002/04/products#item10245"/>
    <rdf:predicate rdf:resource="http://www.example.com/terms/weight"/>
    <rdf:object rdf:datatype="&xsd;decimal">2.4</rdf:object>
    <dc:creator rdf:resource="http://www.example.com/staffid/85740"/>
  </rdf:Statement>
</rdf:RDF>
```
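The reified weight statement can also be built programmatically. A minimal rdflib sketch, assuming the same example.com URIs as the XML above:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

EXTERMS = Namespace("http://www.example.com/terms/")
DC = Namespace("http://purl.org/dc/elements/1.1/")
item = URIRef("http://www.example.com/2002/04/products#item10245")
staff = URIRef("http://www.example.com/staffid/85740")

g = Graph()
# The base-level statement: the weight of item 10245 is 2.4.
g.add((item, EXTERMS.weight, Literal("2.4", datatype=XSD.decimal)))

# A reification of that statement, so we can also say who asserted it.
stmt = URIRef("http://www.example.com/2002/04/products#triple12345")
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, item))
g.add((stmt, RDF.predicate, EXTERMS.weight))
g.add((stmt, RDF.object, Literal("2.4", datatype=XSD.decimal)))
g.add((stmt, DC.creator, staff))
```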
Why Schema (1)?
- Enables communities to share machine-readable tokens and locally define human-readable labels.

Why Schema (2)? Relationships among vocabularies:
- dc:Creator
- marc:100
- ms:director
- bib:Author

Why Schema (3)? Relationships among vocabulary elements:

[Figure: resource URI:R has an ms:director arc to the literal “John Smith”; since ms:director isA dc:Creator, a dc:Creator arc to “John Smith” can be inferred.]

RDF Schemas
• Declaration of vocabularies
  - classes, properties, and structures defined by a particular community
  - relationship of properties to classes
• Provides substructure for inferences based on existing triples
• NOT prescriptive, but descriptive
• The schema language is an expression of the basic RDF model
  - uses meta-model constructs
  - schemas are “legal” RDF graphs and can be expressed in RDF/XML syntax

RDFS Namespace
- **Class-related**: `rdfs:Class`, `rdfs:subClassOf`
- **Property-related**: `rdfs:subPropertyOf`, `rdfs:domain`, `rdfs:range`

RDF Schema: Specializing Properties
• rdfs:subPropertyOf
  - allows specialization of relations
  - e.g., the property “father” is a subPropertyOf the property “parent”

• subProperty semantics

<table>
  <thead>
    <tr><th>If M contains</th><th>Then add</th></tr>
  </thead>
  <tbody>
    <tr><td>(:p rdfs:subPropertyOf :q)</td><td>(:p rdf:type rdf:Property), (:q rdf:type rdf:Property)</td></tr>
    <tr><td>(:s :p :o), (:p rdfs:subPropertyOf :q)</td><td>(:s :q :o)</td></tr>
    <tr><td>(:p rdfs:subPropertyOf :q), (:q rdfs:subPropertyOf :r)</td><td>(:p rdfs:subPropertyOf :r)</td></tr>
  </tbody>
</table>

Inferences from Property Relationships

(:alice :has-child :betty)
(:alice :has-child :charles)
(:betty :has-child :doris)
(:betty :has-child :eve)
(:charles :has-sibling :betty)
(:doris :has-sister :eve)
(:eve :has-sister :doris)

Sub-Property Semantics
- Using the intended semantics (the example presupposes a schema in which :has-child is an rdfs:subPropertyOf of :has-descendant, with :has-descendant understood transitively), we can infer:

(:alice :has-descendant :betty)
(:alice :has-descendant :charles)
(:alice :has-descendant :doris)
(:alice :has-descendant :eve)

Property-based semantics
- Provide basis for type inference from properties
- Not restrictive like XML schema constraints
- rdfs:domain - classes of resources that have a specific property
- rdfs:range - classes of resources that may be the value of a specific property

<table>
  <thead>
    <tr><th>If M contains</th><th>Then add</th></tr>
  </thead>
  <tbody>
    <tr><td>(:s :p :o), (:p rdfs:domain :t)</td><td>(:s rdf:type :t)</td></tr>
    <tr><td>(:s :p :o), (:p rdfs:range :t)</td><td>(:o rdf:type :t)</td></tr>
  </tbody>
</table>

Inferences from Constraints

(:has-child rdfs:domain :parent)
(:has-child rdfs:range :person)
(:has-sibling rdfs:domain :person)
(:has-brother rdfs:range :male-person)
(:has-sister rdfs:range :female-person)

- Using the intended semantics, we can infer:

(:alice rdf:type :parent)
(:betty rdf:type :parent)
(:eve rdf:type :female-person)
(:charles rdf:type :person)
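The inference tables above are easy to animate. The following toy forward-chaining loop (plain Python, not a real RDFS engine) applies the subPropertyOf, domain, and range rules to the family facts until no new triples appear:

```python
# Triples are plain (subject, predicate, object) tuples; schema triples
# use the marker strings below in place of the rdfs: properties.
SUBPROP, DOMAIN, RANGE, TYPE = "subPropertyOf", "domain", "range", "type"

facts = {
    ("has-child", SUBPROP, "has-descendant"),
    ("has-child", DOMAIN, "parent"),
    ("has-child", RANGE, "person"),
    ("alice", "has-child", "betty"),
    ("alice", "has-child", "charles"),
}

changed = True
while changed:                      # iterate to a fixed point
    changed = False
    for (s, p, o) in list(facts):
        new = set()
        for (p2, rel, q) in list(facts):
            if p2 != p:
                continue
            if rel == SUBPROP:      # (s p o), (p subPropertyOf q) => (s q o)
                new.add((s, q, o))
            elif rel == DOMAIN:     # (s p o), (p domain t) => (s type t)
                new.add((s, TYPE, q))
            elif rel == RANGE:      # (s p o), (p range t) => (o type t)
                new.add((o, TYPE, q))
        if not new <= facts:
            facts |= new
            changed = True

# The derived triples include the slide's examples:
print(("alice", "has-descendant", "betty") in facts)  # True
print(("alice", TYPE, "parent") in facts)             # True
print(("betty", TYPE, "person") in facts)             # True
```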
Class Declaration
• rdfs:Class - resources denoting a set of resources; the range of rdf:type

ex:MotorVehicle rdf:type rdfs:Class
exthings:companyCar rdf:type ex:MotorVehicle

Class Hierarchy
- `rdfs:subClassOf` - creates a class hierarchy

ex:MotorVehicle rdf:type rdfs:Class
ex:SUV rdf:type rdfs:Class
ex:SUV rdfs:subClassOf ex:MotorVehicle
exthings:companyCar rdf:type ex:SUV

## Sub-Class Inferencing

<table>
  <thead>
    <tr><th>If M contains</th><th>Then add</th></tr>
  </thead>
  <tbody>
    <tr><td>(:s rdf:type :o)</td><td>(:o rdf:type rdfs:Class)</td></tr>
    <tr><td>(:s rdf:type :o), (:o rdfs:subClassOf :c)</td><td>(:s rdf:type :c)</td></tr>
    <tr><td>(:s rdfs:subClassOf :o), (:o rdfs:subClassOf :c)</td><td>(:s rdfs:subClassOf :c)</td></tr>
    <tr><td>(:s rdfs:subClassOf :o)</td><td>(:s rdf:type rdfs:Class), (:o rdf:type rdfs:Class)</td></tr>
    <tr><td>(:s rdf:type rdfs:Class)</td><td>(:s rdfs:subClassOf rdf:Resource)</td></tr>
  </tbody>
</table>

Sub-class Inferencing Example

(:parent rdfs:subClassOf :person)
(:male-person rdfs:subClassOf :person)
(:female-person rdfs:subClassOf :person)
(:mother rdfs:subClassOf :parent)
(:mother rdfs:subClassOf :female-person)

• Using the intended semantics (together with the earlier inference that :betty is a :parent), we can infer:

(:betty rdf:type :person)

Jena Toolkit
• Robust tools for building and manipulating RDF models
  - HP Labs Bristol
• Capabilities:
  - model construction
  - XML and N3 parsing
  - model persistence (DB foundation)
  - model querying
  - ontology building
  - inferencing
• http://www.hpl.hp.com/semweb/jena2.htm

IsaViz
- Visualizing and constructing RDF models
- http://www.w3.org/2001/11/IsaViz/
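To close the RDFS discussion, the sub-class rules from the table above can be animated in the same plain-tuples style as the earlier sketch (again a toy fixed-point loop, not a real reasoner):

```python
SUBCLASS, TYPE = "subClassOf", "type"

facts = {
    ("parent", SUBCLASS, "person"),
    ("mother", SUBCLASS, "parent"),
    ("betty", TYPE, "parent"),
}

changed = True
while changed:
    changed = False
    for (s, rel, o) in list(facts):
        if rel == TYPE:             # rdf:type propagates up the hierarchy
            derived = {(s, TYPE, c) for (o2, r2, c) in facts
                       if o2 == o and r2 == SUBCLASS}
        elif rel == SUBCLASS:       # rdfs:subClassOf is transitive
            derived = {(s, SUBCLASS, c) for (o2, r2, c) in facts
                       if o2 == o and r2 == SUBCLASS}
        else:
            derived = set()
        if not derived <= facts:
            facts |= derived
            changed = True

print(("betty", TYPE, "person") in facts)   # True, as in the slide's example
```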
{"Source-Url": "http://www.cs.cornell.edu:80/courses/cs431/2008sp/Lectures/public/lecture_03_26_08.pdf", "len_cl100k_base": 4384, "olmocr-version": "0.1.53", "pdf-total-pages": 50, "total-fallback-pages": 0, "total-input-tokens": 64575, "total-output-tokens": 6497, "length": "2e12", "weborganizer": {"__label__adult": 0.0004143714904785156, "__label__art_design": 0.0010251998901367188, "__label__crime_law": 0.0006780624389648438, "__label__education_jobs": 0.01483917236328125, "__label__entertainment": 0.0001556873321533203, "__label__fashion_beauty": 0.0002532005310058594, "__label__finance_business": 0.000530242919921875, "__label__food_dining": 0.0005288124084472656, "__label__games": 0.0005130767822265625, "__label__hardware": 0.0007119178771972656, "__label__health": 0.0008258819580078125, "__label__history": 0.0007033348083496094, "__label__home_hobbies": 0.0002474784851074219, "__label__industrial": 0.0007925033569335938, "__label__literature": 0.0011873245239257812, "__label__politics": 0.0004734992980957031, "__label__religion": 0.0008368492126464844, "__label__science_tech": 0.1649169921875, "__label__social_life": 0.000545501708984375, "__label__software": 0.0308685302734375, "__label__software_dev": 0.77734375, "__label__sports_fitness": 0.0003085136413574219, "__label__transportation": 0.0008487701416015625, "__label__travel": 0.00035071372985839844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15581, 0.01712]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15581, 0.56397]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15581, 0.56521]], "google_gemma-3-12b-it_contains_pii": [[0, 86, false], [86, 240, null], [240, 277, null], [277, 323, null], [323, 370, null], [370, 422, null], [422, 484, null], [484, 1037, null], [1037, 1307, null], [1307, 1711, null], [1711, 2048, null], [2048, 2112, null], [2112, 2133, null], [2133, 2897, null], [2897, 3537, null], [3537, 3993, null], [3993, 4330, null], [4330, 4942, null], [4942, 5120, null], [5120, 5337, null], [5337, 6010, null], [6010, 6179, null], [6179, 6254, null], [6254, 6297, null], [6297, 6652, null], [6652, 7245, null], [7245, 7411, null], [7411, 7900, null], [7900, 8345, null], [8345, 8452, null], [8452, 8856, null], [8856, 8939, null], [8939, 9024, null], [9024, 9846, null], [9846, 9960, null], [9960, 10061, null], [10061, 10195, null], [10195, 10613, null], [10613, 10763, null], [10763, 11606, null], [11606, 11890, null], [11890, 12224, null], [12224, 13005, null], [13005, 13671, null], [13671, 13846, null], [13846, 14056, null], [14056, 14901, null], [14901, 15194, null], [15194, 15496, null], [15496, 15581, null]], "google_gemma-3-12b-it_is_public_document": [[0, 86, true], [86, 240, null], [240, 277, null], [277, 323, null], [323, 370, null], [370, 422, null], [422, 484, null], [484, 1037, null], [1037, 1307, null], [1307, 1711, null], [1711, 2048, null], [2048, 2112, null], [2112, 2133, null], [2133, 2897, null], [2897, 3537, null], [3537, 3993, null], [3993, 4330, null], [4330, 4942, null], [4942, 5120, null], [5120, 5337, null], [5337, 6010, null], [6010, 6179, null], [6179, 6254, null], [6254, 6297, null], [6297, 6652, null], [6652, 7245, null], [7245, 7411, null], [7411, 7900, null], [7900, 8345, null], [8345, 8452, null], [8452, 8856, null], [8856, 8939, null], [8939, 9024, null], [9024, 9846, null], [9846, 9960, null], [9960, 10061, null], [10061, 10195, null], [10195, 10613, 
null], [10613, 10763, null], [10763, 11606, null], [11606, 11890, null], [11890, 12224, null], [12224, 13005, null], [13005, 13671, null], [13671, 13846, null], [13846, 14056, null], [14056, 14901, null], [14901, 15194, null], [15194, 15496, null], [15496, 15581, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15581, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15581, null]], "pdf_page_numbers": [[0, 86, 1], [86, 240, 2], [240, 277, 3], [277, 323, 4], [323, 370, 5], [370, 422, 6], [422, 484, 7], [484, 1037, 8], [1037, 1307, 9], [1307, 1711, 10], [1711, 2048, 11], [2048, 2112, 12], [2112, 2133, 13], [2133, 2897, 14], [2897, 3537, 15], [3537, 3993, 16], [3993, 4330, 17], [4330, 4942, 18], [4942, 5120, 19], [5120, 5337, 20], [5337, 6010, 21], [6010, 6179, 22], [6179, 6254, 23], [6254, 6297, 24], [6297, 6652, 25], [6652, 7245, 26], [7245, 7411, 27], [7411, 7900, 28], [7900, 8345, 29], [8345, 8452, 30], [8452, 8856, 31], [8856, 8939, 32], [8939, 9024, 33], [9024, 9846, 34], [9846, 9960, 35], [9960, 10061, 36], [10061, 10195, 37], [10195, 10613, 38], [10613, 10763, 39], [10763, 11606, 40], [11606, 11890, 41], [11890, 12224, 42], [12224, 13005, 43], [13005, 13671, 44], [13671, 13846, 45], [13846, 14056, 46], [14056, 14901, 47], [14901, 15194, 48], [15194, 15496, 49], [15496, 15581, 50]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15581, 0.06522]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
743bbaa4a37d70a9de33c0a196cb77f08cf56f68
Data as processes: introducing measurement data into CARMA models

Stephen Gilmore
Laboratory for Foundations of Computer Science
The University of Edinburgh
Edinburgh, Scotland
Stephen.Gilmore@ed.ac.uk

DOI: 10.4204/EPTCS.217.5

Measurement data provides a precise and detailed description of components within a complex system but it is rarely used directly as a component of a system model. In this paper we introduce a model-based representation of measurement data and use it together with modeller-defined components expressed in the CARMA modelling language. We assess both liveness and safety properties of these models with embedded data.

## 1 Introduction

A formal model of a real-world system uses abstraction to distill the most important elements of the system into a succinct representation which is amenable to formal reasoning and analysis. If the modeller creating the model has chosen the right level of abstraction for their analysis then the insights which are gained from model-based reasoning are also applicable to the real-world system itself. If, however, some of the important elements of the system have been mis-represented in the model then the insights gained by model-based reasoning and analysis are of no value, no matter how much trouble or care was taken to obtain them from the (flawed) formal model.

For dynamic models used to study performance properties such as throughput, utilisation, and satisfaction of service-level agreements, one challenge which the modeller must face is representing the timed behaviour of systems accurately. Depending on the kind of model that is being created, either continuous “sure” (deterministic) variables or random variables from a particular probability distribution are used to abstract aspects of timed behaviour in the system under study. These variables are parameters of the model, allowing it to be used in a suite of experiments which explore the behaviour of the model when some of the parameters are perturbed. For such a modelling study to be informative about the system under study it is then necessary to ensure that these parameters are correctly chosen to reflect the durations of the corresponding events in the system.

Techniques for abstracting empirical univariate distributions into statistical distributions such as phase-type distributions are well known and available as algorithms [6] and even as software tools [12]. However, in the case of systems where spatial aspects play a significant role in addition to timed behaviour we have several correlated variables and a multivariate distribution, which means that finding a suitable abstraction is not so easy.
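As a tiny illustration of the univariate case that the paper describes as well understood, the sketch below fits an exponential rate to some hypothetical measured durations by the method of moments (the simplest special case of the phase-type fitting mentioned above); the multivariate, spatially-correlated case resists this kind of one-line abstraction:

```python
# Hypothetical measured event durations, in arbitrary time units.
durations = [1.9, 2.3, 2.1, 1.7, 2.4, 2.0]

mean = sum(durations) / len(durations)
rate = 1.0 / mean   # exponential rate parameter: lambda = 1 / sample mean
print(f"fitted exponential rate: {rate:.3f} events per time unit")
```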
Where a classical model of a spatially-distributed system would typically use a co-ordinate system to provide an abstract representation of space, our concrete component instead uses literal latitude and longitude co-ordinates to represent the current position of a mobile component. The result is a model which is a mix of abstract components crafted by the modeller and concrete components which have been automatically generated from measurement data. This allows us to build models of systems where we selectively choose not to abstract one component, but instead to represent it literally in order to ensure that we do not misrepresent it via an inappropriate abstraction.

From the viewpoint of model-based testing we should see each concrete component as a *black box* component within the model. The component offers up the values of its attributes at any time, but the logic as to why the attribute values change as they do is not represented anywhere in the model, neither in the concrete components generated from measurement data nor in the abstract components defined by the modeller.

Measurement data can be easily obtained from an instrumented system and one can often be in the situation of having an embarrassingly large volume of measurement data. Because the concrete components admit no compact representation of their behaviour, the modelling formalism which we use must be able to tolerate large unstructured components with real-valued attributes such as latitude and longitude, paired with timestamps. Many modelling formalisms are not able to meet this challenge. Classical Petri nets, process algebras, and layered queueing networks do not provide the data types and data structures which are needed to represent concrete components within the model. Here, however, we are working with CARMA (Collective Adaptive Resource-sharing Markovian Agents) [7], a modern feature-rich modelling language which, in addition to providing a stochastic process algebra of guarded recursive processes with unicast and broadcast communication, also provides the primitive data types and data structures of a general-purpose programming language. These features are supplemented by encapsulation mechanisms, general function definitions, and iterative constructs for defining collectives of components. Together these features give the modeller sufficient linguistic power to represent concrete components directly within CARMA models, and we utilise this strength of CARMA modelling here.

## 2 Background

Models in the CARMA language consist of a *collective of components*, set in an *environment* representing the context in which the components operate. The collective is a parallel composition of components, each of which consists of a *process* which represents the component’s behaviour, and a *store* which represents the component’s knowledge. Stores map *attribute names* to *basic values* of primitive types such as boolean, integer and real. Values such as these can be passed as parameters when processes communicate. An output action $\alpha(\vec{v})$ by one process can be matched with an input action $\alpha(\vec{x})$ by another process provided the length of the vector of values $\vec{v}$ is the same as the length of the vector of variables $\vec{x}$.
The arity of a communication action must be consistent throughout the model: it cannot be used to pass one value at one point and two (or more) values at another. Communication actions can either be *unicast* or *broadcast* in CARMA. In all, this provides four types of actions in CARMA:

- **broadcast output** $\alpha^*[\pi](\vec{e})\sigma$: asynchronous (non-blocking) broadcast action $\alpha$ to communication partners identified by the predicate $\pi$; send the values of expressions $\vec{e}$ evaluated in the local store $\gamma$; then apply the update $\sigma$ to $\gamma$.
- **broadcast input** $\alpha^*[\pi](\vec{x})\sigma$: receive a tuple $\vec{x}$ of values $\vec{v}$ sent with an action $\alpha$ from a component whose store satisfies the predicate $\pi[\vec{v}/\vec{x}]$; then apply the update $\sigma$ to the local store $\gamma$.
- **unicast output** $\alpha[\pi](\vec{e})\sigma$: synchronous (blocking) unicast action $\alpha$ to any communication partner satisfying the predicate $\pi$; send the values of expressions $\vec{e}$ evaluated in the local store $\gamma$; then apply the update $\sigma$ to $\gamma$.
- **unicast input** $\alpha[\pi](\vec{x})\sigma$: (point-to-point) receive a tuple $\vec{x}$ of values $\vec{v}$ sent with an action $\alpha$ from a component whose store satisfies the predicate $\pi[\vec{v}/\vec{x}]$; then apply the update $\sigma$ to the local store $\gamma$.

The use of predicates to describe communication partners means that CARMA supports the attribute-based communication paradigm [2], as found in languages such as SCEL [9], where dynamic collections of components called ensembles are formed through having attributes in common. In process algebras such as PEPA [10] where data is abstracted out of the model, attributes are not present and thus attribute-based communication is not possible. Communication partners are determined statically in PEPA whereas they are determined dynamically in CARMA and SCEL.

Processes $(P, Q, \ldots)$ in CARMA are defined by the following grammar:

\[
P, Q ::= \mathrm{nil} \mid \mathrm{kill} \mid act.P \mid P + Q \mid P \mid Q \mid \pi[P] \mid \sigma[P] \mid A \quad (A \triangleq P)
\]
\[
act ::= \alpha^*[\pi](\vec{e})\sigma \mid \alpha^*[\pi](\vec{x})\sigma \mid \alpha[\pi](\vec{e})\sigma \mid \alpha[\pi](\vec{x})\sigma
\]

By convention in a CARMA model activity names begin with a lowercase letter, function and component names begin with a capital letter, and process names are written in all caps. Expressions in the CARMA language (as used in function bodies) are generated by the following grammar:

\[
e_1, e_2, e_3 ::= \mathrm{return}\ e_1 \mid \mathrm{if}(e_1)\{e_2\} \mid \mathrm{if}(e_1)\{e_2\}\ \mathrm{else}\ \{e_3\} \mid e_1; e_2 \mid a_1 \mid b_1
\]
\[
a_1, a_2 ::= 0 \mid 1 \mid \cdots \mid -a_1 \mid a_1 + a_2 \mid a_1 - a_2 \mid a_1 * a_2 \mid a_1 / a_2
\]
\[
b_1, b_2 ::= \mathrm{true} \mid \mathrm{false} \mid a_1 > a_2 \mid a_1 >= a_2 \mid a_1 == a_2 \mid a_1 <= a_2 \mid a_1 < a_2 \mid\ !b_1 \mid b_1\ \&\&\ b_2 \mid b_1 \ \|\ b_2
\]

## 3 Case study: Bus fleet management

For our case study in this paper we show how data on the movement of a bus travelling through the city of Edinburgh can be incorporated into a CARMA model. The purpose of the modelling will be to check whether or not the bus follows the intended route by matching its movements against a high-level description of the route in terms of regions of the city described by predicates.
This is an instance of a fleet management problem as studied in the transportation modelling community: it is important to know the location of all of the vehicles in the fleet and to know that they are serving their assigned routes. Managing the assignment of buses to routes is not as easy in practice as it might appear: changes of assignment are needed during the working day as problems such as vehicular mechanical failures, road closures, or driver unavailability can cause buses to be cancelled or re-routed in ways that would be impossible to predict at the start of the day.

Our specific example is Transport for Edinburgh’s Service 100, which travels between Edinburgh airport and Edinburgh city centre. We can characterise this route as having five significant regions: the airport, suburban area 1, suburban area 2, the city centre, and the garage where the bus is parked overnight. These areas are shown in Figure 1 together with a GPS trace of bus fleet number 937 serving this route. The definition of these regions is given in Table 1.

In the dataset that we are working with here, the position of a bus has been registered once every minute. Regions should be chosen to be large enough to make it effectively improbable that a bus can enter the region and exit from it again without having been observed at least once within it. Regions should bound a portion of the bus route, but not so tightly that small measurement errors in GPS readings could cause a bus to be perceived as outside that region. We have chosen our regions to be simple rectangles because it is easy to test whether a point lies within a simple geometric shape such as this.

We defined five CARMA predicates to test whether a point lies in a region. These are `AtAirport`, `InSuburbs1`, `InSuburbs2`, `InCentre` and `AtGarage`. The CARMA predicate `AtAirport` is shown in Figure 2. The other predicates are similarly easy to define.

Figure 1: The route of bus number 937 in the fleet, assigned to service 100. The route includes the airport (in the bottom left hand corner, in blue), suburban area 1 (in purple), suburban area 2 (in orange), the city centre (in green), and the garage (in the top right hand corner, in black).

<table>
  <thead>
    <tr>
      <th>Significant latitudes</th>
      <th>Significant longitudes</th>
      <th>Region definitions</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>$lat_1 = 55.935$</td>
      <td>$long_1 = -3.38$</td>
      <td>airport = $[(long_1, lat_1), (long_2, lat_4)]$</td>
    </tr>
    <tr>
      <td>$lat_2 = 55.940$</td>
      <td>$long_2 = -3.34$</td>
      <td>suburbs_1 = $[(long_2, lat_1), (long_3, lat_3)]$</td>
    </tr>
    <tr>
      <td>$lat_3 = 55.945$</td>
      <td>$long_3 = -3.28$</td>
      <td>suburbs_2 = $[(long_3, lat_2), (long_4, lat_4)]$</td>
    </tr>
    <tr>
      <td>$lat_4 = 55.950$</td>
      <td>$long_4 = -3.22$</td>
      <td>centre = $[(long_4, lat_3), (long_6, lat_5)]$</td>
    </tr>
    <tr>
      <td>$lat_5 = 55.955$</td>
      <td>$long_5 = -3.20$</td>
      <td>garage = $[(long_5, lat_5), (long_6, lat_6)]$</td>
    </tr>
    <tr>
      <td>$lat_6 = 55.965$</td>
      <td>$long_6 = -3.18$</td>
      <td></td>
    </tr>
  </tbody>
</table>

Table 1: Table of region definitions in terms of latitude and longitude coordinates.

```carma
fun bool AtAirport(real long, real lat) {
  if (long > long1 && long < long2 && lat > lat1 && lat < lat4) {
    return true;
  } else {
    return false;
  }
}
```

Figure 2: The AtAirport function in CARMA.
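A region predicate of this kind is just a point-in-rectangle test. For illustration, the same check can be phrased in a few lines of Python using the airport bounds from Table 1 (the sample trace point is the first row of Figure 3, shown later):

```python
# Airport rectangle from Table 1: long_1 < long < long_2, lat_1 < lat < lat_4.
LONG1, LONG2 = -3.38, -3.34
LAT1, LAT4 = 55.935, 55.950

def at_airport(long: float, lat: float) -> bool:
    """True when (long, lat) lies strictly inside the airport rectangle."""
    return LONG1 < long < LONG2 and LAT1 < lat < LAT4

print(at_airport(-3.363214449536430, 55.948413846216582))  # True
```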
### 3.1 Generating a concrete component from measurement data

Given measurement data whose records consist of latitude and longitude coordinates together with a timestamp, it is straightforward to generate a concrete component which introduces this measurement data into our CARMA model. We generate a straight-line process which broadcasts each move action as it occurs. We choose broadcast output because in general we do not want the abstract components of the model to alter the behaviour of the concrete components. The parameters of the move action are the measurement data in the form of a five-tuple \((\text{latitude}, \text{longitude}, \text{hour}, \text{minutes}, \text{seconds})\). This conversion from measurement data into a CARMA component is performed automatically with a Python script. Figure 3 illustrates this process. The updates \(\sigma_0, \sigma_1\) and \(\sigma_2\) are the obvious updates of the local store to hold the current values of latitude, longitude, hours, minutes, and seconds.

<table>
  <thead>
    <tr>
      <th>Time</th>
      <th>Latitude</th>
      <th>Longitude</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>00:11:39</td>
      <td>55.948413846216582</td>
      <td>-3.363214449536430</td>
    </tr>
    <tr>
      <td>00:12:41</td>
      <td>55.944855742591862</td>
      <td>-3.361568243977290</td>
    </tr>
    <tr>
      <td>00:13:43</td>
      <td>55.937544319811479</td>
      <td>-3.358045792384101</td>
    </tr>
    <tr>
      <td>...</td>
      <td>...</td>
      <td>...</td>
    </tr>
  </tbody>
</table>

Figure 3: Converting measurement data into concrete components in our CARMA model.

Now that we have our measurement data within our CARMA model, we can use the CARMA Eclipse Plugin to execute the model and investigate its behaviour using a measure defined in the CARMA model. Measures are real-valued functions which compute some result from the current state of the model, allowing this to be visualised as an assessment of the model’s behaviour. We can define a measure in CARMA as shown below.

```carma
measure MaxLatitude = max{ my.latitude };
```

The Bus component which we have generated has an attribute in its local state named *latitude* which is updated after every movement action. A component refers to its own local state using the prefix `my` in CARMA (much like the use of `this` in Java). The particular measure shown above records the maximum value of the latitude seen at all timepoints along the trace of the Bus component as it executes.

The results are shown in Figure 4. This plot assures us that the model exhibits some behaviour, in that the latitude of the bus is changing, but we do not yet know whether or not it is following the route of the 100 service, or staying within the five regions of interest specified earlier. We will define an additional component to investigate these questions.

### 3.2 Defining a probe to monitor the concrete component

The next step in checking the correctness of a bus journey is to be able to monitor its behaviour by adding an abstract component (defined by the modeller) to ensure that the progress of the bus moving between regions is as we expect, and that the bus does not leave the five regions which we have defined (see Table 1). We use the term probe for a component whose purpose is simply to monitor changes in another component [5, 8]. The function of a probe is to make it convenient to express checkable properties of a model. A probe is a finite-state automaton which recognises the language of acceptable state transitions within a model and rejects all unacceptable state transition sequences.
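Before looking at the CARMA probes in detail, the automaton idea can be sketched in a few lines of Python. The transition table below paraphrases the looser region-to-region probe introduced shortly (Figure 6); state names follow the paper, the region observation labels are ours, and ERROR is absorbing. This is an illustration, not the paper's CARMA code:

```python
# (current state, observed region) -> next state; anything else is an error.
TRANSITIONS = {
    ("AIRPORT",  "airport"):  "AIRPORT",
    ("AIRPORT",  "suburbs1"): "SUBURBS1",
    ("SUBURBS1", "airport"):  "AIRPORT",
    ("SUBURBS1", "suburbs1"): "SUBURBS1",
    ("SUBURBS1", "suburbs2"): "SUBURBS2",
    ("SUBURBS2", "suburbs1"): "SUBURBS1",
    ("SUBURBS2", "suburbs2"): "SUBURBS2",
    ("SUBURBS2", "centre"):   "CENTRE",
    ("CENTRE",   "suburbs2"): "SUBURBS2",
    ("CENTRE",   "centre"):   "CENTRE",
    ("CENTRE",   "garage"):   "GARAGE",
    ("GARAGE",   "centre"):   "CENTRE",
    ("GARAGE",   "garage"):   "GARAGE",
}

def run_probe(start, observations):
    """Consume a sequence of region observations; ERROR is absorbing."""
    state = start
    for region in observations:
        state = TRANSITIONS.get((state, region), "ERROR")
        if state == "ERROR":
            break
    return state

print(run_probe("AIRPORT", ["airport", "suburbs1", "suburbs2", "centre"]))
```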
Probes are sometimes expressed in formal language terms as regular expressions and sometimes as timed automata [4, 3]. Probes can be used to check both safety and liveness properties of models. Here we are interested in two liveness properties and one safety property, as described below.

**Liveness 1:** The bus visits the airport (probe reaches state AIRPORT).

**Liveness 2:** The bus visits the city centre (probe reaches state CENTRE).

**Safety:** The bus does not leave the five defined regions (probe never reaches state ERROR).

Perhaps the most natural description of the journey of the bus would be to separate out the Airport journey (from the airport to the city centre) and the Return journey (from the city centre to the airport). Denoting these A and R respectively, we would note that the journey from the airport to the city centre passes through the two suburban regions in the order \([S_A^1; S_A^2]\) whereas the return journey passes through the two suburban regions in the order \([S_R^2; S_R^1]\). The probe which captures this separation of the Airport and Return journeys is presented in Figure 5. The predicate guards which label each arrow have been omitted to reduce clutter. The selection of start state means that this probe can only be applied to vehicles which begin their journey at the airport. State E indicates that an error has occurred.

Although this description of the probe is perfectly correct from the abstract notion of the bus route, it is in practice rather too unforgiving of measurement errors. For a bus stopped on the border between the region suburbs1 and the region suburbs2, a small error in GPS measurement could cause the sequence of observations \([S_A^1; S_A^2; S_A^1; S_A^2]\) to be seen, and this cannot be accepted by the probe presented in Figure 5.

For this reason, we work with a looser specification of the bus route, as described by the probe in Figure 6. This does not differentiate between the Airport and Return routes and has the dual benefits of being more compact and of tolerating errors in GPS measurement at the boundaries between regions. For example, the sequence of observations \([S_1;S_2;S_1;S_2]\) can be accepted by this component. The predicate guards which label each arrow have again been omitted in this diagram to reduce clutter.

Of course, when this probe is expressed as a CARMA component it is necessary to be absolutely specific about the predicate guards on each transition. The overall effect of the predicates is to track the location of the bus on the basis of its reported latitude and longitude. A transition to the ERROR state of the probe may only be taken if no other outgoing transition from a state is possible. We use the predicates `AtAirport`, `InSuburbs1`, `InSuburbs2`, `InCentre` and `AtGarage` as described previously. The text of the probe as a CARMA component is shown in Figure 7.

### 3.3 Computing liveness and safety properties using probes

Now we are in a position to use CARMA’s measures to interrogate the probe to see which states it visits. In order to turn our probe’s observations into numerical measures, we count the number of probes in each state.
```carma
component Probe(process Z) {
  store { }
  behaviour {
    AIRPORT  = move*[AtAirport(long, lat)](lat, long, h, m, s){}.AIRPORT
             + move*[InSuburbs1(long, lat)](lat, long, h, m, s){}.SUBURBS1
             + move*[!AtAirport(long, lat) && !InSuburbs1(long, lat)](lat, long, h, m, s){}.ERROR;
    SUBURBS1 = move*[AtAirport(long, lat)](lat, long, h, m, s){}.AIRPORT
             + move*[InSuburbs1(long, lat)](lat, long, h, m, s){}.SUBURBS1
             + move*[InSuburbs2(long, lat)](lat, long, h, m, s){}.SUBURBS2
             + move*[!AtAirport(long, lat) && !InSuburbs1(long, lat) && !InSuburbs2(long, lat)](lat, long, h, m, s){}.ERROR;
    SUBURBS2 = move*[InSuburbs1(long, lat)](lat, long, h, m, s){}.SUBURBS1
             + move*[InSuburbs2(long, lat)](lat, long, h, m, s){}.SUBURBS2
             + move*[InCentre(long, lat)](lat, long, h, m, s){}.CENTRE
             + move*[!InSuburbs1(long, lat) && !InSuburbs2(long, lat) && !InCentre(long, lat)](lat, long, h, m, s){}.ERROR;
    CENTRE   = move*[InSuburbs2(long, lat)](lat, long, h, m, s){}.SUBURBS2
             + move*[InCentre(long, lat)](lat, long, h, m, s){}.CENTRE
             + move*[AtGarage(long, lat)](lat, long, h, m, s){}.GARAGE
             + move*[!InSuburbs2(long, lat) && !InCentre(long, lat) && !AtGarage(long, lat)](lat, long, h, m, s){}.ERROR;
    GARAGE   = move*[InCentre(long, lat)](lat, long, h, m, s){}.CENTRE
             + move*[AtGarage(long, lat)](lat, long, h, m, s){}.GARAGE
             + move*[!InCentre(long, lat) && !AtGarage(long, lat)](lat, long, h, m, s){}.ERROR;
    ERROR    = move*[true](lat, long, h, m, s){}.ERROR;
  }
  init { Z }
}
```

Figure 7: The probe from Figure 6 represented as a CARMA component.

These measures will only ever return 0 or 1 as their results, with 1 indicating that the state (AIRPORT, SUBURBS1, ...) has been visited. We add these measures to our model.

```carma
measure ProbeInStateAIRPORT  = #{ Probe[AIRPORT]  | true };
measure ProbeInStateSUBURBS1 = #{ Probe[SUBURBS1] | true };
measure ProbeInStateSUBURBS2 = #{ Probe[SUBURBS2] | true };
measure ProbeInStateCENTRE   = #{ Probe[CENTRE]   | true };
measure ProbeInStateGARAGE   = #{ Probe[GARAGE]   | true };
measure ProbeInStateERROR    = #{ Probe[ERROR]    | true };
```

There are eleven buses from the Transport for Edinburgh fleet which serve the Airport route at any time. For the day of data which we processed here, these are fleet numbers 937, 938, 939, 940, 941, 943, 944, 945, 947, 948 and 950. We began with bus number 937 and used the CARMA Eclipse Plugin to check our probe against the trajectory of this bus. This showed that the two liveness properties were satisfied (the bus visits the airport and the city centre) and that the safety property was also met (the `ERROR` state of the probe is never reached). The output from the CARMA Eclipse Plugin is shown in Figure 8.

Figure 8: The route of bus number 937 in the fleet. The bus is initially at the airport and enters the garage shortly after midnight. The following day the bus performs the expected route between the city centre and the airport. The `ERROR` state of the probe is never reached for bus 937.

After this, we applied the same analysis to the remaining ten buses which were serving the 100 route, compiling these results into Table 2. All but one of these buses passed both the liveness tests and the safety test. The bus which failed these tests was bus 947, which fails both of the liveness tests and also fails the safety test.
<table>
  <thead>
    <tr>
      <th>Fleet number</th>
      <th>Initial state</th>
      <th>Final state</th>
      <th>AIRPORT visited</th>
      <th>CENTRE visited</th>
      <th>ERROR seen</th>
      <th>Probe 100 result</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>937</td><td>AIRPORT</td><td>GARAGE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>938</td><td>GARAGE</td><td>AIRPORT</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>939</td><td>GARAGE</td><td>CENTRE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>940</td><td>SUBURBS1</td><td>CENTRE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>941</td><td>CENTRE</td><td>GARAGE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>943</td><td>GARAGE</td><td>GARAGE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>944</td><td>SUBURBS2</td><td>GARAGE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>945</td><td>CENTRE</td><td>GARAGE</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>947</td><td>GARAGE</td><td>ERROR</td><td>No</td><td>No</td><td>Yes</td><td>Rejected</td></tr>
    <tr><td>948</td><td>GARAGE</td><td>AIRPORT</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
    <tr><td>950</td><td>GARAGE</td><td>SUBURBS1</td><td>Yes</td><td>Yes</td><td>No</td><td>Accepted</td></tr>
  </tbody>
</table>

Table 2: Results of checking our probe against traces from different buses serving the 100 route.

Looking at the results from the CARMA Eclipse Plugin shown in Figure 9, the bus is initially in the garage (probe state is `GARAGE`) but immediately violates the allowable conditions on its latitude and longitude coordinates on leaving the garage (probe state is `ERROR`). The error state of the probe is an absorbing state, so once the probe has entered this state it will not escape to any non-error state even if the bus later corrects its position to rejoin the correct route for the service.

Figure 9: The route of bus number 947 in the fleet. The bus is initially in the garage at the beginning of the trace but enters the ERROR state immediately on exiting the garage.

Our method of compiling measurement data into concrete components and evaluating it with a probe component has had the desired outcome of finding erroneous behaviour in an unlabelled collection of correct and incorrect trajectories. Bus 947 has been identified as having diverged from the expected route. We now look at its trajectory as a latitude-longitude trace and see if we can conjecture what happened to cause this deviation. Comparing the position with a map of the city of Edinburgh, we see that the bus has taken a route away from the city centre towards the coast. A second Transport for Edinburgh garage is located here, and we can conjecture that this bus had a fault or needed some maintenance activity before it was able to serve the 100 route. After visiting the second garage the (now, presumably, repaired) bus returns to the city centre and begins its service from there, as detailed in Figure 10.

Figure 10: The route of bus number 947 in the fleet, assigned to service 100.
The route includes the expected locations of the airport (in the bottom left hand corner, in blue), suburban area 1 (in purple), suburban area 2 (in orange), the city centre (in green), and the garage (in black), as well as the unexpected locations of suburban area 3 (in yellow) and garage 2 (in the top right-hand corner, in cyan).

### 3.4 Practicality of the method

In our case study here we used GPS measurement data from bus journeys to build our concrete components. The position of the bus is sampled every minute of a twenty-four hour period, thus giving us approximately 1,440 timestamped records of the latitude and longitude of the bus. The measurement data is compiled into a CARMA model by a Python script; this CARMA model is 14,546 lines long. This CARMA model is then compiled into a Java application by the CARMA Eclipse Plugin; this Java application is 81,162 lines long.

The current compilation strategy employed by the CARMA compiler is to compile a CARMA component into a single Java method. For hand-built components created by the modeller this approach has no significant disadvantages, but for large generated concrete components this approach runs the risk of overflowing the maximum Java method size of 65,534 bytes. If working with very large measurement data sets, over longer time periods or with finer sampling granularity, it would be necessary to change the CARMA compilation strategy to compile CARMA processes into individual methods instead, and to have the compiled images of CARMA components call these methods. This moves the maximum-method-size problem so that it reappears only for CARMA processes which have many attributes and very large update blocks.

## 4 Related work

In this work we have considered using data concretely as process components in a model. In earlier work, van der Aalst et al. have devised algorithms for extracting compact process descriptions from data such as system event logs. These algorithms are necessarily incomplete and cannot always find a compact process representation which faithfully encodes an expansive event log. Nonetheless, these workflow mining \cite{van2004} and process mining \cite{wagner2007} approaches give valuable insights into large data sets by identifying likely causal relations between events, and variants of the $\alpha$ algorithm which underlies the workflow mining approach are able to rediscover large classes of processes from event logs.

The Traviando simulation trace analyser can also be applied to inverse problems like these, in that it can be used to generate so-called likely invariants from a finite execution trace. These likely invariants can then be used to help formulate a compact model of a process which would generate such an event trace \cite{gilmore2018}.

## 5 Conclusions

We have shown a method of checking liveness and safety properties of CARMA models in which some components of the system which is being modelled are represented without abstraction, using concrete components which are generated automatically from data. The properties which can be checked are those which can be expressed by finite-state automata (“probes”) whose state-to-state transitions are guarded by predicates over the values of component attributes or parameters passed by communication actions. Checking is automatically performed by the CARMA Eclipse Plugin, which can generate both graphs of the transitions of the probes and graphs of the changes in underlying values within the model (such as component attributes).
Using this we were able to detect, from an unlabelled set of trajectories, the trajectory which failed to satisfy the requirements of a specified bus route. The methods used are generally applicable to any problem where data plays a significant role and whose correctness criterion can be expressed automata-theoretically.

Our interests for future work on this topic include generating probe components directly from route descriptions which list the bus stops on the route together with their latitude and longitude coordinates. Additionally, we wish to add instrumentation to probes in order that they can count visits to the regions of interest on the route, enabling stronger liveness properties to be expressed.

Acknowledgements: This work is supported by the EU QUANTICOL project, 600708. We acknowledge the assistance of Transport for Edinburgh in providing access to their data on bus movement in the city of Edinburgh. Our thanks go to Natalia Zon for her help in developing the CARMA model used in this paper. Our thanks also go to the developers of the CARMA Eclipse Plugin for providing such a useful and robust analysis platform. We are grateful to the anonymous reviewers of this paper for many helpful suggestions for improvement.
{"Source-Url": "http://www.research.ed.ac.uk/portal/files/26693524/1607.037331.pdf", "len_cl100k_base": 7872, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 41796, "total-output-tokens": 10016, "length": "2e12", "weborganizer": {"__label__adult": 0.0005617141723632812, "__label__art_design": 0.0004925727844238281, "__label__crime_law": 0.0005898475646972656, "__label__education_jobs": 0.0018186569213867188, "__label__entertainment": 0.00019550323486328125, "__label__fashion_beauty": 0.0002722740173339844, "__label__finance_business": 0.0007920265197753906, "__label__food_dining": 0.0006475448608398438, "__label__games": 0.0011510848999023438, "__label__hardware": 0.0016994476318359375, "__label__health": 0.000926971435546875, "__label__history": 0.0007915496826171875, "__label__home_hobbies": 0.00023305416107177737, "__label__industrial": 0.0012865066528320312, "__label__literature": 0.000812530517578125, "__label__politics": 0.0006976127624511719, "__label__religion": 0.0006742477416992188, "__label__science_tech": 0.447265625, "__label__social_life": 0.00028061866760253906, "__label__software": 0.01421356201171875, "__label__software_dev": 0.5166015625, "__label__sports_fitness": 0.0005230903625488281, "__label__transportation": 0.006824493408203125, "__label__travel": 0.00039768218994140625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36828, 0.03796]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36828, 0.32311]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36828, 0.87007]], "google_gemma-3-12b-it_contains_pii": [[0, 1246, false], [1246, 4819, null], [4819, 8840, null], [8840, 12793, null], [12793, 14120, null], [14120, 16445, null], [16445, 19645, null], [19645, 20828, null], [20828, 23010, null], [23010, 26913, null], [26913, 29274, null], [29274, 32669, null], [32669, 36828, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1246, true], [1246, 4819, null], [4819, 8840, null], [8840, 12793, null], [12793, 14120, null], [14120, 16445, null], [16445, 19645, null], [19645, 20828, null], [20828, 23010, null], [23010, 26913, null], [26913, 29274, null], [29274, 32669, null], [32669, 36828, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36828, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36828, null]], "pdf_page_numbers": [[0, 1246, 1], [1246, 4819, 2], [4819, 8840, 3], [8840, 12793, 4], [12793, 14120, 5], [14120, 16445, 6], [16445, 19645, 7], [19645, 20828, 8], [20828, 23010, 9], [23010, 26913, 10], [26913, 29274, 11], [29274, 32669, 12], [32669, 36828, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36828, 0.135]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
10b6e4e389b3a504b4b82e051d58aecee212d3d0
Abstract. The sphere of malware attacks is expanding to engulf the compact world of smartphones. This paper sheds light on exploitation tactics used by malware writers in designing iPhone applications that exploit the integrity of the victim’s phone. Our interest is in the harder problem of malware occurrence in iPhone devices: we discuss practical scenarios and effective techniques that can be used to host malicious applications on non-jailbroken Apple iPhones.

Introduction

Malware has begun infecting the mobile world.
Several studies [1, 2] have been conducted showing how mobile malware is exploiting the online world. Android malware infections are exploding compared to the iPhone; the primary reason is that Android is an open-source platform whereas the iPhone’s iOS is closed. Our aim is to discuss the potential for malware occurrence in iPhone devices. In spite of the iPhone’s strong security platform, malware is making inroads. However, successful iPhone exploitation depends on several factors. As we know, Apple has implemented several security barricades in order to secure the iPhone environment, aided by tight control of its app market. Apple considers iPhones that have undergone the jailbreaking process insecure, since all the inherent protection mechanisms have been circumvented by the attacker. Is it possible to write a malicious application that does not exploit a security vulnerability, but can still perform some spyware activity? The answer is yes. This is possible in certain scenarios where a malicious application can be designed to bypass Apple’s application review process and execute illegitimate operations on a user’s iPhone. In this paper, we discuss practical scenarios and effective techniques that can be used to host malicious applications on non-jailbroken Apple iPhones.

Understanding Apple’s iPhone Applied Security Model

Apple enforces strict security features in order to protect the integrity of iOS. Its security model has the following features:

• With the advent of iOS 4, Apple introduced a new data protection procedure in which stored data is secured using hardware encryption. The device stores the user passcode key on an internal chip using 256-bit encryption. The Unique ID (UID) of the device is used as a key to encrypt files on the iPhone.

• The iOS environment is divided into two main partitions. Similar to UNIX, the root partition manages the kernel and base OS; the user partition contains third-party applications and data. All applications run in user mode with a standard set of access rights and built-in restrictions. The iOS system-level binaries are related to OS X and Darwin. In order to preserve the integrity of applications, Apple implements a code signing process [8]. The code signature consists of three parts. First, it contains a UID that is present in the info.plist file under the CFBundleIdentifier key. Second, it requires a seal that is built from hashes and checksums of various files and other components of the application bundle. Third, it requires a digital signature. All the signatures are stored in the Mach-O header. Code signature verification is implemented at the kernel level via the execv() call.

• Third-party applications running on iOS are sandboxed [9]. This concept is implemented to force privilege separation among different components in iOS. It means that third-party applications are not able to run code at the kernel level—a secure practice to avoid exploitation of privileges. The application sandbox is implemented using three techniques. First, entitlements, which decide the functionality of the application. Second, containers, which provide an application directory for read/write operations. Third, the powerbox, which provides a secure way to open and handle dialog boxes. Together these three methods collaboratively form the application sandbox. Of greatest interest to malware writers, third-party applications are not allowed to interact with kernel-level extensions.
Anatomy of Jailbreaking

For completeness, let us take a brief look at jailbreaking. This attack exploits vulnerabilities in browser, plug-in, and iOS components to take control of a victim’s iOS device. As a result, jailbreaking [3, 4] culminates in a complete compromise of the iOS device. It primarily uses security vulnerabilities that provide root control of the device. Once the vulnerability is exploited, the attacker is able to run his native code and turn the victim’s iOS device into a weapon. Jailbreaking also deploys code-signing bypass mechanisms [5] in order to install open-source packages such as Cydia [6]. It is also possible to spread malware after jailbreaking: in 2009, a default SSH password vulnerability was exploited on jailbroken iPhones to propagate the iKee [7] worm and its variants.

iPhone Malware—Exploitation Model

A malware infection in an iPhone can be categorized into three distinct classes:

- The first class of malware results from exploitation of security vulnerabilities to get root-level access. Jailbreaking falls into this category. Once the device is rooted, attackers can start services on the iPhone to turn it into a malicious entity for spreading malware. In this case, the attacker has to target a specific set of victims. This is difficult because jailbreaking is an action of choice: the user decides whether or not to jailbreak his or her iPhone. As a result, attackers force the user to visit a malicious domain using social media tricks to download the malicious code. In a real-world environment, it is hard to spread this class of malware on a large scale because of the trust layer that Apple provides its users by hosting applications on Apple’s online store. The malware exploits root privileges, as the kernel is already compromised after the exploitation of the security vulnerability. iPhone rootkits [10] are also classified into this class. For example, the Dutch iPhone ransomware [11] belongs to this category of malware.

- The second class of malware exploits the default security model of Apple. This is basically exploited by spyware applications that look legitimate and bypass Apple’s App Store verification process. Once in the App Store, infection is easier, as the malicious application can be disseminated to a large number of iOS users. The malicious application might not be able to compromise the kernel, as it runs in the sandbox, but it can definitely steal users’ sensitive information, history, address book contacts, and so on. This class of malware is a classic example of iPhone spyware that exploits the trust boundary between the user and the App Store. For example, SpyPhone [12, 13] falls into this category of malware.

- The third type of malware is a hybrid of the two classes discussed above. Hybrid malware is triggered through a generic application that is hosted on the App Store. When a user downloads it, at first it looks legitimate, but behind the scenes it starts sending texts to the phone numbers listed in the contacts directory of the victim’s iPhone. The text itself carries a link to a malicious website that serves jailbreaking code. Drive-by download attacks are used extensively for spreading this class of malware. For example, iSAM [14] is a hybrid class of iPhone malware.

The lifecycle of mobile malware is presented in Figure 1.

Inside the Apple Kill Switch—Remotely Deactivating Applications

iOS has the built-in protection of a kill switch [15, 16] that enables Apple to kill a malicious application that does not comply with its policies.
Applications installed on the iPhone regularly communicate with the App Store to provide updates about the state of the device. Apple maintains a blacklist of applications that are deemed malicious and should be turned off remotely; it is kept in the ‘unauthorizedapps’ file on an Apple server. We performed a quick check on the required URL in order to see which applications are blacklisted. Figure 2 shows that currently there are no applications marked as unauthorized.

Figure 1: Lifecycle of Mobile Malware
Figure 2: Blacklisted—Unauthorized Apps Check

This functionality is distinct from removing applications from the App Store, because this procedure is designed to deactivate rogue applications remotely. It appears that Apple usually removes an application directly from the App Store; however, the remote deactivation process exists as a proactive defense.

Listing 1: Sandbox profiles

kSBXProfileNoNetwork (= "nonet")
kSBXProfileNoInternet (= "nointernet")
kSBXProfilePureComputation (= "pure-computation")
kSBXProfileNoWriteExceptTemporary (= "write-tmp-only")
kSBXProfileNoWrite (= "nowrite")

Listing 2: Obfuscation using an NSString object

- (NSString *)obfuscate:(NSString *)string withKey:(NSString *)key {
    // Create a mutable data object from the string
    NSMutableData *data =
        [NSMutableData dataWithData:[string dataUsingEncoding:NSUTF8StringEncoding]];
    char *code_ptr = (char *)[data mutableBytes];

    // Map a pointer onto the key bytes
    NSData *keyData = [key dataUsingEncoding:NSUTF8StringEncoding];
    const char *k_data = (const char *)[keyData bytes];
    const char *key_ptr = k_data;
    NSUInteger key_index = 0;

    // XOR every byte of the data with the current key byte, cycling the key
    for (NSUInteger x = 0; x < [data length]; x++) {
        *code_ptr++ ^= *key_ptr++;
        if (++key_index == [keyData length]) {
            key_index = 0;
            key_ptr = k_data;
        }
    }
    // Note: the XORed bytes may not form valid UTF-8, so a production
    // implementation would return the NSData itself or an encoded form of it
    return [[[NSString alloc] initWithData:data
                                  encoding:NSUTF8StringEncoding] autorelease];
}

App Store Application Review—Dependencies and Reality

There are not many details available about Apple’s app review procedures. However, based on a developer’s view, some details can be deduced. Some of the procedures implemented by the App Store are as follows:

• The App Store strictly requires a developer to be enrolled in Apple’s iPhone Developer Program [19]. In order to get approval, the developer has to submit a binary, not the source code, which in turn means that detailed source code analysis is not part of the verification process. The App Store usually checks for user interface inconsistencies, private API calls, and malware. However, malware scrutiny depends on the malware exploitation model mentioned earlier. It is hard to infer details of the Apple application review process, but dynamic and static analysis (pattern matching) are thought to be part of the process. Given what we know about the review process, it is possible that stealthy programming techniques may be able to circumvent the detection modules.

• The Apple iPhone Licensing Agreement [20] requires a developer not to reverse engineer the applications hosted on the App Store or the software development kit components. Based on this fact, it seems reasonable to assume that Apple itself follows this practice and does not reverse engineer submitted applications. In practice, it is not feasible to reverse engineer the thousands of applications submitted on a weekly basis.

• Most mobile malware aims to steal a user’s data at the application layer.
In spite of Apple’s restrictive policies, default access to user data is available to any application running on an iPhone. The sandboxed environment prevents applications from interacting with data owned by other applications.

Obfuscation—Bypassing Blacklisting

Obfuscation can be useful for legitimate developers as well as for malware writers. Obfuscation is used to prevent the exposure of API functionality. For example, best practices suggest avoiding the embedding of hard-coded credentials in an application. However, developers sometimes hide keys in the code using obfuscation, or store credentials on a webserver and rewrite queries after verification. That is, developers implement obfuscation modules for security purposes, and such code needs to pass security testing. Apple requires applications to be robust in nature: as long as an iPhone application is stable and does not crash, the App Store readily accepts an application containing obfuscated modules.

While obfuscation is used by legitimate developers to prevent information leakage, a malware writer can use obfuscation to bypass the App Store verification process. Most static analysis tools use blacklisting, in which a certain set of strings is blacklisted; when the scanner runs over the application code, it matches blacklist patterns using regular expressions. Knowing this, it is possible to bypass a static analysis tool using obfuscation. Let us consider an example. In iPhone applications, strings are declared as NSString [21] objects, which are immutable and represented as arrays of Unicode characters. Listing 2 shows a prototype implementation of obfuscation using an NSString object. It is possible to obfuscate the strings in an iPhone application and then deobfuscate them at run time. There are many algorithms that perform this functionality; however, the XOR operation is an effective way of obfuscating strings. Generally, the following steps implement obfuscated code in iPhone applications:

• The first step is to create a data object from the required string.
• The second step involves the declaration of pointers to the data to be obfuscated and to the encryption key.
• The third step involves a counter that runs through every character in the string and embeds the key using the XOR operation.

Code Hiding in Objective-C and Symbol Stripping

Apple is very strict in its review policy about the use of private API functions that are not documented, because these hidden methods can be used by malware. Generally, applications using private API functions are rejected by the App Store. Objective-C does not provide support for private methods, but it is still possible to write methods that hide malicious code. Below are the two most widely implemented approaches:

• Objective-C has a dynamic method resolution feature in which a method can be bound at run time rather than compile time. The attacker can define a secret function whose signature matches an Objective-C method implementation. The secret function is declared in the class method. When that method gets called for the first time, the malicious code is bound to the class privately. This type of procedure is used to circumvent code detection by a tool such as Class-Dump. Listing 3 shows a code prototype that uses dynamic method resolution. However, a skilled analyst may be able to detect the presence of stealth code; for example, running Otool on a particular method yields the list of selectors used by that method.
However, it is possible to obfuscate the method further by generating selectors at run time using the NSSelectorFromString() function.

• In Objective-C, it is also possible to create functions that work similarly to instance methods, meaning such functions can access instance variables easily. These functions should be defined in the class implementation. It is not the normal way of doing things, but the desired method never appears in the Objective-C runtime, which hampers verification. Listing 4 shows the declaration of the malicious function hide_me, which accesses instance variables. The function hide_me does not have its own selector; rather, it uses the selector of the stealth instance (public) method defined in the class.

The two methods discussed provide a way to design code that can hide from tools that examine code, so that it can be accepted by the App Store. Additionally, stripping is a technique used on UNIX platforms to remove unnecessary information from binary and object files to improve performance. A malicious developer can use stripping to remove information prior to submitting an application binary to the App Store. Doing so removes clues that might indicate the malicious nature of the code.

Exploiting the Remote Server End Points

Generally, iPhone applications communicate with a webserver (HTTP end point) in order to exchange data between the application and the server on a regular basis. It is possible for malware to exploit this HTTP end point mechanism. At the time of verification, Apple performs a behavioral analysis of the application and scrutinizes its communication pattern. At the time of submission, the attacker can make the HTTP end point legitimate; once the application is approved by Apple, the same HTTP end point can be used to serve exploit code, which is downloaded onto the victim’s phone when the application interacts with the remote server. For example, consider the following scenario:

• The attacker writes an application that interacts with a remote server at the URL http://www.mal-app-test.com/error.asp. The error.asp webpage validates the resource and, if that resource is not present, raises an error.
• During the verification process, Apple finds the application legitimate, and it is deemed good enough to host on the App Store.
• Once the application is hosted, it is possible to manipulate the error.asp webpage to deliver exploit code that is downloaded onto the device and performs malicious functions.

This is a realistic scenario that can be exploited to trigger malware infections in an iPhone.
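To make the run-time selector trick concrete, the following minimal sketch (our example; the object and method names are hypothetical) assembles the selector string at run time, so the literal name never appears as a selector constant in the binary:

// Build the selector name at run time so a static scan of the binary
// never sees the literal selector "hide_me"
NSString *name = [@"hide" stringByAppendingString:@"_me"];
SEL sel = NSSelectorFromString(name);
if ([obj respondsToSelector:sel]) {
    [obj performSelector:sel];
}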
Listing 3: Code hiding using dynamic method resolution

#import <Foundation/Foundation.h>
#import <objc/runtime.h>

// Setting up a class interface named Secret
@interface Secret : NSObject {
    int handle;
}
@property(nonatomic) int handle;
- (void)stealth;
@end

// The secret function; its signature matches an Objective-C method
// implementation (id self, SEL _cmd)
void hide_me(id self, SEL _cmd);

@implementation Secret
@synthesize handle;

// Selecting the hide_me secret function and binding it into the class
// the first time it is resolved
+ (BOOL)resolveInstanceMethod:(SEL)aSel {
    if (aSel == @selector(hide_me)) {
        class_addMethod(self, aSel, (IMP)hide_me, "v@:");
        return YES;
    }
    return [super resolveInstanceMethod:aSel];
}

// An instance method holding a reference to hide_me
- (void)stealth {
    [self performSelector:@selector(hide_me)];
}

void hide_me(id self, SEL _cmd) {
    NSLog(@"Inside hide_me: %d", [(Secret *)self handle]);
}
@end

// Class-Dump output: the tool reports nothing about hide_me after static
// discovery of the class methods
// @interface Secret : NSObject { int handle; }
// + (BOOL)resolveInstanceMethod:(SEL)arg1;
// @property(nonatomic) int handle;   // @synthesize handle;
// - (void)stealth;                   // Class-Dump only lists the instance method
// @end

Listing 4: Code hiding using functions as instance methods

// hide_me has no selector of its own; it reuses the selector of the
// public stealth method
- (void)stealth {
    hide_me(self, _cmd);
}

Cautionary Steps

Users play a critical role in the success of malware. There are a number of steps a user can follow to reduce risk. These proactive steps apply to every smartphone, whether Android or iPhone:

• Mobile users should not install unauthorized applications from third-party sources. Installed applications must be verified and authorized by legitimate vendors.
• Users should think twice before clicking any URL from a non-legitimate source. For example, users should be careful while chatting on social media applications such as Facebook and Twitter. Push notification messages should be scrutinized critically before executing any action based on the information in a message. E-mail attachments should not be opened until the user is sure of their legitimacy.
• It is always advisable to install anti-virus software on the mobile device; it scans the device for potentially suspicious activities and notifies the user about changes in the system.
• Use of strong passwords and avoidance of default security policies is always preferred.
• Users should carefully analyze the behavior of their mobile phones for any type of anomalous activity, such as battery drainage, high Internet data usage, and slower execution of applications.

Conclusion

In this paper, we have discussed the state of iPhone malware. There is no doubt that Apple has designed a robust verification policy, but it is still possible to create stealthy malware that can bypass Apple’s verification process. However, doing so requires devising a malicious application in an intelligent way, using stealthy techniques such as code obfuscation, stripping, and code hiding. We believe that malware poses an increasingly serious challenge to the security of our devices, and we need to be proactive in our defenses to ensure the security of our data and privacy.

ABOUT THE AUTHORS

Aditya K. Sood is a senior security researcher and Ph.D. candidate at Michigan State University. He has worked in the security domain for Armorize, COSEINC and KPMG. He is also the founder of SecNiche Security Labs, an independent security research arena for cutting-edge computer security research. At SecNiche, he also acts as an independent researcher and security practitioner, providing services that include software security and malware analysis.
He has been an active speaker at industry conferences and has spoken at RSA, Virus Bulletin, HackInTheBox, ToorCon, HackerHalted, Source, OWASP AppSec USA, Troopers, FOSS, CERT-IN, and others.
E-mail: adizerok@gmail.com
E-mail: soodadit@cse.msu.edu
Phone: 517-755-9911

Richard J. Enbody, Ph.D., is an associate professor in the Department of Computer Science and Engineering at Michigan State University, where he joined the faculty in 1987. He has served as acting and associate chair of the department and as director of the computer engineering undergraduate program. His research interests include computer security; computer architecture; web-based distance education; and parallel processing, especially the application of parallel processing to computational science problems. Enbody has two patents pending on hardware buffer-overflow protection that will prevent most computer worms and viruses.
E-mail: enbody@cse.msu.edu
Phone: 517-353-3389

REFERENCES
10. iPhone Rootkits.
20. iPhone Developer License Agreement, https://www.eff.org/files/20100127_iphone_dev_agr.pdf
{"Source-Url": "http://static1.1.sqspcdn.com/static/f/702523/17039827/1331310022540/201203-Sood.pdf?token=eepwGYV6RWRUARULBauE4DnWB18%3D", "len_cl100k_base": 5158, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 16481, "total-output-tokens": 6142, "length": "2e12", "weborganizer": {"__label__adult": 0.0011796951293945312, "__label__art_design": 0.0008301734924316406, "__label__crime_law": 0.0148773193359375, "__label__education_jobs": 0.002620697021484375, "__label__entertainment": 0.0004031658172607422, "__label__fashion_beauty": 0.0006651878356933594, "__label__finance_business": 0.0005564689636230469, "__label__food_dining": 0.0005311965942382812, "__label__games": 0.004589080810546875, "__label__hardware": 0.043212890625, "__label__health": 0.0013561248779296875, "__label__history": 0.0006017684936523438, "__label__home_hobbies": 0.00032711029052734375, "__label__industrial": 0.000988006591796875, "__label__literature": 0.0008616447448730469, "__label__politics": 0.0006537437438964844, "__label__religion": 0.0008831024169921875, "__label__science_tech": 0.29248046875, "__label__social_life": 0.00023317337036132812, "__label__software": 0.1739501953125, "__label__software_dev": 0.45654296875, "__label__sports_fitness": 0.0005865097045898438, "__label__transportation": 0.0006384849548339844, "__label__travel": 0.0001970529556274414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28603, 0.01509]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28603, 0.32284]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28603, 0.88417]], "google_gemma-3-12b-it_contains_pii": [[0, 7072, false], [7072, 11495, null], [11495, 17030, null], [17030, 22678, null], [22678, 28603, null]], "google_gemma-3-12b-it_is_public_document": [[0, 7072, true], [7072, 11495, null], [11495, 17030, null], [17030, 22678, null], [22678, 28603, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28603, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28603, null]], "pdf_page_numbers": [[0, 7072, 1], [7072, 11495, 2], [11495, 17030, 3], [17030, 22678, 4], [22678, 28603, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28603, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
ea2f1a66808dc79e2366517579ad4aaed6b4aaaf
Mining Generalized Association Rules Using Prutax and Hierarchical Bitmap Index

Jakub Pieprzyk¹ and Mikołaj Morzy²

¹ Comarch S.A., Skladowa 4, 61-897 Poznań. Jakub.Pieprzyk@comarch.pl
² Institute of Computing Science, Poznań University of Technology, Piotrowo 3A, 60-965 Poznań, Poland. Mikolaj.Morzy@cs.put.poznan.pl

Abstract. Association rules are among the most popular and widely used data mining techniques. Often, associations are sought between items forming a taxonomy. Patterns discovered between items from different levels of a taxonomy provide an aggregated view over the data and allow the discovery of trends and regularities that are not apparent in the raw transactional data. Generalized association rule mining, i.e., mining in the presence of a taxonomy of items, is an important augmentation of the original association rule mining framework. Unfortunately, currently available algorithms do not allow efficient discovery of generalized association rules. In this paper we present the state of the art in generalized association rule mining. We describe the hierarchical bitmap index, an efficient physical structure optimized for set processing. Next, we modify the Prutax algorithm by incorporating the hierarchical bitmap index as its crucial internal structure, resulting in the PrutaxHBI algorithm. An experimental evaluation and comparison of the proposed solution with currently available algorithms clearly shows that the proposed algorithm outperforms them under all circumstances.

1 Introduction

Mining of association rules is by far the most popular and widely used data mining technique. An association rule is an expression of the form \( X \Rightarrow Y \), where \( X \) and \( Y \) are sets of items. The intuitive meaning of an association rule is that whenever the itemset \( X \) appears in a transaction, then, with a given probability, the itemset \( Y \) is also present. Application domains of association rule discovery range from market basket analysis, recommender systems, and fraud detection to numerous practical systems in, e.g., insurance policies, investment portfolios, medical database analysis, and many more. There are several efficient algorithms for mining association rules. Unfortunately, many researchers point to the fact that association rules discovered in raw transactional data are useless for analysts and decision makers, because such rules are too detailed to be actionable or understandable. In several applications, items constituting mined itemsets are organized into a taxonomy. Such a taxonomy can reflect a categorization of supermarket goods into product categories, a division of books into genres, etc. A challenging yet indispensable task is to incorporate item taxonomies into the association rule mining process. However, currently available algorithms for association rule mining are not well suited for this task. In this paper we present an efficient algorithm that aims at generalized association rule discovery by incorporating the taxonomy of mined items into the physical indexing structure used by the algorithm. First, we present the hierarchical bitmap index, an indexing structure capable of efficiently indexing large sets with items drawn from huge domains. Next, we briefly describe Prutax, the best state-of-the-art algorithm for mining generalized association rules, and we show how we can significantly enhance Prutax by using the hierarchical bitmap index as the core physical structure for the algorithm.
This leads to the development of the PrutaxHBI algorithm. A set of experiments proves the validity and efficiency of the proposed solution. This paper is organized as follows. In Section 2 we present related work. Basic definitions used throughout the paper are presented in Section 3. In Section 4 we describe the hierarchical bitmap index, which is the core structure used in our algorithm. We present the PrutaxHBI algorithm in Section 5, and we report on the results of the experimental evaluation of our proposal in Section 6. The paper concludes in Section 7 with a summary and a future work agenda.

2 Related Work

The problem of association rule mining was first introduced in [2]. The paper identified the discovery of frequent itemsets as the key step in association rule mining. In [3] the authors introduced the Apriori algorithm, which quickly became the seed for numerous other association rule mining algorithms, e.g., [7]. In particular, a modification of the Apriori algorithm that allowed mining generalized association rules was presented in [8]. The authors presented the Apriori Basic algorithm, which simply extended each database transaction with all ancestors of all items contained in the transaction. In addition, several optimizations of the original Apriori were proposed: Cumulate, Stratify, Estimate, and EstMerge. A similar direction was followed in [9], where several new pruning strategies exploiting the taxonomy of items were presented, including a new pruning strategy called Genex. Another attempt to modify existing Apriori-based algorithms in the direction of efficient generalized association rule mining was presented in [4]. The authors present the family of ML-T* algorithms, which mine associations between items from different levels of a taxonomy, with minimum support thresholds varying between subsequent levels. An entirely different approach is represented by the Prutax algorithm [5]. Prutax uses a vertical database layout and avoids unnecessary candidate itemset generation by performing a depth-first traversal of the itemset lattice. Each candidate is evaluated immediately after generation, and pruning is applied to remove candidate itemsets that contain both an item and its ancestor. In addition, Prutax enforces a frequency ordering on items, thus directing the search through the itemset space from the most general itemsets to the most specific itemsets. However, these optimizations come at the cost of transforming the database to the vertical layout, which may be prohibitively expensive.

3 Basic Definitions

Let $I = \{i_1, \ldots, i_n\}$ be a set of literals called items. Let $\tau$ be a directed acyclic graph defining a taxonomy over the set $I$. An item $i_p$ is the parent of a child item $i_c$ if there exists an edge between vertices $i_p$ and $i_c$ in the graph $\tau$. An item $i_a$ is the ancestor of a descendant item $i_d$ if there exists a path between vertices $i_a$ and $i_d$ in the graph $\tau$. An item $i_b$ is a base item if it has no descendants in the graph $\tau$. Let $D$ be a set of variable-length transactions, where $\forall T \in D : T \subseteq I \land \forall x \in T : x$ is a base item. We say that the transaction $T$ supports an item $x$ if $x \in T$. We say that the transaction $T$ supports an itemset $X$ if it supports every element $x \in X$. The support of an itemset is the number of transactions supporting the itemset.
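To make the support definition concrete, a naive support count over a horizontal transaction layout could look as follows (an illustrative sketch of ours, not part of the paper’s implementation):

```java
import java.util.List;
import java.util.Set;

class SupportCount {
    // Count the transactions that contain every element of itemset x
    static int support(Set<Integer> x, List<Set<Integer>> transactions) {
        int count = 0;
        for (Set<Integer> t : transactions) {
            if (t.containsAll(x)) {
                count++;        // t supports every element of x
            }
        }
        return count;
    }
}
```

An itemset is then frequent exactly when this count exceeds the minimum support threshold defined next.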
The problem of discovering frequent itemsets consists in finding all itemsets with support higher than a user-defined minimum support threshold denoted $\minsup$. An itemset with support higher than $\minsup$ is called a frequent itemset. An association rule is an expression of the form $X \rightarrow Y$, where $X \subset I, Y \subset I, X \cap Y = \emptyset$, and all items occurring in $X$ and $Y$ are base items. $X$ is called the body of the rule, whilst $Y$ is called the head of the rule. Two measures represent the statistical significance and strength of a rule. The support of a rule is the number of transactions that support $X \cup Y$. The confidence of a rule is the ratio of the number of transactions that support the rule to the number of transactions that support the body of the rule. The problem of discovering association rules consists in finding all rules with support and confidence higher than the user-specified thresholds of minimum support and minimum confidence, called $\minsup$ and $\minconf$ respectively. Generalized association rules extend this base framework by allowing non-base items to appear as elements of the rule body or head as well.

4 Hierarchical Bitmap Index

The hierarchical bitmap index (HBI) was first introduced in [6]. The hierarchical bitmap index is based on the signature index framework. It employs the idea of exact set element representation and uses a hierarchical structure to compact the resulting signature and reduce its sparseness. The index on a given attribute consists of a set of index keys, each representing a single set. An example of an index key is depicted in Figure 1. Every index key comprises a signature divided into $n$-bit chunks (called index key leaves) and a set of inner nodes of the index key, organized into a tree structure. The highest inner node is called the root of the index key. The signature must be long enough to represent all possible elements appearing in the indexed set (usually hundreds of thousands of bits). Every element \( i_j \) of the attribute domain \( A \), \( i_j \in \text{dom}(A) \), is mapped to an integer.

**Fig. 1.** Hierarchical bitmap index

Consider an indexed set \( S = \{i_1, i_2, \ldots, i_m\} \). The set is represented in the index in the following way. Let \( l \) denote the length of an index key node. An item mapped to the integer \( m \) is represented by a ‘1’ on the \( j \)-th position in the \( k \)-th index key leaf, where \( k = \lceil m/l \rceil \) and \( j = m - (k - 1) \times l \). Therefore, the set \( S \) is represented by ‘1’s set at the appropriate positions of the index key leaves. An index key node (either a leaf node or an inner node) that contains ‘1’ on at least one position is called a non-empty node, while an index key node that contains ‘0’ on all positions is called an empty node. The next level of the index key compresses the signature representing the set \( S \) by storing information only about the non-empty leaf nodes. A single bit in an inner node represents a single index key leaf; if this bit is set to ‘1’, then the corresponding index key leaf contains at least one position set to ‘1’. The \( i \)-th index key leaf is represented by the \( j \)-th position in the \( k \)-th inner index key node, where \( k = \lceil i/l \rceil \) and \( j = i - (k - 1) \times l \). Every upper level of the inner nodes represents the lower level in an analogous way. This procedure repeats recursively up to the index key root.
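For illustration, the leaf/position mapping above reduces to a simple computation (a sketch; the method name is ours, not the paper’s):

```java
// Map a 1-based item number m to its (leaf k, position j) pair,
// for index key nodes of length l
static int[] mapItem(int m, int l) {
    int k = (m + l - 1) / l;        // k = ceil(m / l)
    int j = m - (k - 1) * l;        // 1-based position within the k-th leaf
    return new int[] { k, j };
}
```

For example, with l = 32, the item mapped to integer 33 lands on position 1 of leaf 2.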
The index key stores only the non-empty nodes (marked in Figure 1 with solid lines). Empty nodes (marked in Figure 1 with dashed lines) are not stored anywhere in the index key. In other words, the index key leaves form an exact signature of the indexed set, and subsequent levels represent coarser signatures of the indexed set. Two parameters affect the shape and capacity of the index: \( l \), the length of a single index key node, and \( d \), the depth of the index key tree structure. HBI allows indexing attributes with a domain of up to \( l^d \) distinct items. An important factor that strongly affects the performance of the index is the mapping function. This function determines the mapping of items of the indexed domain to positions in the bitmap \( B \) of the HBI key. The feature that makes the hierarchical bitmap index suitable for generalized association rule mining is that HBI makes no assumptions about the mapping function chosen to map domain items to signature bit positions. One possible mapping function is hierarchical mapping, performed by the function $f(i_j) = H(i_j)$. This mapping considers the taxonomy $\tau$ defined over the items: for every item $i_j$, the function $H(i_j)$ returns the hierarchy category of the item. Item hierarchies are application-dependent and must be provided by domain experts. Using hierarchical mapping, the index not only represents the physical data contained in the indexed sets but captures the logical properties of the indexed data as well. Hierarchical mapping makes it possible to efficiently answer queries pertaining to a higher logical level of the data without the need to physically store information about the taxonomy. To put it in other words, the taxonomy over the items is physically encoded in the structure of the hierarchical bitmap index.

5 PrutaxHBI Algorithm

In this section we present the modifications introduced to the original Prutax algorithm. One thing to note is that Prutax operates on the vertical database layout. A crucial operation during Prutax execution is the join of transaction identifier ($tid$) lists pertaining to different items. The resulting list contains the $tids$ of transactions containing both joined items; therefore, the length of the joined list is simply the support of the itemset consisting of the joined items. As Prutax operates in the depth-first direction, the join of long $tid$ lists is performed many times, so every optimization of this expensive process could result in huge savings in the algorithm’s running time. The hierarchical bitmap index is very well suited to representing large sets of items. We therefore decided to transform $tid$ lists into hierarchical bitmap index keys, one key per item. Then, instead of joining the original $tid$ lists, we significantly speed up this operation by performing it directly on the hierarchical bitmap index keys. Building the hierarchical bitmap index keys is performed iteratively. The main parameter governing the building phase is the size of the memory buffer allocated to the process. Based on the available buffer space, a set of items is chosen for which hierarchical bitmap index keys will be computed during a single iteration. Each iteration performs a single database scan in search of transactions that support any item being processed in the current iteration. After the database scan is completed, all hierarchical bitmap index keys are created and written to file.
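The building phase also extends each transaction on the fly with the ancestors of its items, as described in the next paragraph. A minimal sketch of such an extension, assuming for simplicity that the taxonomy is a tree given as a child-to-parent map (the names are ours, not the paper’s):

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class TaxonomyExtension {
    // Extend transaction t with all ancestors of its items; 'parent' maps
    // an item to its parent in the taxonomy (absent for root items)
    static Set<Integer> extendWithAncestors(Set<Integer> t,
                                            Map<Integer, Integer> parent) {
        Set<Integer> extended = new HashSet<>(t);
        for (Integer item : t) {
            for (Integer p = parent.get(item); p != null; p = parent.get(p)) {
                extended.add(p);    // walk up to the root, adding each ancestor
            }
        }
        return extended;
    }
}
```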
While building hierarchical bitmap index keys for items assigned to the current iteration, we create, in parallel, hierarchical bitmap index keys for their ancestors. This requires on-the-fly extension of each transaction with the ancestors of all items contained in the transaction. Finally, hierarchical bitmap index keys are computed only for single items. During the execution of the algorithm, several hierarchical bitmap index keys are created dynamically to represent sets of items. The number of hierarchical bitmap index keys created is quite large, and some keys might be re-used for computing the support of their supersets. Unfortunately, writing these intermediate results back to the index file is extremely expensive and significantly slows down the algorithm. On the other hand, simply discarding these results wastes the computational effort undertaken to create these index keys.

Table 1. Computing the cardinality of the intersection of multiple HBIs

```java
int intersect(HBI[] hbis) {
    // nodeSize and depth are parameters of the index (node length, tree depth)
    // Level 0: AND the root nodes of all index keys
    BitSet common = new BitSet();
    common.set(0, nodeSize, true);               // set '1' for all positions
    for (HBI h : hbis) {
        common.and(h.getLevel(0));
    }
    if (common.cardinality() == 0) return 0;

    // omit[i] marks the positions where hbis[i] has a node that fell
    // out of the intersection
    BitSet[] omit = new BitSet[hbis.length];
    for (int i = 0; i < hbis.length; i++) {
        omit[i] = new BitSet();
        // for every k: omit[i].set(k, true) if common.get(k) == false
    }

    int currentLevel = 1;
    while (true) {
        common.set(0, nodeSize, true);           // reset all bits in common to '1'
        for (int i = 0; i < hbis.length; i++) {
            BitSet currentCommonLevel = new BitSet();
            // copy into currentCommonLevel all nodes from
            // hbis[i].getLevel(currentLevel) whose bit in omit[i] is '0'
            common.and(currentCommonLevel);
        }
        if (common.cardinality() == 0) return 0;

        for (int i = 0; i < hbis.length; i++) {
            BitSet newOmit = new BitSet();
            for (int j = 0; j < hbis[i].getLevel(currentLevel).cardinality(); j++) {
                // newOmit.set(k, true) if the k-th bit belongs to a node whose
                // parent was set to '1' in omit[i], or if the respective bit
                // in common is set to '0'
            }
            omit[i] = newOmit;
        }
        currentLevel++;
        if (currentLevel == depth) return common.cardinality();
    }
}
```

In Table 1 we present the pseudo-code of the core function that computes the cardinality of the set of transactions supporting a given candidate itemset, based on the hierarchical bitmap index keys representing the items contained in the candidate itemset. It is worth noticing that the function does not have to actually determine the set of supporting transactions; it is sufficient to compute its cardinality. For a given candidate (k+1)-itemset C, we assume it has been generated from two frequent k-itemsets H\textsubscript{i} and H\textsubscript{j} that share a common (k−1)-prefix. The `intersect(HBI[])` function, which computes the support of the candidate itemset $C$, takes as input an array of $k + 1$ hierarchical bitmap index keys, one for each item contained in the candidate itemset $C$. The function iteratively computes the binary intersection of the hierarchical bitmap index keys at each level of the hierarchical bitmap index. The keys are bitwise intersected until the leaf level is reached or the intersection becomes void. The final result of the function $\text{intersect}(HBI[\cdot])$ is the number of bits set to ‘1’ in the intersection of all compared hierarchical bitmap index keys. During implementation, the library class $\text{java.util.BitSet}$ was used to represent both leaves and internal nodes of hierarchical bitmap index keys. This base class was sub-classed to allow quick serialization and writing of bit vectors.
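As a small, self-contained illustration of the bitwise operation that dominates this support computation, java.util.BitSet performs the per-level intersection directly (an example of ours, not from the paper):

```java
import java.util.BitSet;

public class BitSetAndDemo {
    public static void main(String[] args) {
        // AND two node bitmaps and count the surviving '1's
        BitSet a = new BitSet();  a.set(3);  a.set(17);
        BitSet b = new BitSet();  b.set(3);  b.set(9);
        a.and(b);                              // a now holds only bit 3
        System.out.println(a.cardinality());   // prints 1
    }
}
```

At the leaf level, this surviving-bit count is exactly the support of the candidate itemset.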
6 Experiments

Fig. 2. Running times of algorithms when varying the $\text{minsup}$ threshold

In this section we report on the results of the experimental evaluation of the PrutaxHBI algorithm. We compare our algorithm with Apriori Cumulate [8] and Prutax [5]. All experiments were conducted on a computer with two dual-core AMD Opteron 1 GHz processors and 8 GB of RAM, running the Linux 2.6.9 operating system. Data sets were created using the DBGen generator from the Quest Project [1]. Input data were stored in a flat transactional format using a simple schema of $\text{<transaction_id, item_id>}$. For the Prutax algorithm, the input data were transformed into a vertical database layout of $\text{<item_id, list of transactions>}$, and the transformation time was added to the algorithm’s running times. Likewise, the time needed to construct the hierarchical bitmap index for the PrutaxHBI algorithm was included in the results. The taxonomy of items was synthetically generated after the base items had been created by DBGen.

Figure 2 presents the running times of the three algorithms when varying the minsup threshold. In this experiment the minsup threshold changes from 1% to 4%. The presented results are averaged over five different datasets of 10,000 transactions each, with the average transaction length set to 8 and the average frequent itemset size set to 3. The number of items in the database was set to 100,000. As can be seen from the figure, the PrutaxHBI algorithm outperforms both Apriori Cumulate and Prutax, with the gain greater for low minimum support thresholds. We attribute this behavior to the fact that lower minimum support thresholds induce more frequent itemsets, which in turn increases the profit from hierarchical bitmap indexing of base transactions.

Figure 3 presents the running times of the three algorithms when varying the size of the database. What is apparent from the figure is that using the taxonomy to prune candidate itemsets (as the Prutax and PrutaxHBI algorithms do) significantly improves performance. Furthermore, both algorithms scale better than the original Apriori algorithm. When the size of the database increases from 9,000 to 11,000 transactions, the running time of *Apriori* grows by 84%, while the running time of *Prutax* grows only by 46%. We are glad to note that the *PrutaxHBI* algorithm outperforms the original *Prutax* for all database sizes.

Fig. 4. Running times of algorithms when varying average transaction size

In the next experiment we investigated the running times of the algorithms under varying average transaction size. The results depicted in Figure 4 are averaged over six datasets of 1000 transactions each. The number of distinct items was set to 10,000 and the *minsup* threshold was set to 2%. The average transaction size varied from five to ten items. The results clearly show that all algorithms lose performance as the average transaction size increases. Again, our *PrutaxHBI* algorithm outperforms the other two algorithms, with *Apriori Cumulate* being the most sensitive to the average number of items in a transaction. The explanation of this result is straightforward.
Larger transactions imply a larger database file to be processed during each iteration of the *Apriori Cumulate* algorithm. Both *Prutax* and *PrutaxHBI* utilize a compressed vertical database layout; therefore, an increase in the average transaction size has a lesser impact on them. Furthermore, for *Apriori Cumulate*, larger transactions require extending transactions with more ancestor items, which has a direct influence on the performance of the algorithm.

The last experiment concerns the number of patterns in the mined database. The results depicted in Figure 5 are averaged over a set of database files with 1000 transactions each. The number of distinct items was set to 10,000 and the minsup threshold was set to 2%. The average frequent itemset size varied from one to seven. The variance in the running times of Apriori Cumulate is probably random, because the average frequent itemset size should not affect the algorithm at all (recall that changing the average frequent itemset size does not prohibit the existence of shorter or longer itemsets in the database). Both Prutax and PrutaxHBI exhibit slightly worse performance for larger frequent itemsets, due to the increasing depth to which the candidate itemset graph must be traversed. Nevertheless, our algorithm still significantly outperforms its competitors.

7 Conclusions

In this paper we have presented a new approach to generalized association rule mining that combines the Prutax algorithm with the hierarchical bitmap index structure. Our PrutaxHBI algorithm outperforms state-of-the-art algorithms for mining generalized association rules. Experiments conducted on synthetic datasets prove the efficiency of the proposed solution. Certainly, the work presented in this paper may be extended in several directions. There are numerous tweaks of the Prutax algorithm that might increase performance; most notably, caching of intermediate results might prove useful. Our future work agenda includes, among others, improving the integration of Prutax and the hierarchical bitmap index, verifying a top-down breadth-first approach to generating candidate itemsets using the hierarchical bitmap index, and applying the hierarchical bitmap indexing technique to other data mining problems.

References
{"Source-Url": "http://www.cs.put.poznan.pl/mmorzy/papers/admkd07.pdf", "len_cl100k_base": 4982, "olmocr-version": "0.1.51", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 28533, "total-output-tokens": 6359, "length": "2e12", "weborganizer": {"__label__adult": 0.0003883838653564453, "__label__art_design": 0.0003237724304199219, "__label__crime_law": 0.0006780624389648438, "__label__education_jobs": 0.00089263916015625, "__label__entertainment": 0.0001067519187927246, "__label__fashion_beauty": 0.00019943714141845703, "__label__finance_business": 0.0007262229919433594, "__label__food_dining": 0.0004546642303466797, "__label__games": 0.0016021728515625, "__label__hardware": 0.0013418197631835938, "__label__health": 0.0010156631469726562, "__label__history": 0.0003371238708496094, "__label__home_hobbies": 0.0001780986785888672, "__label__industrial": 0.0008411407470703125, "__label__literature": 0.0003323554992675781, "__label__politics": 0.0002925395965576172, "__label__religion": 0.000514984130859375, "__label__science_tech": 0.2440185546875, "__label__social_life": 0.00016999244689941406, "__label__software": 0.038238525390625, "__label__software_dev": 0.7060546875, "__label__sports_fitness": 0.00038361549377441406, "__label__transportation": 0.0004830360412597656, "__label__travel": 0.00023758411407470703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26134, 0.02089]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26134, 0.49342]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26134, 0.86734]], "google_gemma-3-12b-it_contains_pii": [[0, 2339, false], [2339, 5553, null], [5553, 8728, null], [8728, 11558, null], [11558, 14865, null], [14865, 16950, null], [16950, 18553, null], [18553, 20268, null], [20268, 21932, null], [21932, 23375, null], [23375, 26134, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2339, true], [2339, 5553, null], [5553, 8728, null], [8728, 11558, null], [11558, 14865, null], [14865, 16950, null], [16950, 18553, null], [18553, 20268, null], [20268, 21932, null], [21932, 23375, null], [23375, 26134, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26134, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26134, null]], "pdf_page_numbers": [[0, 2339, 1], [2339, 5553, 2], [5553, 8728, 3], [8728, 11558, 4], [11558, 14865, 5], [14865, 16950, 6], [16950, 18553, 7], [18553, 20268, 8], [20268, 21932, 9], [21932, 23375, 10], [23375, 26134, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26134, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
cccc55c5686fc9e9c5f7e8bdd03300a7e494960b
Homogenization: A Mechanism for Distributed Processing across a Local Area Network

Mahmud Shahriar Hossain, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh. E-mail: shahriar-cse@sust.edu
M. Muztaba Fuad, Department of Computer Science, Montana State University, Bozeman, MT 59717, USA. E-mail: fuad@cs.montana.edu
Debzani Deb, Department of Computer Science, Montana State University, Bozeman, MT 59717, USA. E-mail: debzani@cs.montana.edu
Kazi Muhammad Najmul Hasan Khan, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh. E-mail: najmul_bd@yahoo.com
Dr. Md. Mahbubul Alam Joarder, Institute of Information Technology (IIT), University of Dhaka, Dhaka-1000, Bangladesh. E-mail: joarder@udhaka.net

ABSTRACT

Distributed processing across a networked environment suffers from unpredictable speedup behavior due to the heterogeneous nature of the hardware and software on the remote machines. It is challenging to get better performance from a distributed system by distributing tasks in an intelligent manner, so that the heterogeneous nature of the system does not affect the speedup ratio. This paper introduces homogenization, a technique that distributes and balances the workload in such a manner that the user gets the highest speedup possible from a distributed environment. Along with providing better performance, homogenization is totally transparent to the user, and the user needs no interaction with the system to secure the benefit.

Keywords: Homogenization, Distributed processing, Java, Triangular Dynamic Architecture (TDA), RMI.

1. INTRODUCTION

Triangular Dynamic Architecture (TDA) [10] introduces a mechanism of distributed processing and parallel computation for balancing the workload among the idle machines of a network. The construction of TDA is accomplished by introducing an intelligent server that dynamically categorizes hosts and relates those hosts transparently in a local area network. In a distributed system there might be thin clients [7], which possess the least processing capability with a minimum resource allotment; there might also be high-performance hosts with idle CPU time. All the machines are properly balanced with appropriate workloads when TDA is applied. When the server receives a job from a client, it divides the job into granules and distributes them to the service providers. After processing, the service providers return the outcomes directly to the requesting client. An intelligent server must divide the requested jobs efficiently so that the distribution mechanism properly balances the load across the system. TDA provides a dynamic form of distributed and parallel processing that possesses platform independence and a load-balancing tool called homogenization. Homogenization is the process that enables TDA to balance the workload across a networked environment in a dynamic and intelligent way. Every distributed and parallel processing mechanism suffers badly when the networked environment is heterogeneous, and a LAN environment is essentially a heterogeneous infrastructure, because its hosts vary in hardware architecture, memory, resident operating systems, background daemons and many other parameters. Homogenization brings all of these heterogeneous parameters to the same virtual platform. Equal allotment of workload would suffer from speedup degradation with the appearance of a low-performance machine.
Homogenization assures speedup even when low-performance machines are involved. It should be implemented in a transparent way, with minimum interaction from the user.

2. RELATED WORKS

Although there are several distributed systems [1, 3, 11, 16], there is hardly any work on intelligent job distribution and load balancing. Scott [14] introduces the basics of client/server computing and component technologies and then proposes two frameworks for client/server computing using distributed objects. The component-based architecture defines the basic preliminary components of TDA. TDA is further developed to communicate among three kinds of hosts: server, client and service-provider. Moreover, TDA establishes dynamic relations at runtime and implements homogenization. Randall et al. [11] discuss the scalability of a client-server relationship, developing the distribution architecture step by step as the number of clients increases. The paper describes several existing distributed object-oriented systems but does not provide performance benchmarks to support its claims. Launay et al. [13] introduce a framework that supports parallelism without any extension to the Java [15] language. The project aims at the automatic generation of distributed code from multithreaded Java programs. Although parallelism is its basic concern, it does not emphasize performance in load balancing; rather, it stresses performance in code generation. In contrast, homogenization enhances parallelism by providing a balanced distribution of load among the machines across TDA. JavaParty [9] transparently adds remote objects to Java by declaration in the source code. It involves a pre-compiler and creates multiple Java byte-code files for every single distributable class. JavaParty is specifically targeted at, and implemented on, clusters of workstations. It combines Java-like programming and the concepts of distributed shared memory in heterogeneous networks. In contrast, homogenization provides a balancing architecture in TDA through an intelligent server, without any requirement for pre-compilers. Although JavaParty deals with heterogeneous infrastructure, TDA is enriched with dynamic homogenization that does not require any static entry about heterogeneous machines. Another work experimentally compares a load-balancing mechanism with existing load-balancing strategies that are believed to be efficient for multi-cluster systems. Nieuwpoort et al. [8] conducted this comparison and established a divide-and-conquer model for writing distributed supercomputing applications on hierarchical wide-area systems. In this research work, an algorithm named “cluster-aware random stealing” is used, which is analogous to homogenization in TDA. But the divide-and-conquer strategy may result in high round-trip times, which is why TDA dynamically uses a straightforward homogenization process. Homogenization provides not only awareness of machine configurations but also enriches the TDA server with load information about the hosts. Fuad et al. [5, 6] introduce a system called AdJava that harnesses the computing power of underutilized hosts across a LAN or WAN. It also provides load balancing and migration of distributed objects through the use of intelligent software agents. Although the migration mechanism used in AdJava is highly automated, it suffers from the penalty of object migration time.
TDA provides a mechanism to pass objects to the server and thereafter to the service-providers, and administrative preferences allow either real distribution of load, in which the load is analyzed in full, or virtual distribution of load, in which distribution information is collected from the server. AdJava uses a simple distribution policy to distribute objects to available machines: if the number of objects to be distributed exceeds the number of machines in the system, AdJava assigns more than one object to the machines that are lightly loaded compared to the others. TDA, on the contrary, distributes a computation according to the homogenized information about the system. Objects are granulized according to that dynamic information, so there is no need to send further granules of the same request to already loaded service-providers. AdJava demonstrates its performance only on scientific applications, while TDA is capable of distributing business applications as well. 3. SYSTEM ARCHITECTURE TDA is a sophisticated form of the client-server relationship, established over a three-tier architecture. The classical client-server relationship alone is no longer adequate [4]; applications now follow the three-tier architecture. In TDA, the classical client-server relationship is established dynamically and the three-tier architecture is merged into it. TDA offers triangular relationships, which are dynamically established by the server. Each relationship is constructed between the client, the server and a service-provider. TDA uses Remote Method Invocation (RMI) [16] to implement the triangular relationships. 3.1 TRIANGULAR DYNAMIC ARCHITECTURE TDA is so called because multiple triangular relationships are established on demand, dynamically, at run time. For all of the triangles, the server serves as the common point. The server may also decide to create several triangular relationships for a single request. The relationships can also switch dynamically from one to another: if a service-provider becomes busy after receiving a sub-request from the server, it can send the server a connection refusal request together with the current status of the sub-job it was performing. If the server grants the refusal request, the service-provider is freed and the server hands over the remaining part of the sub-job to the least busy service-provider. It is evident from Figure 1 that the server is the common point of all the triangles, meaning that the server is the one responsible for establishing such relations. This is the basic design of TDA. If Client1 sends a request to the server and the server decides that the request can be divided into three parts, it sends the granulized requests to three service-providers designated service-provider 1, service-provider 2 and service-provider 3. The three service-providers process the corresponding sub-jobs in parallel and send the outcomes directly to Client1. In this case three triangular relationships are established: (i) Client1, server, service-provider 1; (ii) Client1, server, service-provider 2; (iii) Client1, server, service-provider 3. For all these dynamically established relationships the server is the common element, which again shows that the server is responsible for the distribution decision.
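The triangular relationship maps naturally onto a small set of RMI remote interfaces. The paper does not publish its interfaces, so every name and signature in the sketch below is hypothetical; it only illustrates how the client's remote reference can travel through the server to a service-provider so that results flow directly back to the client, closing the third edge of the triangle.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

/** Callback exposed by a client so providers can return results directly. */
interface ClientCallback extends Remote {
    void receiveResult(int requestId, byte[] partialResult) throws RemoteException;
}

/** Exposed by each service-provider; the server hands it one granule. */
interface ServiceProvider extends Remote {
    // The client's remote reference travels with the granule, so the
    // outcome bypasses the server on the way back.
    void processGranule(int requestId, byte[] granule, ClientCallback client)
            throws RemoteException;
}

/** Exposed by the TDA server to clients and service-providers. */
interface TdaServer extends Remote {
    /** A client submits a job; the server granulizes and distributes it. */
    int submitRequest(byte[] job, ClientCallback client) throws RemoteException;

    /** A busy provider asks to hand back the remainder of a sub-job. */
    void refuseSubJob(String providerId, int requestId, byte[] subJobState)
            throws RemoteException;

    /** Periodic load report, used later for homogenization. */
    void reportLoad(String providerId, double currentLoad) throws RemoteException;
}
```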
For the time being, assume that service-provider 1 and service-provider 2 each have twice the performance of service-provider 3. If the TDA server decides on an equal distribution of load to these three service-providers, the distribution suffers from the classic problem of parallel processing: service-provider 3 takes twice the time taken by service-provider 1 or service-provider 2, so the overall computation time becomes a function of the time taken by the slowest machine among the hosts invoked for a particular request. There should therefore be a mechanism that contributes a balanced distribution rather than an equal allotment. The distribution should occur in such a fashion that all the invoked service-providers finish their computation at the same time regardless of their performance. Homogenization is the process that deals with this problem in TDA. 3.2 TDA SERVER The TDA server is responsible for the actual distribution of workload. The server maintains some information, and based on this stored information it can decide the number of granules to generate for a particular request. When a request arrives, the server always relies on the latest data available in its local database; it does not ask the service-providers for more information, since doing so would degrade its performance. 3.3 SERVICE-PROVIDER Service-providers perform the actual computation in TDA. Background processes are the heart of a service-provider, and all of its processes are hidden from the remote user's sight. A background process continuously measures the current load of the host, even while the service-provider is doing its share of the work; because it is implemented as a low-priority thread, this measurement does not overwhelm other processes. From time to time it reports the current load to the server. 3.4 CLIENT The overall TDA is designed to facilitate the client: to reduce computation time and to perform jobs that the client alone could not conduct efficiently. Indeed, a thin client might never be able to perform the job at all. A client program is composed of a user console and a request handler. The user console is the user's basic interface to TDA. When a user casts a request through the console, the request is sent to the request handler, which encrypts the request and sends it, along with the client object reference, to the TDA server. The result of processing is received in the user-interface portion. 4. HOMOGENIZATION Figure 2 illustrates the homogenization process for TDA. The Java Virtual Machine (JVM) [15] brings all the hosts in TDA to the same platform, called the homogenization plane. In the homogenization plane all the machines share the same virtual platform but differ in performance factors. The TDA server performs the next level of homogenization: it brings the service-providers to the homogenization line. This level of homogenization is achieved by varying the allotment of workload according to the performance factors of the service-providers. On the homogenization line, all the service-providers take the same amount of time to complete their corresponding sub-requests. Scope length is the length of the workload allotment to a service-provider, as decided by the server. Varying the scope length makes all the service-providers finish their computation at the same time.
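The paper does not give a formula for the scope length, but one consistent reading of the description is a proportional split: each provider receives a share of the total work proportional to its homogenized performance, so that share divided by performance, the expected completion time, is equal for all providers. The following minimal sketch, with hypothetical names, illustrates that reading.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ScopeLengthCalculator {

    /**
     * Splits totalWork (e.g., a number of matrix rows) among providers in
     * proportion to their homogenized performance, so every provider's
     * share/performance ratio -- its expected completion time -- is equal.
     */
    static Map<String, Long> scopeLengths(Map<String, Double> performance, long totalWork) {
        double totalPerf = performance.values().stream()
                .mapToDouble(Double::doubleValue).sum();
        Map<String, Long> shares = new LinkedHashMap<>();
        long assigned = 0;
        for (Map.Entry<String, Double> e : performance.entrySet()) {
            long share = (long) Math.floor(totalWork * e.getValue() / totalPerf);
            shares.put(e.getKey(), share);
            assigned += share;
        }
        // Hand any rounding leftover to the fastest provider.
        String fastest = performance.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
        shares.merge(fastest, totalWork - assigned, Long::sum);
        return shares;
    }

    public static void main(String[] args) {
        // Providers 1 and 2 are twice as fast as provider 3 (the paper's example).
        Map<String, Double> perf = new LinkedHashMap<>();
        perf.put("sp1", 2.0);
        perf.put("sp2", 2.0);
        perf.put("sp3", 1.0);
        // 800 rows split as {sp1=320, sp2=320, sp3=160} instead of ~267 each.
        System.out.println(scopeLengths(perf, 800));
    }
}
```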
5. HOMOGENIZING TDA The server maintains several tables in its local database that help it distribute the load. Using these tables, the server calculates the scope length to be offered to a particular service-provider. The most critical pieces of knowledge are the performance of the service-providers, their response times, the list of services each service-provider offers, and so on. A background process in the service-provider informs the server about its current load every 30 seconds. The server maintains this information and, based on it, generates a performance number called the homogenized performance. Homogenized performance is the outcome of the second level of homogenization in Figure 2. The server depends on the homogenized performance of the service-providers for the balanced distribution of load. Whenever a service-provider obtains an identity during bootstrap, it sends its performance parameters to the server. The server also measures the communication distance of the service-provider by sending it test packets. From time to time the server updates its tables, e.g., it sends test packets to measure the response times of the service-providers. Test packets are thrown directly back to the server by the service-provider. They also let the server know whether a particular service-provider is dead or active, which helps the server control the fault-tolerance mechanism. Test packets are small and hardly congest the traffic. If a service-provider is not busy but nevertheless has a large response time, the server does not invoke it for small jobs; instead the server tries to offer it massive, computation-intensive jobs so that the time consumed by communication overhead becomes less pronounced. A service-provider with a comparatively lower homogenized performance always gets smaller portions of a request than a faster one. If a service-provider dies leaving a sub-request incomplete, the server re-issues that sub-request to another service-provider. This prevents the loss of sub-requests, and hence the possible loss of the client request. Some service-providers are marked by the administrator as lazy, i.e., almost always idle; the server gives them first priority when assigning sub-requests. The administrator can also set a threshold value for homogenized performance: the TDA server ignores service-providers whose homogenized performance is below the threshold. Homogenization thus improves TDA not only as a distributing architecture but also as a sophisticated load-balancing design. 6. PERFORMANCE ANALYSIS To verify the potential of homogenization, a scientific application was implemented in TDA. Performance is measured in two types of environment: a heterogeneous environment and a homogenized environment. A homogenized environment is one in which TDA has applied homogenization; in reality the homogenized environment is still heterogeneous, but TDA has homogenized the overall system. Matrix multiplication is a common scientific computation that arises in many scientific problems, and the experiment uses the simplest algorithm, which multiplies two matrices with three nested loops. All the statistics were taken on the same network, with the same service-providers, the same thin client and the same TDA server. For experimental purposes the test matrices were all square, and each request asked the server to distribute the multiplication of two square matrices of the same size. Only the first matrix is granulized into pieces and sent to different service-providers. Each service-provider gets a copy of the second matrix from the thin client. Each service-provider then calculates a portion of the result and sends it directly to the thin client that requested the job. The thin client combines the result when all the portions have been received.
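As a concrete illustration of this granulation scheme, the sketch below computes the block of result rows that one service-provider would return for its granule: a contiguous range of rows of the first matrix multiplied against the full second matrix. It is a local, sequential sketch of the per-granule work only; the names and the surrounding RMI plumbing are hypothetical.

```java
public class GranuleMultiply {

    /**
     * Computes rows [rowStart, rowEnd) of the product A x B -- the share of
     * the result one service-provider would produce for its granule.
     */
    static double[][] multiplyGranule(double[][] a, double[][] b, int rowStart, int rowEnd) {
        int n = b.length;         // inner dimension
        int cols = b[0].length;
        double[][] part = new double[rowEnd - rowStart][cols];
        for (int i = rowStart; i < rowEnd; i++) {
            for (int j = 0; j < cols; j++) {
                double sum = 0;
                for (int k = 0; k < n; k++) {
                    sum += a[i][k] * b[k][j];   // simplest three-loop algorithm
                }
                part[i - rowStart][j] = sum;
            }
        }
        return part;
    }

    public static void main(String[] args) {
        int size = 4;
        double[][] a = new double[size][size], b = new double[size][size];
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++) { a[i][j] = i + j; b[i][j] = (i == j) ? 1 : 0; }
        // With B = identity, the granule of rows [1, 3) of A comes back unchanged.
        double[][] part = multiplyGranule(a, b, 1, 3);
        System.out.println(java.util.Arrays.deepToString(part));
    }
}
```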
Figure 3: Performance Analysis: (a) Heterogeneous behavior of TDA, (b) Homogenized behavior of TDA, (c) Corresponding speedup of (a) and (b). 6.1 HETEROGENEOUS BEHAVIOR OF TDA Figure 3 shows both the heterogeneous and the homogenized behavior of TDA for a square matrix of size 800. The black portion of a bar indicates the actual computation time and the grey portion represents the overhead due to communication distance. From Figure 3(a) it is evident that the introduction of successive service-providers reduces the actual computation time. Closer inspection shows that the introduction of the sixth and ninth service-providers does not reduce the actual computation time; rather, computation time increases. This degradation of performance occurs because the sixth and ninth service-providers were of comparatively low CPU speed. Equal allotment of load results in a heterogeneous pattern of speedup, shown in Figure 3(c) with a grey line: speedup decreases when the sixth and ninth service-providers are involved. The experiments were run on various combinations of Intel machines, varying in CPU speed, memory size, operating system, user processes, background daemons and many other parameters. Pentium II, III and IV machines with physical memory ranging from 64 to 128 MB were used, all connected by a 100 Mbps Ethernet network. All the TDA components ran on the Virtual Machine provided by Sun's JDK version 1.2.2 or higher. Overhead affects speedup because the overall computation time is composed of the actual computation time plus overhead, where overhead is an additive function of the communication time and the server's decision-making time. 6.2 HOMOGENIZED BEHAVIOR OF TDA The same analysis was repeated with the single exception that the allotment of load is no longer equal: TDA homogenized the environment. The physical environment is the same heterogeneous one as before, but now homogenization is applied. Figure 3(b) shows that applying homogenization assures a decrease in actual computation time even though the infrastructure is heterogeneous. The corresponding speedup is shown in Figure 3(c) with a black line. The introduction of new service-providers improves speedup regardless of their configuration, but the acceleration of speedup decreases when a large number of service-providers is involved in a distribution. This makes clear that the almost constant overhead becomes pronounced once the actual computation time is reduced, so involving too many service-providers yields only slow speedup improvement. In this experiment, homogenization provides a maximum speedup of 3.6 with nine service-providers, whereas non-homogenized distribution provides a maximum speedup of 2.8 with five service-providers. 6.3 LOAD VS. SPEEDUP Speedup also depends on the size of the load, so matrices of different sizes were used to study this behavior. Figure 4(a) shows the speedup lines for matrix multiplications of different sizes; the figure depicts the heterogeneous performance improvement. The matrix sizes are 200, 400, 600, 800 and 1000.
For some sizes the speedup is less than unity, which shows that TDA could not improve the performance because the load was too small: in this case, the overheads dominate the actual computation time. Such degradation is found at size 200. For all other sizes the speedup is greater than unity, indicating that TDA performs better at higher degrees of load. The corresponding homogenized performance for the same heterogeneous infrastructure is given in Figure 4(b), which shows a steady improvement of performance at higher loads. A comparison between Figure 4(a) and Figure 4(b) shows that the maximum speedup reached in the non-homogenized situation is around 3.5, whereas the maximum speedup reached with homogenization is around 5.5, which demonstrates the benefit of homogenization through TDA. 7. CONCLUSION The homogenization technique, built on TDA, is an effective mechanism for job distribution across a local area network. TDA granulizes computation-intensive jobs into concurrent pieces using homogenization and runs them in a dynamic environment to reduce total processing time. Experimental analysis shows that in a heterogeneous environment, homogenization provided a 55% increase in speedup relative to the maximum non-homogenized performance. Homogenization does not require any user interaction for its knowledge-centric distribution mechanism; it operates automatically within TDA as a transparent load-balancing tool and provides better processing time in a distributed computing environment. Implementing homogenization leaves the present JVM unchanged: the current implementation is fully based on the existing JVM, and in that way TDA fulfills its main goal of providing a distributed computing environment on an existing LAN. 8. REFERENCES
Using Statistical Models to Predict Software Regressions Alexander Tarvo Microsoft Corporation, Redmond, WA 98052 alexta@microsoft.com Abstract Incorrect changes made to the stable parts of a software system can cause failures – software regressions. Early detection of faulty code changes benefits the quality of a software system, because these errors can be fixed before the system is released. In this paper, a statistical model for predicting software regressions is proposed. The model predicts the risk of regression for a code change by using software metrics: type and size of the change, number of affected components, dependency metrics, developer's experience and code metrics of the affected components. Prediction results can be used to prioritize the testing of changes: the higher the risk of regression for a change, the more thorough testing it should receive. 1. Introduction Despite all efforts of the engineering community, bugs in software systems are still inevitable today. Probably one of the most unpleasant classes of bugs is software regressions – undesired changes in the behavior of already stable parts and features of a software system. Software regressions can lead to significant problems for the software manufacturer if they are not detected and fixed early. If a regression is not detected during testing, customers will be affected by the undesirable side effects of the change. Such a situation results not only in a financial loss for the manufacturer (in any case, the issue needs to be fixed and an update issued), but also in damage to its reputation. The key method of avoiding the negative consequences of software regressions is testing all code changes. However, cost and time restrictions often prevent engineers from exhaustively testing every change, so some method of test prioritization is necessary. One of the most straightforward and widely used such methods is assessment of the risk associated with each code change by an expert or a group of experts. The higher the risk of regression associated with the change (which can also be called the regression proneness of the change), the more thorough testing it should receive. However, manual risk estimation is costly and subjective: it relies on the skills and experience of the experts. To address this problem, we developed a statistical model to predict the risk of regressions. The model utilizes knowledge of the software system, represented in the form of software metrics, and provides an objective quantification of regression risk for each code change. The predicted risk is used by test engineers to plan testing activities for changes: high-risk changes should pass through extensive testing to discover possible regressions, while changes with a low probability of regression can pass just sanity testing. This paper presents an industrial case study in which we built a system to predict software regressions using historical data on changes in the Windows XP operating system. Analysis of the system's accuracy shows that it can successfully be used to predict software regressions. 2. Data collection In this work we concentrate on post-release changes, or fixes, which are made after a software system is released to the market. These changes include bug fixes, new features, and reliability or performance improvements. Information on fixes is stored in the bug-tracking database in the form of bug records.
If a bug record describes a fix for a software regression, a link to the bug which resulted in the regression is provided in the record. This allows fixes which caused regressions – regressed fixes – to be identified. Only regressions caused by changes in the source code of the software system are considered. Since a fix typically results in a code change, the corresponding bug record can be related to a set of changes in the program source code. These changes are grouped into one or more check-ins – atomic changes of source code recorded in a version control system. Each check-in contains a list of changed source files, the differences between the old and new versions of these files, the date of the change, the name of the developer and a brief description of the change. By identifying the check-ins related to a fix, it is possible to detect the components of the software system affected by it. Three major types of metrics are used to describe code changes in this study: (1) change metrics, (2) code metrics and (3) dependency metrics. 2.1. Change metrics Change metrics describe properties of a software change itself: the size of the change, the number of changed components, the experience of the developers and other properties. Some major classes of change metrics are described below. The number of changed components is one of the most important properties of a fix: complex fixes that cause changes in a large number of components are expected to be more regression prone than small fixes. By analyzing the source and binary code of the software system, the following metrics were defined: - **AddedFunctionsCount:** number of functions added because of the fix; - **DeletedFunctionsCount:** number of functions deleted because of the fix; - **ChangedFunctionsCount:** number of functions changed because of the fix; - **AddedSourceFilesCount:** number of source files added because of the fix; - **DeletedSourceFilesCount:** number of source files deleted because of the fix; - **ChangedSourceFilesCount:** number of source files changed because of the fix; - **BinariesAffected:** number of binary modules affected by the fix. Another group of metrics relates to the experience of the developer who performed the change – we call these experience metrics. A change made by an experienced person may be of higher quality than one made by an inexperienced programmer [9]. One possible way to estimate a developer's experience is to look at all fixes done by this person in the past: the more fixes the developer has worked on, the higher his or her experience. A developer is considered experienced if he or she made more check-ins than 75% of the other developers during the same period; correspondingly, an inexperienced developer made fewer check-ins than 25% of the other developers. To measure the overall programming experience of the developer, we defined global experience metrics based on the number of changes he or she made to the whole software system during the past 12 months: - **CheckinsLastYear:** overall number of check-ins by this developer; - **ExperiencedDeveloper:** 1 if the developer made 15 or more check-ins, 0 otherwise; - **InexperiencedDeveloper:** 1 if the developer made 2 or fewer check-ins, 0 otherwise (see the sketch after this list).
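A minimal sketch of how these global experience flags could be derived from raw check-in counts; the class and field names are hypothetical, and only the thresholds stated above (15 or more, 2 or fewer) are taken from the text.

```java
public class ExperienceMetrics {
    final int checkinsLastYear;        // CheckinsLastYear
    final int experiencedDeveloper;    // ExperiencedDeveloper flag
    final int inexperiencedDeveloper;  // InexperiencedDeveloper flag

    ExperienceMetrics(int checkinsLastYear) {
        this.checkinsLastYear = checkinsLastYear;
        // Thresholds from the paper: 15+ check-ins -> experienced,
        // 2 or fewer -> inexperienced; anything in between sets neither flag.
        this.experiencedDeveloper = (checkinsLastYear >= 15) ? 1 : 0;
        this.inexperiencedDeveloper = (checkinsLastYear <= 2) ? 1 : 0;
    }

    public static void main(String[] args) {
        for (int checkins : new int[] {0, 2, 7, 15, 40}) {
            ExperienceMetrics m = new ExperienceMetrics(checkins);
            System.out.printf("checkins=%d experienced=%d inexperienced=%d%n",
                    checkins, m.experiencedDeveloper, m.inexperiencedDeveloper);
        }
    }
}
```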
However, not only the overall experience of the developer matters; his or her knowledge of the particular area of source code should be considered as well. To measure the developer's knowledge of the affected area, the number of changes he or she made in this part of the system during the previous 12 months was counted: - **CheckinsLastYearInComponent:** number of check-ins the developer made in the source code of the component affected by the fix; - **ExperiencedDeveloperInComponent:** 1 if the developer made 7 or more check-ins in the source code of the affected component, 0 otherwise; - **InexperiencedDeveloperInComponent:** 1 if the developer made 1 or fewer check-ins in the source code of the affected component, 0 otherwise. The last group of fix metrics is fix characteristics, which describe the nature of the fix and the overall fix process: - **FixForRegression:** 1 if this is a fix for a known regression, 0 otherwise; - **isNewFeature:** 1 if this fix is a new feature, 0 if it is a bug fix; - **CheckinCount:** number of check-ins required to make the change; - **DeveloperCount:** number of developers working on the change; - **BugLinesDelta:** summary change in size (LOC) of the affected functions. 2.2. Code metrics It has been shown that the fault proneness of a component can be successfully predicted from its code metrics: size, complexity and historical code churn are positively correlated with the number of failures spotted in the component [3]. Since regressions are actually consequences of failures, we assume that complex components with high historical code churn might have a higher number of regressions as well. Also, making a fix in a large and complex component is a complicated task for a developer and can increase the chances of making a mistake, so code metrics were included in the set of predictor variables. Three major classes of code metrics were used in this study: - **Complexity metrics** describe the internal complexity of the component. Examples are component size or the number of global parameters in it; - **Object-oriented metrics** describe the complexity of components developed using object-oriented methodology. Examples are the number of classes in the module, the size of the class hierarchy, and the number of methods in a class; - **Code churn metrics** describe the history of changes in the component. Examples are the number of changed code lines or functions in the component, as well as the number and properties of bugs fixed in it. Complexity and object-oriented metrics were collected for each binary module and function using the MaX framework [10]. MaX is an automated tool that can collect code metrics at the binary-module and function level. Churn metrics were collected for each binary module, source file and function using a custom-developed tool called Binary Change Tracer (BCT). 2.3. Dependency metrics Components in a software system do not exist in isolation: they interact in a number of ways. For example, an application can load a dynamically linked library and call functions or access data structures located in it. Obviously, if the library is not present in the system, the application will not work properly, so the application can be said to depend on the library. Dependencies between all components of the software system form a dependency graph – a directed graph \(G=(C, D)\), where the components form the set of vertices \(C=\{c_1, \ldots, c_n\}\) and the dependencies form the set of edges \(D=\{d_1, \ldots, d_m\}\). The number of dependencies \(m\) can be as high as \(n \cdot n\). If component \(c_i\) depends on another component \(c_j\), there exists an edge \(d_{ij}=(c_i, c_j) \in D\); for component \(c_i\) this dependency is an outgoing dependency, and for component \(c_j\) it is an incoming dependency.
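A minimal sketch of this dependency-graph representation, with hypothetical names: components are vertices, a directed edge (c_i, c_j) records that c_i depends on c_j, and in/out degrees give the incoming and outgoing dependency counts used below.

```java
import java.util.*;

public class DependencyGraph {
    // Adjacency maps: "out" holds each component's outgoing edges
    // (components it depends on); "in" holds the reverse direction.
    private final Map<String, Set<String>> out = new HashMap<>();
    private final Map<String, Set<String>> in = new HashMap<>();

    /** Records that component ci depends on component cj: edge (ci, cj). */
    void addDependency(String ci, String cj) {
        out.computeIfAbsent(ci, k -> new HashSet<>()).add(cj);
        in.computeIfAbsent(cj, k -> new HashSet<>()).add(ci);
    }

    int outgoingDependenciesCount(String c) { return out.getOrDefault(c, Set.of()).size(); }
    int incomingDependenciesCount(String c) { return in.getOrDefault(c, Set.of()).size(); }

    public static void main(String[] args) {
        DependencyGraph g = new DependencyGraph();
        g.addDependency("app.exe", "util.dll");  // the application depends on the library
        g.addDependency("svc.exe", "util.dll");
        System.out.println(g.incomingDependenciesCount("util.dll")); // 2
        System.out.println(g.outgoingDependenciesCount("app.exe"));  // 1
    }
}
```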
It has been shown [1] that data and call dependencies can be useful predictors of component fault proneness. In this study, dependency metrics are used as predictors of the regression proneness of a fix. Dependency metrics for a code change were defined using information about the changed components and the structure of the call graph. Suppose that a fix affects a subset of components \(C_c \subseteq C\). Then the dependencies linking these components to any other components in the software system constitute the set of affected dependencies \(D_a \subseteq D\): an affected dependency \(d=(c_i, c_j)\) is part of the set \(D_a\) if \(c_i \in C_c\) or \(c_j \in C_c\). We distinguish two basic types of affected dependencies (see Figure 1): - **External dependency** \(d_{ext}=(c_i, c_k)\) is a dependency between a changed component \(c_i \in C_c\) and a non-changed component \(c_k \notin C_c\). - **Internal dependency** \(d_{int}=(c_i, c_j)\) is a dependency between two changed components \(c_i, c_j \in C_c\). In this study, dependency data was collected at the function and binary levels. If data is collected at the function level, a set of changed functions \(F_c=\{f_1, \ldots, f_k\}\) is defined; correspondingly, at the binary level, a set of changed binaries \(B_c=\{b_1, \ldots, b_l\}\) is defined. This allows us to define four dependency metrics for each fix: - **BinaryExternalDependenciesAffected:** total number of external dependencies of the binary modules \(B_c\) affected by the fix; - **BinaryInternalDependenciesAffected:** total number of internal dependencies between the binary modules \(B_c\) affected by the fix. Similarly, the **FunctionExternalDependenciesAffected** and **FunctionInternalDependenciesAffected** metrics were defined for the functions \(F_c\) affected by the fix. The number of dependencies of each affected function was also considered, resulting in the following metrics: - **IncomingDependenciesCount:** number of incoming dependencies of the function; - **OutgoingDependenciesCount:** number of outgoing dependencies of the function. Figure 1. Dependency metrics The MaX framework was used to extract dependency information for the software system. In addition to code metrics, MaX can also collect call and data dependency information for each function in the software system. 3. Measuring accuracy The model developed in this study can be viewed as a binary classifier: it classifies fixes as regression prone or not regression prone. There are four possible outcomes of classification: - **True Positive (TP):** the fix is regression prone and was classified as regression prone; - **False Positive (FP):** the fix is not regression prone, but was classified as regression prone; - **True Negative (TN):** the fix is not regression prone and was classified as not regression prone; - **False Negative (FN):** the fix is regression prone, but was classified as not regression prone. Based on these outcomes, a number of metrics to measure classification performance have been developed [11, 12]: \[ \text{Precision} = \frac{TP}{TP + FP} \] \[ \text{Recall} = \frac{TP}{TP + FN} \] \[ \text{False Positive Rate} = \frac{FP}{FP + TN} \]
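To make these definitions concrete, the following minimal sketch (hypothetical names, invented counts) computes the three metrics from raw outcome counts and checks them on a small confusion matrix.

```java
public class ClassifierMetrics {
    // Precision = TP / (TP + FP), Recall = TP / (TP + FN),
    // False Positive Rate = FP / (FP + TN)
    static double precision(int tp, int fp)          { return (double) tp / (tp + fp); }
    static double recall(int tp, int fn)             { return (double) tp / (tp + fn); }
    static double falsePositiveRate(int fp, int tn)  { return (double) fp / (fp + tn); }

    public static void main(String[] args) {
        // Hypothetical confusion matrix: 30 TP, 10 FP, 50 TN, 20 FN.
        int tp = 30, fp = 10, tn = 50, fn = 20;
        System.out.println(precision(tp, fp));          // 0.75
        System.out.println(recall(tp, fn));             // 0.6
        System.out.println(falsePositiveRate(fp, tn));  // ~0.1667
    }
}
```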
However, many statistical methods, such as logistic regression, do not directly specify whether a fix is regression prone or not; instead, they produce a number representing the probability of regression for the fix. To convert this number into an actual class label it is necessary to define a threshold: a fix is considered regression prone if the output of the classifier is above the threshold and not regression prone if the output is below it. To measure how classifier performance changes with the threshold, we used a technique called Receiver Operating Characteristic (ROC) graphs. A ROC graph is a two-dimensional graph in which the true positive rate (recall) is plotted on the Y axis and the false positive rate is plotted on the X axis [11]. The ROC curve of an ideal classifier rises from (0,0) to (0,1) and then continues to (1,1); the diagonal line from (0,0) to (1,1) corresponds to the worst possible classifier, one that is no better than random guessing. The area under the ROC curve (AUC) can serve as a single number measuring a classifier's performance; it varies from 0.5 for the worst classifier to 1.0 for the best possible one. 4. Model building The subject of this study is Microsoft Windows XP, an operating system from Microsoft Corporation. It is a large software system composed of several million lines of code constituting thousands of binary modules. At the time of writing, Windows XP had undergone two major maintenance releases (service packs) and numerous smaller fixes. To train the model, we scanned the whole code of the system and selected data on a large number of bug records (fixes) made after Windows XP SP2 was released (August 2004). Only unique bug records (ones that point to different sets of check-ins) were included in the dataset. To make sure that all regressions in the selected fixes had been revealed, no data was collected during the last 18 months of the study. Nevertheless, the dataset was sparse: only a small fraction of these fixes caused software regressions. A number of metrics were collected for each fix, including change metrics, dependency metrics and code metrics. Code metrics were collected for each Windows component affected by the fix; these include complexity metrics, object-oriented metrics and pre-release code churn. Pre-release code churn for each component was collected during Windows XP SP2 development (from September 2003 to August 2004). Three different levels of granularity were used to collect code metrics: code churn and bug data were collected at the level of binary modules, source files and functions, while complexity and object-oriented metrics were collected at the function and binary-module levels only. A classic table-based approach was used to build the model. For each fix, a vector $\bar{x}$ of independent variables (predictors) and a dependent variable $y$ (response) were defined. The independent variables are the fix metrics, which describe a change; the dependent variable is the occurrence of a regression for the fix. Stepwise logistic regression [5] was used to build the statistical model. Like many other statistical methods, such as decision trees or neural networks, logistic regression requires the dimension of the vector $\bar{x}$ to be constant across all data points. However, a single fix can lead to changes in multiple components: it is not uncommon for a large, complex fix to affect a number of source files or even binary modules. Thus, if code metrics for all affected components were included in the vector $\bar{x}$, its size would vary. To make the dimension of the $\bar{x}$ vector constant, we aggregated the values of code metrics over all components affected by a single fix [12].
Suppose the fix affects components $(c_1, c_2, \ldots, c_d)$, and code metrics $m_1, \ldots, m_k$ are defined for every component $c_i \in (c_1, c_2, \ldots, c_d)$. In this case, the aggregated value $\bar{m}_j$ of metric $m_j$ across the components $(c_1, c_2, \ldots, c_d)$ is \[ \bar{m}_j = f(m_j(c_1), m_j(c_2), \ldots, m_j(c_d)), \] where $f$ is an aggregating function. In this study, the $\max(m_j(c_1), m_j(c_2), \ldots, m_j(c_d))$ and $\text{median}(m_j(c_1), m_j(c_2), \ldots, m_j(c_d))$ functions were used to aggregate the values of metric $m_j$. To measure the predictive power of the model, a data-splitting technique [12] was used: 50 random splits were done, and during each split 70% of the fixes were selected to form a training set while the remaining 30% formed a test set. Averaged ROC graphs were created as discussed in [11]. To evaluate the relative importance of the different groups of metrics, four models were built separately for dependency metrics, experience metrics, fix metrics and code metrics. For each model, 50 random splits were done, and a ROC curve was built during each split. Based on these 50 values of AUC, the mean value of the area under the curve μ(AUC), as well as its standard deviation σ(AUC), were calculated. The results are reported in Table 1. Table 1. Model performance for the different groups of metrics <table> <thead> <tr> <th>Metric group</th> <th>μ(AUC)</th> <th>σ(AUC)</th> </tr> </thead> <tbody> <tr> <td>Fix metrics (no experience)</td> <td>0.73</td> <td>0.046</td> </tr> <tr> <td>Code metrics</td> <td>0.70</td> <td>0.040</td> </tr> <tr> <td>Dependency metrics</td> <td>0.69</td> <td>0.049</td> </tr> <tr> <td>Experience metrics</td> <td>0.54</td> <td>0.044</td> </tr> </tbody> </table> To see whether these models yield statistically different values of AUC, the author performed a series of unpaired t-tests comparing the resulting AUCs. According to these tests, fix metrics are better predictors of the regression proneness of a fix (p-value < 0.001) than code or dependency metrics, and no statistically significant difference (p=0.56) was found between the accuracy of the models based on dependency and code metrics. Surprisingly, the experience metrics did not prove to be informative (p > 0.001). One possible explanation is that the most complex and risky fixes are done by the most experienced programmers, while simple fixes are left to novice engineers; this phenomenon is left as an opportunity for future investigation. To build the final model, all metrics were used. Stepwise logistic regression reported the five most significant predictors (see Table 2): - **SourceFilesChanged:** number of source files changed due to the fix; - **NewFeature:** 1 if this fix is a new feature, 0 otherwise; - **PreReleaseFunctionsDeleted:** number of functions deleted from the binary module during the pre-release timeframe; - **MaxFunctionLocalCoupling:** maximum value of the FunctionLocalCoupling metric across all functions in the binary module, where FunctionLocalCoupling is the number of calls to other classes whose instances are created as local variables in the function; - **MaxSubClasses:** maximum number of subclasses across all high-level classes in the binary module. **FunctionsDeleted**, **MaxFunctionLocalCoupling** and **MaxSubClasses** are code metrics defined for binary modules, which is why the median values of these metrics over all binaries affected by the fix were taken as predictors. Table 2. Most significant predictors <table> <thead> <tr> <th>Metric name</th> <th>Type</th> <th>p-value</th> </tr> </thead> <tbody> <tr> <td>SourceFilesChanged</td> <td>Fix</td> <td>0.001</td> </tr> <tr> <td>NewFeature</td> <td>Fix</td> <td>0.03</td> </tr> <tr> <td>Median(FunctionsDeleted)</td> <td>Code</td> <td>0.02</td> </tr> <tr> <td>Median(MaxFunctionLocalCoupling)</td> <td>Code</td> <td>0.002</td> </tr> <tr> <td>Median(MaxSubClasses)</td> <td>Code</td> <td>&lt;0.001</td> </tr> </tbody> </table>
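The max/median aggregation described above can be sketched as follows; names are hypothetical, and the example values are invented. Each code metric is collapsed across the components affected by a fix, yielding one fixed-length feature vector per fix.

```java
import java.util.Arrays;

public class MetricAggregation {

    /** Max aggregation of one metric over the affected components. */
    static double aggregateMax(double[] values) {
        return Arrays.stream(values).max().getAsDouble();
    }

    /** Median aggregation of one metric over the affected components. */
    static double aggregateMedian(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return (n % 2 == 1) ? sorted[n / 2]
                            : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    public static void main(String[] args) {
        // One metric (e.g., FunctionsDeleted) measured on three affected binaries.
        double[] functionsDeleted = {4, 1, 7};
        System.out.println(aggregateMax(functionsDeleted));     // 7.0
        System.out.println(aggregateMedian(functionsDeleted));  // 4.0
    }
}
```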
The resulting ROC graph for the final model is shown in Figure 2. Its mean area under the ROC curve is μ(AUC) = 0.77 with standard deviation σ(AUC) = 0.040, which is significantly better than a random draw (AUC = 0.5). More importantly, the model can detect the most risky fixes with sufficient accuracy, which makes it possible to intensify testing activities for them and so discover regressions in these fixes early. Figure 2. ROC and precision-recall graphs for the model In addition to logistic regression, the author evaluated the accuracy of regression prediction models based on other machine learning algorithms available in MATLAB (see Table 3). Unpaired t-tests showed that the logistic regression model has somewhat higher accuracy than a multilayer perceptron with Principal Component Analysis (p=0.029), while the regression tree has the worst accuracy (p<0.001). Table 3. Comparative performance of different machine learning algorithms <table> <thead> <tr> <th>Machine learning algorithm</th> <th>μ(AUC)</th> <th>σ(AUC)</th> </tr> </thead> <tbody> <tr> <td>Logistic regression</td> <td>0.77</td> <td>0.040</td> </tr> <tr> <td>Multilayer perceptron + PCA</td> <td>0.75</td> <td>0.058</td> </tr> <tr> <td>CART tree</td> <td>0.67</td> <td>0.069</td> </tr> </tbody> </table> 5. Related work Using statistical models to predict software risk is a widely used technique. Almost all studies concentrate on predicting the fault proneness [1, 2, 3, 6, 7] of components in a software system. These works use various types of code metrics to predict the fault proneness of components. One kind of such metric is code churn, defined as the amount of change in the software system [3]. Another class is complexity metrics, such as code complexity, size or the number of functions in a component [7]. For object-oriented programs, special types of object-oriented metrics can be defined [4]. Different statistical methods have been used to build such models, including decision trees, neural networks and logistic regression [1, 7, 8]. Many works recommend using PCA [2, 10] to reduce the dimensionality of the input data and eliminate multicollinearity between the various metrics. Prediction of software regressions appears to be a less explored area. At the time of writing, only the work of A. Mockus and D. Weiss on predicting the risk of software changes [9] was known to the author. That work shows that the risk of a software change can be successfully predicted; fix metrics such as fix size, developer experience and the number of affected subsystems were used to predict the risk of software changes. The presented work extends the state of the art by considering additional metrics for regression prediction, such as pre-release code churn, dependency metrics, code complexity and object-oriented metrics. The relative importance of the different groups of metrics is also evaluated, and the best set of metrics is selected for building the model.
6. Experience and lessons learned In the presented paper we have shown that the risk of regression can be successfully predicted for a code change, and we have pointed to the metrics that are good predictors of regression risk. As a result of this study, a practical system for regression prediction has been developed and deployed in the Windows Serviceability team. Once development of a new fix is complete, the system automatically analyzes the changes caused by the fix: it extracts the fix metrics and calculates the risk of regression for that fix. In addition, the system conducts change impact analysis for the fix. The resulting report, presented to the test engineer, contains both the change impact information and the predicted risk of regression. These two pieces of knowledge complement each other: the change impact information tells the engineer which Windows components might be impacted because of dependencies on the changed components, and which tests should be run to verify the changed code. At the same time, the regression risk gives a hint about how much testing should be done: it is recommended that risky fixes with a high probability of regression receive more testing than low-risk fixes. The model was deployed in March 2008 and quickly gained popularity among test engineers: based on usage logs, we estimate that at least 70% of all test engineers in the Windows Serviceability team use regression risk reports in their work. To improve the model's accuracy, we plan to collect more data on different versions of Windows and to introduce more metrics, such as the presence of a code review for the fix. Also, in addition to logistic regression, we are going to experiment with different machine learning methods, such as Naïve Bayes and Support Vector Machines. To see whether the metrics selected by the model can serve as risk predictors for different types of software projects, we plan to evaluate our risk prediction model on other Microsoft products, such as SQL Server and Office. 7. References
How this Coding Standard is Organized - Identifiers - Noncompliant Code Examples and Compliant Solutions - Coding Conventions - Exceptions - Risk Assessment - Automated Detection - Related Vulnerabilities - Related Guidelines - CERT-CWE Mapping Notes - Bibliography This coding standard is organized into 15 chapters containing rules in specific topic areas followed by four appendices. Appendix A contains the bibliography. Appendix B lists the definitions of terms used throughout the standard. Appendix C lists the undefined behaviors from the C Standard, Annex J, J.2 [ISO/IEC 9899:2011], numbered and classified for easy reference. These numbered undefined behaviors are referenced frequently from the rules. Appendix D lists unspecified behaviors from the C Standard, Annex J, J.1 [ISO/IEC 9899:2011]. These unspecified behaviors are occasionally referenced from the rules as well. Most rules have a consistent structure. Each rule in this standard has a unique identifier, which is included in the title. The title and the introductory paragraphs define the rule and are typically followed by one or more pairs of noncompliant code examples and compliant solutions. Each rule also includes a risk assessment, related guidelines, and a bibliography (where applicable). Rules may also include a table of related vulnerabilities. The recommendations in this wiki are organized in a similar fashion. Identifiers Each rule and recommendation is given a unique identifier. These identifiers consist of three parts: - A three-letter mnemonic representing the section of the standard - A two-digit numeric value in the range of 00 to 99 - A suffix that represents the associated language or platform. - "-C" for the SEI CERT C Coding Standard - "-CPP" for the SEI CERT C++ Coding Standard - "-J" for the SEI CERT Oracle Coding Standard for Java - "-PL" for the SEI CERT Perl Coding Standard The three-letter mnemonic can be used to group similar coding practices and to indicate which category a coding practice belongs to. The numeric value is used to give each coding practice a unique identifier. Numeric values in the range of 00 to 29 are reserved for recommendations, and values in the range of 30 to 99 are reserved for rules. (The values used for the SEI CERT C++ Coding Standard are different.) Rules and recommendations are frequently referenced from the guidelines in this standard by their identifier and title. Here are some example identifiers with an explanation of each: - INT50-CPP. Do not cast to an out-of-range enumeration value - This identifier indicates a rule - "INT" stands for the Integer category - "50" is the unique identifier - "-CPP" stands for the C++ language - EXP00-J. Do not ignore values returned by methods - This identifier indicates a rule - "EXP" stands for the Expressions category - "00" is the unique identifier - "-J" stands for the Java language - FLP00-C. Understand the limitations of floating-point numbers - This identifier indicates a recommendation - "FLP" stands for the Floating Point category - "00" is the unique identifier - "-C" stands for the C programming language Noncompliant Code Examples and Compliant Solutions Noncompliant code examples illustrate code that violates the guideline under discussion. It is important to note that these are only examples, and eliminating all occurrences of the example does not necessarily mean that the code being analyzed is now compliant with the guideline.
Noncompliant code examples are typically followed by compliant solutions, which show how the noncompliant code example can be recoded in a secure, compliant manner. Except where noted, noncompliant code examples should contain violations only of the guideline under discussion. Compliant solutions should comply with all of the secure coding rules but may on occasion fail to comply with a recommendation. Coding Conventions Unless otherwise specified, all code should compile on a reasonably modern compiler when it is instructed to comply with the standard. For example, you can require GCC to conform to the C11 standard with the parameter --std=c11. Code that is only expected to run on a particular subset of platforms should have those platforms mentioned in the code's section header, e.g.: Compliant Solution (POSIX). Likewise, code that is only expected to run on more modern versions of C should indicate the oldest standard that supports it, e.g.: Compliant Solution (C99). In order to compile the code, you will need to include appropriate header files. For example, if the code invokes malloc(), you may need to include the stdlib.h header. Many code examples will contain ellipsis in comments. This indicates that the comment may be replaced by arbitrary code that satisfies the comment. A comment with only ellipsis suggests that the code may do anything. Proper error handling is a controversial subject, and many applications and libraries provide their own idiosyncratic error handling mechanisms. See Rule 12. Error Handling (ERR) and Rec. 12. Error Handling (ERR) for our guidelines on handling errors. When our code detects that an error condition might have occurred, and handling that error condition is not endemic to the guideline itself, we will use the comment: /* Handle Error */. This comment implies that the error is somehow addressed, so that the code does not fall through. The code may abort, or fix the error somehow. For example: ```c #include <stdlib.h>  /* for malloc() */

char *str = malloc(10);
if (str == NULL) {
  /* Handle Error */
}
/* ... str cannot be NULL here. Work with str ... */ ``` Exceptions Any rule or recommendation may specify a small set of exceptions detailing the circumstances under which the guideline is not necessary to ensure the safety, reliability, or security of software. Exceptions are informative only and are not required to be followed. Risk Assessment Each guideline in the CERT C Coding Standard contains a risk assessment section that attempts to provide software developers with an indication of the potential consequences of not addressing a particular rule or recommendation in their code (along with some indication of expected remediation costs). This information may be used to prioritize the repair of rule violations by a development team. The metric is designed primarily for remediation projects. It is generally assumed that new code will be developed to be compliant with the entire coding standard and applicable recommendations. Each rule and recommendation has an assigned priority. Priorities are assigned using a metric based on Failure Mode, Effects, and Criticality Analysis (FMECA) [IEC 60812]. Three values are assigned for each rule on a scale of 1 to 3 for severity, likelihood, and remediation cost. **Severity**—How serious are the consequences of the rule being ignored?
<table> <thead> <tr> <th>Value</th> <th>Meaning</th> <th>Examples of Vulnerability</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Low</td> <td>Denial-of-service attack, abnormal termination</td> </tr> <tr> <td>2</td> <td>Medium</td> <td>Data integrity violation, unintentional information disclosure</td> </tr> <tr> <td>3</td> <td>High</td> <td>Run arbitrary code</td> </tr> </tbody> </table> **Likelihood**—How likely is it that a flaw introduced by violating the rule can lead to an exploitable vulnerability? <table> <thead> <tr> <th>Value</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Unlikely</td> </tr> <tr> <td>2</td> <td>Probable</td> </tr> <tr> <td>3</td> <td>Likely</td> </tr> </tbody> </table> **Remediation Cost**—How expensive is it to comply with the rule? <table> <thead> <tr> <th>Value</th> <th>Meaning</th> <th>Detection</th> <th>Correction</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>High</td> <td>Manual</td> <td>Manual</td> </tr> <tr> <td>2</td> <td>Medium</td> <td>Automatic</td> <td>Manual</td> </tr> <tr> <td>3</td> <td>Low</td> <td>Automatic</td> <td>Automatic</td> </tr> </tbody> </table> The three values are then multiplied together for each rule. This product provides a measure that can be used in prioritizing the application of the rules. The products range from 1 to 27, although only the following 10 distinct values are possible: 1, 2, 3, 4, 6, 8, 9, 12, 18, and 27. Rules and recommendations with a priority in the range of 1 to 4 are Level 3 rules, 6 to 9 are Level 2, and 12 to 27 are Level 1. The following are possible interpretations of the priorities and levels. ### Priorities and Levels <table> <thead> <tr> <th>Level</th> <th>Priorities</th> <th>Possible Interpretation</th> </tr> </thead> <tbody> <tr> <td>L1</td> <td>12, 18, 27</td> <td>High severity, likely, inexpensive to repair</td> </tr> <tr> <td>L2</td> <td>6, 8, 9</td> <td>Medium severity, probable, medium cost to repair</td> </tr> <tr> <td>L3</td> <td>1, 2, 3, 4</td> <td>Low severity, unlikely, expensive to repair</td> </tr> </tbody> </table> Specific projects may begin remediation by implementing all rules at a particular level before proceeding to the lower priority rules, as shown in the following illustration: Recommendations are not compulsory and are provided for information purposes only. ### Automated Detection Both rules and recommendations frequently have sections that describe automated detection. These sections provide additional information on analyzers that can automatically diagnose violations of coding guidelines. Most automated analyses for the C programming language are neither sound nor complete, so the inclusion of a tool in this section typically means that the tool can diagnose some violations of this particular rule. The Secure Coding Validation Suite can be used to test the ability of analyzers to diagnose violations of rules from ISO/IEC TS 17961:2013, which is related to the rules in this standard. The information in the automated detection sections on this wiki may be - provided by the vendors - determined by CERT by informally evaluating the analyzer - determined by CERT by reviewing the vendor documentation Where possible, we try to reference the exact version of the tool for which the results were obtained. Because these tools evolve continuously, this information can rapidly become dated and obsolete. ### Related Vulnerabilities The risk assessment sections on the wiki also contain a link to search for related vulnerabilities on the CERT website. 
Specific projects may begin remediation by implementing all rules at a particular level before proceeding to the lower-priority rules.

Recommendations are not compulsory and are provided for information purposes only.

### Automated Detection

Both rules and recommendations frequently have sections that describe automated detection. These sections provide additional information on analyzers that can automatically diagnose violations of coding guidelines. Most automated analyses for the C programming language are neither sound nor complete, so the inclusion of a tool in this section typically means that the tool can diagnose some violations of this particular rule. The Secure Coding Validation Suite can be used to test the ability of analyzers to diagnose violations of rules from ISO/IEC TS 17961:2013, which is related to the rules in this standard.

The information in the automated detection sections on this wiki may be

- provided by the vendors
- determined by CERT by informally evaluating the analyzer
- determined by CERT by reviewing the vendor documentation

Where possible, we try to reference the exact version of the tool for which the results were obtained. Because these tools evolve continuously, this information can rapidly become dated and obsolete.

### Related Vulnerabilities

The risk assessment sections on the wiki also contain a link to search for related vulnerabilities on the CERT website. Whenever possible, CERT Vulnerability Notes are tagged with a keyword corresponding to the unique ID of the coding guideline. This search provides you with an up-to-date list of real-world vulnerabilities that have been determined to be at least partially caused by a violation of this specific guideline. These vulnerabilities are labeled as such only when the vulnerability analysis team at the CERT/CC is able to evaluate the source code and precisely determine the cause of the vulnerability. Because many vulnerability notes refer to vulnerabilities in closed-source software systems, it is not always possible to provide this additional analysis. Consequently, the related vulnerabilities field tends to be somewhat sparsely populated. Related vulnerability sections are included only for specific rules in this standard, when the information is both relevant and interesting.

### Related Guidelines

For each entry in a Related Guidelines table, CERT has determined that there is some code flaw for which there is both a violation of some condition of the CERT guideline and a condition of the external-to-CERT guideline, where that condition is violated or that condition is described as a flaw.

**Related Guidelines table headings definitions**

- **Taxonomy:** A named set of coding rules, weaknesses, standards, or guidelines, such as Information Technology—Programming Languages, Their Environments and System Software Interfaces—C Secure Coding Rules [ISO/IEC TS 17961:2013]; Information Technology—Programming Languages—Guidance to Avoiding Vulnerabilities in Programming Languages through Language Selection and Use [ISO/IEC TR 24772:2013]; MISRA C 2012: Guidelines for the Use of the C Language in Critical Systems [MISRA C:2012]; and CWE IDs in MITRE's Common Weakness Enumeration (CWE) [MITRE 2010].
- **Taxonomy item:** A single named (and/or numbered) item in a taxonomy.
- **Relationship:** The nature of the overlap, as defined above: some code flaw violates both a condition of the CERT guideline and a condition of the external guideline. These relationships may be defined in a precise or an imprecise way.

For Common Weakness Enumeration (CWE), CERT has made precise mappings between CERT C rules and CWEs, as described below. For other taxonomies of coding flaws or secure coding (such as MISRA or ISO/IEC TR 24772:2013), CERT has so far made only imprecise ("Unspecified Relationship") mappings. An "Unspecified Relationship" label indicates that there is some overlapping code-flaw condition, but the extent of the overlap is unspecified. If the mapping was made using an automated process developed by CERT and has not yet been verified manually, the mapping is marked at the end with "(A)".

Precise relationships explain more about the extent to which conditions of the CERT guideline and the external guideline match. In the simplest case, the guidelines are exactly equal (the relationship is labeled "Exact"). CERT's "partial mapping" terms ("Partial overlap", "Guideline subset of <EXTERNAL_GUIDELINE>", "<EXTERNAL_GUIDELINE> subset of rule") describe relationships between the guideline items using the language of sets, where a guideline item (a CERT guideline or an <EXTERNAL_GUIDELINE> entry) is a set that holds one or more conditions.
By subset we mean a proper subset: "A ⊂ B" means that every element (that is, every condition) in A is also in B, but there exists at least one element in B that is not in A. If a condition of a program violates a CERT rule "R" and also exhibits an <EXTERNAL_GUIDELINE> "E", that condition is in the overlap between "R" and "E". For each CWE that has a partial mapping to a CERT rule, we have documented the nature of what the rule and the CWE have in common, what is exclusive to the rule, and what is exclusive to the CWE, in a section titled "CERT-CWE Mapping Notes".

The 10 main precise relationship labels CERT uses are mostly the same as the 10 CWE Mapping Fit relationship labels, with 3 labels that differ:

<table> <thead> <tr> <th>CERT term</th> <th>MITRE term</th> </tr> </thead> <tbody> <tr> <td>Rule subset of CWE</td> <td>CWE_More_Abstract</td> </tr> <tr> <td>CWE subset of rule</td> <td>CWE_More_Specific</td> </tr> <tr> <td>Partial overlap</td> <td>Imprecise</td> </tr> </tbody> </table>

An 11th label, "None", is used in cases where previous mappings existed but it has been determined that there is no overlap of conditions.

**Table column formats:**

- **Taxonomy:** Taxonomy name (e.g., "CWE") followed by the version name that was mapped, if known (e.g., "CWE 2.11", "CERT 2016", or "MISRA").
- **Taxonomy item:** A single named (and/or numbered) item in a taxonomy, sometimes with the full title text of the item and sometimes with a hyperlink to the item.
- **Relationship:** A combined entry with fields for the date mapped, the organization that did the mapping, and the relationship (all in the same cell for one mapping, separated by colons, with one entry per line): <optional "Prior to " YYYY-MM-DD: ORGANIZATION: RELATIONSHIP (optional "(A)")>.

Where specified by mapping date, precise mappings done by CERT use the latest published edition of a non-CERT taxonomy along with the latest published edition of the CERT standard plus changes on the CERT wiki. Precise mappings done by an external organization use the latest published edition of the CERT standard and their own latest published edition.

Example entry, in the same cell on different lines:

2017-10-31: CERT: CERT Subset of CWE

Example entry for a different mapping, where the exact mapping date is unknown but is known to be before October 03, 2017:

Prior to 2017-10-03: CERT: Unspecified Relationship

Example entry for a different mapping that was made using an automated process and not yet manually verified:

Prior to 2017-09-05: CERT: Unspecified Relationship (A)

The related guidelines sections contain links to guidelines in related standards, technical specifications, and guideline collections such as ISO/IEC TS 17961:2013, ISO/IEC TR 24772:2013, MISRA C:2012, and CWE IDs in MITRE's CWE (the full titles are given under "Taxonomy" above). You can create a unique URL to get more information on CWEs by appending the relevant ID to the end of a fixed string.
For example, to find more information about CWE-192, "Integer Coercion Error", you can append `192.html` to `http://cwe.mitre.org/data/definitions/` and enter the resulting URL in your browser: http://cwe.mitre.org/data/definitions/192.html. The other referenced technical specifications, technical reports, and guidelines are commercially available.

### CERT-CWE Mapping Notes

CERT's "partial mapping" terms {Partial overlap, Rule subset of CWE, CWE subset of rule} describe relationships between the taxonomy items using the language of sets, where a taxonomy item (a CERT rule or a CWE weakness) is a set that holds one or more conditions. If a condition of a program violates a CERT rule "R" and also exhibits a CWE weakness "W", that condition is in the overlap between the rule and the weakness. For each CWE that has a partial mapping to a CERT rule, in this section we document the nature of what the rule and the CWE have in common, what is exclusive to the rule, and what is exclusive to the CWE. Sometimes what is exclusive or shared is simply described by set equations using taxonomy items; at other times the documentation of what is shared may include function names or data types, or some prose description of shared characteristics.

Notation: "Intersection(A, B) =": the right side of the equals sign defines the overlap between A and B, that is, the conditions of a program that lie in the overlapping area between A and B.

Notation: "A ⊂ B" means every element in A is also in B, but there exists at least one element in B that is not in A. (That is, A is a proper subset of B.)

Notation: "A ⊄ B" means A is not a proper subset of B.

Notation: "Intersection(A, B) = ∅" means no element in A is also in B. (That is, the intersection is the empty set.)

Notation: "A - B =": the right side of the equals sign defines what element(s), if any, exist in A that are not also in B.

No CERT C rule or recommendation is identical to any other CERT C rule or recommendation. In this section, Independent() means that all the rules listed within are independent; that is, every pair of rules listed has an empty intersection. Most CERT rules are designed to be independent, that is, to have no overlap. (This applies only to rules, not recommendations.)

For a CWE that has been identified as having at least a partial overlap with another CERT rule R1, in addition to the current mapping to CERT rule R2, the mapping notes consider the possible overlap or exclusion between the CWE's overlap area and R1. We also consider the relationship of R1 and R2, if any. (By defining the relationship between the CERT rules that separately have at least some overlap with the CWE of interest, the mapping notes further define the conditions of overlap and/or non-overlap of the primary CWE-to-CERT-rule mapping of interest.)

Regarding partial overlap, we try to find segments of code as examples that are inseparable and exhibit both code flaws. An example of separable code: the following lines violate rules about integer overflow and floating-point overflow, respectively, but that does not mean that the rules about integer overflow and floating-point overflow overlap:

```c
INT_MAX + 1;   /* integer overflow */
FLT_MAX + 1.0; /* floating-point overflow */
```

By contrast, in the following example the flaws are inseparable, because the same values flow through each violation:

```c
static char x[3];

char *foo(void) {
  int x_int = (int) x;          /* x_int == 999, for example */
  return (char *)(x_int + 5);   /* returns 1004; violates CWE-466 */
}

/* ... */

int y_int = (int) foo();        /* violates CWE-466 */
char *y = (char *) y_int;       /* well-defined, but y may be invalid; violates INT36-C */
char c = *y;                    /* indeterminate value, out-of-bounds read; violates CWE-119 */
```
### Bibliography

Most guidelines have a small bibliography section that lists documents and sections in those documents that provide information relevant to the guideline.
{"Source-Url": "https://wiki.sei.cmu.edu/confluence/download/temp/pdfexport-20220929-290922-0052-1093/c-HowthisCodingStandardisOrganized-290922-0052-1094.pdf?contentType=application/pdf", "len_cl100k_base": 4757, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 14198, "total-output-tokens": 4920, "length": "2e12", "weborganizer": {"__label__adult": 0.00026702880859375, "__label__art_design": 0.00019431114196777344, "__label__crime_law": 0.0006265640258789062, "__label__education_jobs": 0.0005536079406738281, "__label__entertainment": 3.892183303833008e-05, "__label__fashion_beauty": 0.00010836124420166016, "__label__finance_business": 0.0001455545425415039, "__label__food_dining": 0.00026226043701171875, "__label__games": 0.0004546642303466797, "__label__hardware": 0.0005183219909667969, "__label__health": 0.00020492076873779297, "__label__history": 0.00010448694229125977, "__label__home_hobbies": 5.59687614440918e-05, "__label__industrial": 0.00023317337036132812, "__label__literature": 0.00018274784088134768, "__label__politics": 0.00018668174743652344, "__label__religion": 0.0002989768981933594, "__label__science_tech": 0.0030364990234375, "__label__social_life": 5.668401718139648e-05, "__label__software": 0.005718231201171875, "__label__software_dev": 0.986328125, "__label__sports_fitness": 0.0001976490020751953, "__label__transportation": 0.00021088123321533203, "__label__travel": 0.00012040138244628906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20471, 0.01442]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20471, 0.61931]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20471, 0.91166]], "google_gemma-3-12b-it_contains_pii": [[0, 4157, false], [4157, 7726, null], [7726, 9906, null], [9906, 15505, null], [15505, 20303, null], [20303, 20471, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4157, true], [4157, 7726, null], [7726, 9906, null], [9906, 15505, null], [15505, 20303, null], [20303, 20471, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20471, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20471, null]], "pdf_page_numbers": [[0, 4157, 1], [4157, 7726, 2], [7726, 9906, 3], [9906, 15505, 4], [15505, 20303, 5], [20303, 20471, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20471, 0.14451]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
cbb326b71b13125679a8d3bd4b055915fc7084c2
A 360-degree process improvement approach based on multiple models

César Jesús Pardo-Calvache1*, Félix Oscar García-Rubio2, Mario Gerardo Piattini-Velthuis2, Francisco José Pino-Correa1, Maria Teresa Baldassarre1

1 Departamento de Sistemas, Facultad de Ingeniería Electrónica y Telecomunicaciones, Universidad del Cauca. Calle 5 # 4-70. C. P. 190002. Popayán, Colombia
2 Grupo de investigación IDIS, Facultad de Ingeniería Electrónica y Telecomunicaciones, Universidad del Cauca. Calle 5 # 4-70. C. P. 190002. Popayán, Colombia

ABSTRACT

Several models and methodologies have been defined to support organisational process improvement. The implementation and institutionalisation of these approaches allow organisations to improve, mature, acquire and institutionalise best practices and management systems from multiple approaches. However, two issues have to be kept in mind. On the one hand, it is possible to find several similarities amongst improvement, management and governance approaches. Experts and practitioners can thereby save, improve and optimize organisational efforts by using the best parts of existing models as building blocks; they can thus be prepared to deconstruct models, aiming for their designs to meet multiple needs. On the other hand, there are other factors which may influence, for example, compliance, as well as aspects related to structural differences such as terminology, size, process, element structure, content, granularity, and complexity, which make it difficult to work in multi-model environments. This being the case, the people involved need a map or guideline telling them how to carry out the harmonisation of the models and standards that have to be implemented inside their organisations. To help support the work of harmonizing multiple models, this paper presents a framework that defines the elements needed for the harmonization of multiple reference models, as well as its application in three case studies. The results obtained show that the proposed framework has allowed the harmonization of several models.

Keywords: Harmonization of multiple models and standards, software process improvement (SPI), models, standards

1. Introduction

Currently, there is a wide range of models that have been developed and which can be taken as a reference model (RM) to improve processes inside an organization. In 1999, for instance, Moore identified approximately 315 standards, guides, handbooks, and other prescriptive documents which were taken as RMs and maintained by 46 different organizations [1]. Nowadays, RMs provide best practices to cover different needs; e.g. information security management systems (ISMS) such as ISO 27001; IT governance and service management frameworks including ITIL, COBIT, ISO 20000 and CMMI-SVC; quality management systems like ISO 9001, EFQM and Six Sigma; and even models for much more specific domains such as software development, maintenance and acquisition (CMMI-DEV, CMMI-ACQ, ISO 90003, ISO 15504, ISO 12207), and so forth. Some models are widely used in industry to improve organizational competitiveness, while others are required as mandatory standards and become a regulation method in certain market niches. Organizations can benefit from this high number of models and standards when assessing and institutionalizing new or improved processes and, as a consequence, become more competitive and produce high-quality products [2].
Independently of the model to be used, its implementation requires specific experience and knowledge, along with a high degree of effort and investment, as key factors for its success. All this implies that the task is not easy and that there is a significant risk of failure [3]. One of the most important things about the huge amount and variety of models to select from is that they can be applied to support multiple needs [4]; however, this proliferation can leave organizations confused about which model is the most suitable for them. In addition, there are other issues that need to be resolved; for instance, how to reconcile the structural differences, size and terminology of multiple models. We cannot forget that each model has its own features, which are reflected in its approach, process structure, definitions, concepts and vocabulary, amongst other things. Although this scenario can be quite heterogeneous, it is possible to find some relationships between different models on the basis of characteristics they have in common; e.g. models with similar approaches, such as the ISO models, usually share similar quality objectives and, therefore, comparable practices [5]. Companies can benefit from this, because implementing multi-model processes from shared quality goals reduces the costs of adopting multiple models [6]. However, not all relationships are easy to establish between all models. Furthermore, the models are not always implemented by the same body at the same time inside a company. These dissimilar organizational points of view cause a problem of compliance between models and standards; e.g. the structural differences between COBIT and ISO 9001 make it difficult to establish their overlap. This disagreement causes difficulties in understanding the models, together with compliance and unification issues in their adoption, which in turn implies greater effort, time consumption and associated costs than when only one model or standard is used for process improvement. Problems have also arisen concerning ambiguity, instability, subjectivity, incompatibility and transformability, as well as the benchmarking of process elements [7]. Currently, software organizations need guidance in identifying and resolving the differences and similarities between the multiple models they may implement in order to improve their processes. Therefore, in an attempt to offer a solution that facilitates the harmonization of multiple models, this paper presents a Harmonization Framework (HFramework), a solution that provides a 360-degree approach to support multi-model process improvement, i.e. improvement in which several different models need to be implemented and institutionalized in a company. The findings obtained from the application of this harmonization proposal in three case studies show that it allows different models to be used when carrying out software process improvement in a systematic manner. The paper proceeds as follows. Section 2 presents an analysis of related work. Section 3 gives an overview of HFramework, which delimits a set of elements for defining suitable harmonization strategies that support strategic business objectives by bringing into consonance the differences between multiple models. Section 4 describes the research methods applied in the case studies. Section 5 summarizes the three case studies where HFramework was applied.
Section 6 partially exemplifies a unified practice between ISO 9001 and CMMI, also illustrating how HFramework supports the integration of models. Section 7 presents the lessons learned, and Section 8 the conclusions and upcoming future work.

2. Related works

Some early works provide interesting proposals and show a growing interest in recent years, on the part of the software engineering community, in process improvement environments where multiple models are involved. Figure 1a presents the studies found, organized in five periods of five years each, from 1990 to 2015. It is important to highlight that this analysis does not include studies that may still appear in 2015. In Figure 1a it is possible to notice an increase in the number of studies published lately; i.e. many researchers are interested in this research field affecting the software industry. Part of this growing interest is due to the fact that governments are paying more attention to the software industry. As a result, it is possible to find laws with more benefits for this sector, one of them being process and practice improvement in the small and medium enterprises that currently occupy a representative place in worldwide software development. Figure 1b shows the percentage of studies organized by the following features: (i) studies presenting a solution based on mappings in a unilateral direction, (ii) studies describing ontologies to represent the key elements of particular domains, and (iii) studies providing a solution for supporting multi-model environments; the latter group provides solutions to support the implementation of more than two models at the same time. Table 1 summarizes the studies classified by the features mentioned above.

### Table 1 Classification of studies related to the harmonization of multiple models

<table> <thead> <tr> <th>Main feature</th> <th>Description</th> <th>Observation</th> </tr> </thead> <tbody> <tr> <td>1) Mappings in a unilateral direction</td> <td>Most proposals carry out the mappings in a single direction, with the process structure of a base model used as the main structure; e.g. the well-known mappings of ISO to CMMI performed in [8, 9].</td> <td>This solution is appropriate if the objective is focused on instantiating the right practices of the base model from the beginning, a situation impossible to replicate when the needs of the organizations are different.</td> </tr> <tr> <td>2) Development of ontologies to represent the key elements of particular domains</td> <td>Among others, some studies have focused mainly on the development of ontologies to represent the key elements of particular domains; e.g.
an ontology for representing the CMM-SW model is presented in [10], and an ontology developed taking SWEBOK as its basis is presented in [11].</td> <td>These ontologies have been defined mainly with the aim of understanding the structure of process-based quality approaches.</td> </tr> <tr> <td>3) Studies that provide a solution for supporting multi-model environments</td> <td>Also in recent years, we have identified a few efforts related to harmonizing multiple models, such as the PrIME project of the Software Engineering Institute, Enterprise SPICE [12], and the alignment of COBIT 4.1, ITIL V3 and ISO/IEC 27002 for business benefit [13], among others.</td> <td>Few of them, however, have proposed solutions to resolve the problems and structural differences arising between the models that are being harmonized, or that need to be harmonized, in order to suit the needs of an organization.</td> </tr> </tbody> </table>

In the light of the situation described above, the following sections propose a solution to support the harmonization of multiple models.

3. Supporting multi-model process improvement with HFramework

HFramework was developed to provide the conceptual, methodological and technological support necessary to facilitate the harmonization of multiple models. Figure 2 shows the elements of HFramework.

3.1. Conceptual Framework

The conceptual framework provides the means necessary to understand the complexities involved in aligning multiple models. To this end, the conceptual view consists of the following elements:

- **Harmonization of Multiple Models Ontology (H2mO):** H2mO provides formal and clear support for the most widely used methods, concepts, relationships and related terms in the harmonization of multiple models. A detailed description of the H2mO ontology and its application in a real context is presented in [14].
- **Process-reference Models Ontology (PrMO):** an ontology of process-reference models which establishes the key elements used to express process-based approaches. From PrMO, a Common Structure of Process Elements (CSPE) has been defined, along with a homogenization technique to facilitate the harmonization of different models [15].

3.2. Methodological Framework

This describes a systematic set of activities, tasks and roles to support the efforts related to the application of a suitable strategy facilitating the harmonization of multiple models, and it consists of the following elements:

- **Harmonization Process (HProcess):** provides a process and the elements necessary to support the systematic management and implementation of harmonization projects. A detailed description of HProcess and its activities, tasks, roles, work products, templates and other elements, modelled with EPF Composer, can be seen at http://alarcos.esi.uclm.es/armonias/ and in [16].
- **Harmonization Methods (HMethods):** a set of methods taken as the basis for configuring a systematic harmonization strategy to be executed in order to harmonize multiple models. The harmonization strategy, or HStrategy, is the work product resulting from the implementation of HProcess; it describes the activities to follow in order to support the harmonization of multiple models according to the business objectives of the organization.
Currently, HMethods provides three methods to support the HStrategy: a homogenization method (HoMethod) for harmonizing the structural differences between multiple models, a comparison method (CoMethod) for identifying the differences and similarities between multiple models [17], and an integration method (IMethod) for combining and unifying the best practices of multiple models. Likewise, the CSPE, a template defined from the process-element structure established in PrMO, puts the models into the same structure; it homogenizes them and makes both their comparison and their integration easier.

3.3. Technological Environment

This comprises HProcessTOOL, which supports the management of harmonization projects (planning, monitoring and control), as well as their execution, by automating the techniques defined by HFramework; details can be seen in [16].

4. Research methods

The methodologies guiding this research project were Action Research and Case Studies, applied in an integrated way. This section describes the research strategy defined for this project in terms of its roles, participants and relationships. A detailed description of the case studies, the harmonization framework and its process, templates and the findings obtained through its implementation is presented in [18]. We considered the following participants: the researchers' group, the researched object, the critical reference group and the stakeholders (see Table 2).

### Table 2 Participants in the research project

<table> <thead> <tr> <th>Participants</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>The researchers' group</td> <td>This group was formed by the Alarcos research group (professors from the School of Computer Science at the University of Castilla-La Mancha, Ciudad Real, Spain); the first author is a member of this group. It consisted of 4 people, 3 advisers and the first author, and was divided into: research managers (RM), responsible for managing the harmonization projects; process/method researchers (P/M-R), in charge of developing the components of HFramework; a performer (P), who carried out the implementation of HFramework in the critical reference group case studies; and an adviser, who supported the implementation of the framework carried out by the performer, as well as any questions raised by the RM and P/M-R.</td> </tr> <tr> <td>Critical reference group</td> <td>The context in which the proposed framework was applied was as follows: (i) the harmonization of two models to support the implementation of an information management framework, which integrates views of ISO 27001 and ISO 20000-1; (ii) the harmonization of six models, ISO 27002, ITIL, Risk IT, Val IT, COBIT and BASEL, in order to define an integrated model for the banking sector; and (iii) the establishment of a system of relationships between each of the ISO models and CMMI-DEV. These activities took place in companies from Spain, Guatemala and Italy. Section 5 shows more details of the case studies.</td> </tr> <tr> <td>Stakeholders of the research</td> <td>All companies that can benefit from the results of this work; i.e., any enterprise that needs to carry out the harmonization of multiple models and, more specifically, those taking part in the case studies, which have all benefited from the results obtained (see Section 5).</td> </tr> </tbody> </table>

The participants in this research were (see Figure 3): the critical reference group, the researchers, and the stakeholders.
The critical reference group comprised three case studies: case study 1, carried out in a Spanish company; case study 2, within a project for the banking sector; and case study 3, in an Italian spin-off. These case studies allowed our proposal to be validated. A more detailed description of these case studies can be found in Section 5.

5. Case studies

HFramework has been applied in three harmonization projects. Table 3 lists them and summarizes a few of their features. The case studies were carried out following the approach presented in [19]. The design type of the case studies is 'multiple cases, holistic'. This is because HProcess has been implemented in three different cases in which multiple models have been harmonized. The main research question to solve was whether HProcess is suitable for carrying out the harmonization of multiple reference models. A harmonization strategy, or HStrategy, was defined in each case study. This allowed both the effort and the participants to be organized around the harmonization project implemented in each case study. The HStrategy in each case study involved the homogenization of the differences between the models along with their comparison, allowing the identification of the relationships between the models and of how they can complement each other. As shown in Table 3, only case study 3 needed practices to be integrated in order to define a new model. In this paper, we show an example of how we performed the integration of practices. A detailed description of the case studies, their HStrategies and the findings is presented in [18].

### Table 3 Harmonization projects supported by HFramework

<table> <thead> <tr> <th>Harmonization case</th> <th>Country</th> <th>General objectives</th> <th>Models harmonized</th> <th>Final results</th> </tr> </thead> <tbody> <tr> <td>1. SER&amp;Practices</td> <td>Italy</td> <td>To establish a relationship system which allows organizations to ascertain how the ISO models and CMMI are related.</td> <td>ISO 9001, ISO 27001, ISO 20000 and CMMI-Dev V1.3.</td> <td>A coverage and relationship system between the models analysed.</td> </tr> <tr> <td>2. Audisec, focused on consultancy and support for ISO 20000 and ISO 27001 certification (<a href="http://www.audisec.es">www.audisec.es</a>).</td> <td>Spain</td> <td>To facilitate the certification of organizations to the ISO 20000 standard by taking into account the efforts made in previous certifications obtained for ISO 27001.</td> <td>ISO 27001 and ISO 20000-2.</td> <td>A system of coverage and relationships between the models analysed [18].</td> </tr> <tr> <td>3. Research project for IT governance and banks.</td> <td>Guatemala</td> <td>To support different needs identified in information technology governance as applicable to the Superintendence of Banks in Guatemala, and to the banking sector in general.</td> <td>BASEL, RISK IT, VAL IT, ITIL, ISO 27001 and COBIT 4.1.</td> <td>Definition of an integrated IT governance model for banking, called ITGSM [20].</td> </tr> </tbody> </table>

Figure 4 shows an example of the final results obtained from harmonization case 1. As can be seen in Figure 4a, out of the 22 process areas (PAs) defined in CMMI, we found that 21 are supported by the ISO models and that only one, Decision Analysis and Resolution (DAR), is not. Furthermore, ISO 9001 is largely related (76%), ISO 27001 is partially related (22%), and ISO 20000-2 is weakly related (2%), meaning that the clauses of ISO 9001 provide greater support than those of the other ISO models.
However, as presented in Figure 4b, ISO 27001 offers support in PAs which ISO 9001 does not address or for which it provides less support; e.g. Risk Management (RSKM), Measurement and Analysis (MA), and Organizational Training (OT). The same is true for ISO 20000-2, which offers support in a few PAs: OT, Project Planning (PP) and Project Monitoring and Control (PMC).

Aiming to reduce the time and effort spent on comparisons, we use a comparison approach that applies the transitive property of equality (the TPE approach); i.e. if \( X(x) \) maps to \( Y(y) \) and \( Y(y) \) maps to \( Z(z) \), then \( X(x) \) also maps to \( Z(z) \). In that sense, practitioners can use the results of previous comparisons to establish multiple mappings. For example, the objective in harmonization case 3 was to compare CMMI with ISO 9001, ISO 27001 and ISO 20000-2. To this end, we carried out the comparisons between CMMI and ISO 9001, and between CMMI and ISO 20000-2. Then we took the results of the comparison performed in harmonization case one between ISO 27001 and ISO 20000-2, and used it as a bridge to carry out the comparison between CMMI-Dev and ISO 27001. We concluded: if CMMI-Dev maps to ISO 27001 and ISO 27001 maps to ISO 20000-2, then CMMI-Dev maps to ISO 20000-2. Companies can apply this rule to the descriptions of the practices stated by each model. It allows establishing whether the practices of a model X and a model Z really have something in common. This simple rule can help companies, practitioners and process engineers to find relationships between multiple models from existing comparisons and, thus, reduce the effort involved. Studies performed by Dirk Malzahn of OrgaTech GmbH [21] have shown that performing an assessment with this approach reduced effort by 25-40%. In addition, the TPE approach can be applied in two ways: to a practice as a whole (seeing it as a cell), or to its elements (seeing these as its organelles). The type of application will therefore depend on the level of detail at which the comparisons have been made. On the other hand, it is also important to emphasize that, due to the nature of the TPE approach, the comparisons \( X(x) \) maps to \( Y(y) \) and \( Y(y) \) maps to \( Z(z) \) are strictly necessary; without them, the approach is impossible to apply. A small illustrative sketch of the TPE rule is given below.
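As an illustration only (this sketch is ours, not part of HFramework or HProcessTOOL, and the practice identifiers are hypothetical placeholders), the TPE rule amounts to composing two mapping relations:

```c
#include <stdio.h>
#include <string.h>

/* A pair (from, to) stating that a practice of one model maps to a
 * practice/clause of another. The identifiers below are hypothetical. */
struct mapping { const char *from; const char *to; };

/* Known mappings X -> Y (e.g., CMMI-Dev to ISO 27001) */
static const struct mapping x_to_y[] = {
  { "CMMI-Dev:SP-A", "ISO27001:clause-B" }
};

/* Known mappings Y -> Z (e.g., ISO 27001 to ISO 20000-2) */
static const struct mapping y_to_z[] = {
  { "ISO27001:clause-B", "ISO20000-2:clause-C" }
};

int main(void) {
  /* TPE: whenever X(x) maps to Y(y) and Y(y) maps to Z(z),
   * propose the candidate mapping X(x) -> Z(z). */
  for (size_t i = 0; i < sizeof x_to_y / sizeof x_to_y[0]; i++) {
    for (size_t j = 0; j < sizeof y_to_z / sizeof y_to_z[0]; j++) {
      if (strcmp(x_to_y[i].to, y_to_z[j].from) == 0) {
        printf("candidate: %s -> %s (via %s)\n",
               x_to_y[i].from, y_to_z[j].to, x_to_y[i].to);
      }
    }
  }
  return 0;
}
```

As the text above notes, such derived candidate mappings should still be checked against the actual descriptions of the practices in models X and Z.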
The homogenization of model structure, the identification of relationships, and the integration of the models, guided by HFramework and its artefacts, allowed the companies to obtain successful results according to the needs of each case. In the first case study, a relationship system was defined between ISO 9001, ISO 27001, ISO 20000-2 and CMMI-DEV, which makes it possible to ascertain their mutual coverage and take advantage of their relationships and, consequently, reduce the effort involved in their application. Similarly, on the basis of the results obtained and the experience gained through the harmonization project, the company participating in the second case study developed a software tool for supporting and managing the transition and improvement from ISO 27001 to ISO 20000-2 [18]. In the third case study, it was possible to define an integrated IT governance model for banking, which is to be applied in the Guatemalan banking sector [21].

6. Supporting the integration of models

HFramework also supports the integration of models. In this regard, and on the basis of the results obtained, Table 4 presents a partial example of our unified model, showing how the integration of two practices is implemented.

The unified practice column shows the content of a unified practice, which integrates the content of clause 8.5.3 concerning preventive action from ISO 9001:2008 and the specific practices (SPs) of Causal Analysis and Resolution (CAR). The result is a combination of best practices in a single practice. The CMMI relationship column indicates whether there is a relationship between the content of the unified practice and CMMI; it shows that ISO clause 8.5.3 corresponds to CAR SPs 1.2, 2.1, 2.2 and 2.3. The explanation column offers additional information. Square brackets indicate information added in the unified practice, and angle brackets indicate deleted content. The final result is a unified practice which shares the quality goals of the two models (see Table 5). From this type of practice, it is possible to define a multi-model process that fulfils two quality approaches. On the other hand, the institutionalization of a multi-model process makes it possible to reduce the costs associated with the implementation of the models, by not implementing each one separately. Moreover, it allows assessment costs to be reduced, since the unified requirements addressed during the ISO assessment will not be taken into account again during the CMMI assessment.

### Table 4 Partial example of a unified practice between ISO 9001 and CMMI

<table> <thead> <tr> <th>Unified Practice</th> <th>CMMI relationship</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td>Clause 8.5.3. Preventive action.</td> <td></td> <td></td> </tr> <tr> <td>The organization shall determine actions to eliminate the causes of potential nonconformities in order to prevent their occurrence.</td> <td>Purpose of CAR.</td> <td>This satisfies the purpose of CAR.</td> </tr> <tr> <td>Preventive actions shall be appropriate to the effects of potential problems.</td> <td>No relationship</td> <td></td> </tr> <tr> <td>A documented procedure shall be established to define requirements for:</td> <td>No relationship</td> <td></td> </tr> <tr> <td>a) determining and analyzing potential nonconformities and their causes &lt;&lt;,&gt;&gt; and proposing actions to address them.</td> <td>CAR, SP 1.2</td> <td>Practices are focused on determining nonconformities and their causes.</td> </tr> <tr> <td>b) evaluating the need for actions to prevent occurrence of nonconformities.</td> <td>CAR, SP 1.2</td> <td>This satisfies CAR SP 1.2.</td> </tr> <tr> <td>c) determining and implementing actions needed &lt;&lt;,&gt;&gt; implementing selected action proposals developed in causal analysis.</td> <td>CAR, SP 2.1</td> <td>"Actions needed" is related to CAR's action proposals.</td> </tr> <tr> <td>d) records of results of actions taken [see 4.2.4] &lt;&lt;,&gt;&gt; causal analysis and resolution data for use across projects and throughout the organization, and</td> <td>CAR, SP 2.3</td> <td>Both keep records.</td> </tr> <tr> <td>e) evaluating and reviewing the effectiveness of the preventive actions taken [on process performance].</td> <td>CAR, SP 2.2</td> <td>Both review the effectiveness of actions taken.</td> </tr> </tbody> </table>

### Table 5 Unified practice between ISO 9001 and CMMI

<table> <thead> <tr> <th>Unified practice</th> </tr> </thead> <tbody> <tr> <td>Clause 8.5.3.
Preventive action.</td> </tr> <tr> <td>The organization shall determine actions to eliminate the causes of potential nonconformities in order to prevent their occurrence, identifying causes of selected outcomes, and initiate activity to improve process performance.</td> </tr> <tr> <td>Preventive actions shall be appropriate to the effects of the potential problems.</td> </tr> <tr> <td>A documented procedure shall be established to define requirements for:</td> </tr> <tr> <td>a) determining and analysing potential nonconformities and their causes, performing causal analysis of selected outcomes, and proposing actions to address them,</td> </tr> <tr> <td>b) evaluating the need for action to prevent the occurrence of nonconformities,</td> </tr> <tr> <td>c) determining and implementing the actions needed, implementing selected action proposals developed in causal analysis,</td> </tr> <tr> <td>d) recording the results of actions taken [see 4.2.4] and causal analysis and resolution data for use across projects and throughout the organization, and</td> </tr> <tr> <td>e) evaluating and reviewing the effectiveness of the preventive actions taken on process performance.</td> </tr> </tbody> </table>

### 7. Lessons learned

From the results obtained after putting this proposal into practice, we have learned several lessons, reported below, which we believe can be taken as useful guidelines when multiple models are being harmonized.

- Organizations can benefit from this heterogeneity and variety if they suitably select and complement those processes from the models that best fit their contexts.
- Several factors, such as structural and terminological differences, size and approach, amongst others, have an impact on harmonization projects. However, the models are not totally incompatible, and these differences can thereby be reconciled through different methods and analyses; e.g. the structural differences found between the specific practices of CMMI and the clauses of the ISO models in harmonization case three.
- There was a reduction of complexity during the homogenization, comparison and integration of the models involved in the harmonization projects. This came about as a result of the definition and establishment of incremental iterations, allowing activities to be managed in an agile way; e.g. in harmonization case two this allowed short targets to be established in each iteration, supervision and regular monitoring to be carried out, feedback to be obtained quickly, progress to be measured over short periods of time, and the results obtained in each iteration to be integrated continuously. Without an iterative and incremental approach, this would have been impossible.
- Management focused on, and directed by, harmonization objectives aligned with business needs allows companies to obtain results according to those needs. HFramework includes activities that support the definition of a harmonization proposal based on the business necessities and the prioritized harmonization requirements.
- Applying the transitive property of equality provides companies and practitioners with a simple approach which helps them find alternative relationships between multiple models and, thereby, extend their harmonization scope.
- There is a risk of subjectivity when making comparisons between models. This occurs because the analysis can be influenced by the knowledge and expertise acquired with other models.
- Although a method to support the integration of models has been defined, more detailed criteria are lacking that would facilitate the integration in other possible situations, as well as expedite decision making.

8. Conclusions

Currently, the wide range of models and standards provides companies with multiple solutions to choose from when deciding which best fits their needs, and also brings them several benefits at different levels: information security, quality management, risks, and best management practices related to information technology, amongst others. In spite of all this, due to the several factors that need to be resolved before an integrated set of processes at both the operational and the management level can be obtained (e.g. ambiguity, incompatibility, terminology, structural differences and overlapping, amongst others), implementing and institutionalizing multiple models is not an easy task. Following this line of thought, environments where multiple models are present are characterized by requiring a greater commitment of effort, time and cost than conventional SPI projects.

HFramework helps to resolve the structural problems between multiple models. It also supports the management and configuration of a harmonization project according to an organization's business needs, and it supports the harmonization of any set of models and/or standards required by an organization. Currently, we are replicating and refining HFramework and its elements in new harmonization projects. The main aim is to perform a study that allows us to determine whether the harmonization framework leads to a reduction in the effort and costs associated with the implementation of a new model when another model is already institutionalized. Since this paper presents only an overview of HFramework and its application, future work will focus on a detailed presentation of case studies and experience reports, along with guidelines for determining harmonization goals.

CIOs (Chief Information Officers) are becoming CPOs (Chief Process Officers); therefore, a solution is necessary that allows companies to radically address their multiple business needs through the management and improvement of their processes, with room for rethinking, rebuilding and boosting the performance of those processes around a wide range of models. We expect that our proposal, along with others, offers organizations the appropriate readiness to face the challenges presented by niche markets around the world.

9. Acknowledgments

César Pardo and Francisco J. Pino acknowledge the contribution of the Universidad del Cauca, where they work as assistant professor and full professor, respectively.

10. References
{"Source-Url": "http://tjfeonline.com/admin/archive/1422.01.20181516628580.pdf", "len_cl100k_base": 6877, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 26129, "total-output-tokens": 8633, "length": "2e12", "weborganizer": {"__label__adult": 0.00035881996154785156, "__label__art_design": 0.0007162094116210938, "__label__crime_law": 0.0006723403930664062, "__label__education_jobs": 0.0080108642578125, "__label__entertainment": 8.45789909362793e-05, "__label__fashion_beauty": 0.00022017955780029297, "__label__finance_business": 0.00540924072265625, "__label__food_dining": 0.000400543212890625, "__label__games": 0.0007004737854003906, "__label__hardware": 0.0006909370422363281, "__label__health": 0.0005779266357421875, "__label__history": 0.00044035911560058594, "__label__home_hobbies": 0.00014090538024902344, "__label__industrial": 0.00077056884765625, "__label__literature": 0.0004703998565673828, "__label__politics": 0.0003788471221923828, "__label__religion": 0.0003962516784667969, "__label__science_tech": 0.051513671875, "__label__social_life": 0.00015270709991455078, "__label__software": 0.027801513671875, "__label__software_dev": 0.89892578125, "__label__sports_fitness": 0.0002465248107910156, "__label__transportation": 0.0004954338073730469, "__label__travel": 0.00024819374084472656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37804, 0.02656]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37804, 0.14706]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37804, 0.9159]], "google_gemma-3-12b-it_contains_pii": [[0, 4586, false], [4586, 8575, null], [8575, 11037, null], [11037, 14533, null], [14533, 17236, null], [17236, 22544, null], [22544, 25315, null], [25315, 29137, null], [29137, 34008, null], [34008, 37804, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4586, true], [4586, 8575, null], [8575, 11037, null], [11037, 14533, null], [14533, 17236, null], [17236, 22544, null], [22544, 25315, null], [25315, 29137, null], [29137, 34008, null], [34008, 37804, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37804, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37804, null]], "pdf_page_numbers": [[0, 4586, 1], [4586, 8575, 2], [8575, 11037, 3], [11037, 14533, 4], [14533, 17236, 5], [17236, 22544, 6], [22544, 25315, 7], [25315, 29137, 8], [29137, 34008, 9], [34008, 37804, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37804, 0.27481]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
52b79c92aa196264c93a14b90ab2848498913ebb
The three backticks (```) must be followed by curly brackets "{", and then "r" to tell the computer that you are using R code. This line is then closed off by another curly bracket "}". Anything before three more backticks (```) is then considered R code (a script). If any code in the document has just a backtick `, then some text, then another backtick, then that text is just printed as if it were code, such as `hey`.

I'm reading in the bike lanes here.

```r
# readin is just a "label" for this code chunk
## a code chunk is just a "chunk" of code, where this code usually
## does just one thing, aka a module
### comments are still # here
### you can do all your reading in there
### let's say we loaded some packages
library(stringr)
library(plyr)
library(dplyr)
```

```r
# Attaching package: 'dplyr'
#
# The following objects are masked from 'package:plyr':
#
#     arrange, count, desc, failwith, id, mutate, rename, summarise,
#     summarize
#
# The following objects are masked from 'package:stats':
#
#     filter, lag
#
# The following objects are masked from 'package:base':
#
#     intersect, setdiff, setequal, union
```

```r
fname <- "../data/Bike_Lanes.csv"
bike = read.csv(fname, as.is = TRUE)
```

You can write your introduction here.

## Introduction

Bike lanes are in Baltimore. People like them. Why are they so long?

## Exploratory Analysis

Let's look at some plots of bike length. Let's say we wanted to look at what affects bike length.

### Plots of bike length

Note we made the subsection by using three "hashes" (pound signs): `###`.

We can hide the R code (while still running it) by using `echo = FALSE` in the knitr code chunk. We have a total of 1505 rows.

What does it look like if we take the log (base 10) of the bike length:

```r
no.missyear$log.length <- log10(no.missyear$length)
### see here that if you specify the data argument, you don't need to do the $
boxplot(log.length ~ dateInstalled, data = no.missyear,
        main = "Boxplots of Bike Length by Year",
        xlab = "Year")
```

I want my boxplots colored, so I set the `col` argument.

```r
boxplot(log.length ~ dateInstalled, data = no.missyear,
        main = "Boxplots of Bike Length by Year",
        xlab = "Year", ylab = "Bike Length",
        col = "red")
```

As we can see, 2006 had a much higher bike length. What about the type of bike path?

```r
### type is a character, but when R sees a "character" in a "formula",
### then it automatically converts it into a factor
### a formula is something that has a y ~ x, which says I want to plot y against x
### or if it were a model you would do y ~ x, which means regress y against x
boxplot(log.length ~ type, data = no.missyear,
        main = "Boxplots of Bike Length by Type",
        xlab = "Type", ylab = "Bike Length")
```

What if we want to extract means by each type?
Let's show a few ways:

```r
### tapply takes in vector 1, then does a function on it within each level
### of vector 2, and you tell it what that function is
tapply(no.missyear$log.length, no.missyear$type, mean)
```

```r
##       BIKE LANE      CONTRAFLOW SHARED BUS BIKE         SHARROW
##        2.330611        2.087246        2.363005        2.256425
##        SIDEPATH    SIGNED ROUTE
##        2.781829        2.263746
```

```r
### aggregate
aggregate(x = no.missyear$log.length, by = list(no.missyear$type), FUN = mean)
```

```r
##           Group.1        x
## 1       BIKE LANE 2.330611
## 2      CONTRAFLOW 2.087246
## 3 SHARED BUS BIKE 2.363005
## 4         SHARROW 2.256425
## 5        SIDEPATH 2.781829
## 6    SIGNED ROUTE 2.263746
```

```r
### now let's specify the data argument and use a "formula" - much easier to read
### and more "intuitive"
aggregate(log.length ~ type, data = no.missyear, FUN = mean)
```

```r
##              type log.length
## 1       BIKE LANE   2.330611
## 2      CONTRAFLOW   2.087246
## 3 SHARED BUS BIKE   2.363005
## 4         SHARROW   2.256425
## 5        SIDEPATH   2.781829
## 6    SIGNED ROUTE   2.263746
```

```r
## ddply is from the plyr package
## it takes in a data frame (the first d refers to data.frame),
## splits it up by some variables (let's say type),
## then we'll use summarise to summarize whatever we want,
## and then returns a data.frame (the second d) - hence why it's ddply
## if we wanted to do it on a "list" then return a data.frame, it'd be ldply
ddply(no.missyear, .(type), plyr::summarise, mean = mean(log.length))
```

```r
##              type     mean
## 1       BIKE LANE 2.330611
## 2      CONTRAFLOW 2.087246
## 3 SHARED BUS BIKE 2.363005
## 4         SHARROW 2.256425
## 5        SIDEPATH 2.781829
## 6    SIGNED ROUTE 2.263746
```

```r
no.missyear %>% group_by(type) %>% dplyr::summarise(mean = mean(log.length))
```

```r
## Source: local data frame [6 x 2]
##
##              type     mean
## 1       BIKE LANE 2.330611
## 2      CONTRAFLOW 2.087246
## 3 SHARED BUS BIKE 2.363005
## 4         SHARROW 2.256425
## 5        SIDEPATH 2.781829
## 6    SIGNED ROUTE 2.263746
```

ddply (and other functions in the plyr package) is cool because you can compute multiple summaries really easily.
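For instance, here is a sketch of computing several summaries at once (still with the same `no.missyear` data as above; the `sd` and `n` columns are just our illustration):

```r
ddply(no.missyear, .(type), plyr::summarise,
      mean = mean(log.length),    # same mean as above
      sd   = sd(log.length),      # spread within each type
      n    = length(log.length))  # how many lanes of each type
```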
Let's show what happens if we want to go over both type and dateInstalled:

```r
## For going over 2 variables, we need to do it over a "list" of vectors
tapply(no.missyear$log.length,
       list(no.missyear$type, no.missyear$dateInstalled),
       mean)
```

<table> <thead> <tr> <th></th> <th>2006</th> <th>2007</th> <th>2008</th> <th>2009</th> <th>2010</th> <th>2011</th> </tr> </thead> <tbody> <tr> <td>BIKE LANE</td> <td>3.046261</td> <td>2.351256</td> <td>2.365728</td> <td>2.381418</td> <td>2.306994</td> <td>2.242132</td> </tr> <tr> <td>CONTRAFLOW</td> <td>NA</td> <td>NA</td> <td>NA</td> <td>NA</td> <td>2.087246</td> <td>NA</td> </tr> <tr> <td>SHARED BUS BIKE</td> <td>NA</td> <td>NA</td> <td>NA</td> <td>2.350759</td> <td>2.403824</td> <td>NA</td> </tr> <tr> <td>SHARROW</td> <td>2.300954</td> <td>2.220850</td> <td>2.691814</td> <td>2.247131</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>SIDEPATH</td> <td>NA</td> <td>NA</td> <td>2.625486</td> <td>NA</td> <td>2.773850</td> <td>3.266816</td> </tr> <tr> <td>SIGNED ROUTE</td> <td>NA</td> <td>2.287593</td> <td>NA</td> <td>NA</td> <td>2.239475</td> <td>2.210112</td> </tr> </tbody> </table>

<table> <thead> <tr> <th></th> <th>2012</th> <th>2013</th> </tr> </thead> <tbody> <tr> <td>BIKE LANE</td> <td>2.36151</td> <td>2.408306</td> </tr> <tr> <td>CONTRAFLOW</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>SHARED BUS BIKE</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>SHARROW</td> <td>2.23636</td> <td>NA</td> </tr> <tr> <td>SIDEPATH</td> <td>NA</td> <td>NA</td> </tr> <tr> <td>SIGNED ROUTE</td> <td>NA</td> <td>NA</td> </tr> </tbody> </table>

```r
tapply(no.missyear$log.length,
       list(no.missyear$type, no.missyear$dateInstalled),
       mean, na.rm = TRUE)
```

```r
aggregate(log.length ~ type + dateInstalled, data = no.missyear, FUN = mean)
```

```r
aggregate(log.length ~ dateInstalled + type, data = no.missyear, FUN = mean)
```

OK, let's do a linear model:

```r
## type is a character, but when R sees a "character" in a "formula",
## then it automatically converts it into a factor
## a formula is something that has a y ~ x, which says I want to plot y against x
## or if it were a model you would do y ~ x, which means regress y against x
mod.type = lm(log.length ~ type, data = no.missyear)
mod.yr = lm(log.length ~ factor(dateInstalled), data = no.missyear)
mod.yrtype = lm(log.length ~ type + factor(dateInstalled), data = no.missyear)
summary(mod.type)
```

```r
## Call:
## lm(formula = log.length ~ type, data = no.missyear)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -1.51498 -0.19062  0.02915  0.23220  1.31021
##
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)
## (Intercept)          2.33061    0.01487 156.703  < 2e-16 ***
## typeCONTRAFLOW      -0.24337    0.10288  -2.366 0.018127 *
## typeSHARED BUS BIKE  0.03239    0.06062   0.534 0.593194
## typeSHARROW         -0.07419    0.02129  -3.484 0.000509 ***
## typeSIDEPATH         0.45122    0.15058   2.997 0.002775 **
## typeSIGNED ROUTE    -0.06687    0.02726  -2.453 0.014300 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 0.367 on 1499 degrees of freedom
## Multiple R-squared:  0.01956, Adjusted R-squared:  0.01629
## F-statistic:  5.98 on 5 and 1499 DF,  p-value: 1.74e-05
```

That's rather UGLY, so let's use a package called `xtable`; we make this model into an `xtable` object and then print it out nicely.
```r
require(xtable)
## Loading required package: xtable
smod <- summary(mod.yr)
xtab <- xtable(mod.yr)
```

Well `xtable` can make HTML tables, so let's print this. We must tell R that the result is actually HTML output, so we say the results should be embedded in the HTML "as is" (aka just print out whatever R spits out).

```r
print.xtable(xtab, type="html")
```

| | Estimate | Std. Error | t value | Pr(>\|t\|) |
|---|---|---|---|---|
| (Intercept) | 3.0463 | 0.2600 | 11.71 | 0.0000 |
| factor(dateInstalled)2007 | | | | |
| factor(dateInstalled)2008 | -0.7808 | 0.2613 | -2.99 | 0.0029 |
| factor(dateInstalled)2009 | -0.6394 | 0.2631 | -2.43 | 0.0152 |
| factor(dateInstalled)2010 | -0.7791 | 0.2605 | -2.99 | 0.0028 |
| factor(dateInstalled)2011 | -0.8022 | 0.2626 | -3.05 | 0.0023 |
| factor(dateInstalled)2012 | -0.7152 | 0.2625 | -2.72 | 0.0065 |
| factor(dateInstalled)2013 | -0.6380 | 0.2849 | -2.24 | 0.0253 |

OK, that's pretty good, but let's say we have all three models. Another package called `stargazer` can put models together easily and print them out. So `xtable` is really good when you are trying to print out a table (in HTML; otherwise make the table and use `write.csv` to get it into Excel and then format) really quickly and in a report. But it doesn't work so well with many models together, so let's use `stargazer`. Again, you need to run `install.packages("stargazer")` if you don't have the package.

OK, so what's the difference here? First off, we said the results are "markup", so that it will not try to reformat the output. Also, I didn't want those # for comments, so I just made comment an empty string "".

```r
stargazer(mod.yr, mod.type, mod.yrtype, type="text")
```

| Dependent variable: log.length | (1) | (2) | (3) |
|---|---|---|---|
| factor(dateInstalled)2007 | -0.733*** (0.261) | | -0.690*** (0.259) |
| factor(dateInstalled)2008 | -0.781*** (0.261) | | -0.742*** (0.260) |
| factor(dateInstalled)2009 | -0.639** (0.263) | | -0.619** (0.262) |
| factor(dateInstalled)2010 | -0.779*** (0.260) | | -0.736*** (0.259) |
| factor(dateInstalled)2011 | -0.802*** (0.263) | | -0.790*** (0.261) |
| factor(dateInstalled)2012 | -0.715*** (0.262) | | -0.700*** (0.261) |
| factor(dateInstalled)2013 | -0.638** (0.285) | | -0.638** (0.283) |
| typeCONTRAFLOW | | -0.243** (0.103) | -0.224** (0.103) |
| typeSHARED BUS BIKE | | 0.032 (0.061) | -0.037 (0.069) |
| typeSHARROW | | -0.074*** (0.021) | -0.064*** (0.023) |
| typeSIDEPATH | | 0.451*** (0.151) | 0.483*** |
| typeSIGNED ROUTE | | -0.067** (0.027) | -0.067** (0.029) |
| Constant | 3.046*** (0.260) | 2.331*** (0.015) | 3.046*** (0.258) |
| Observations | 1,505 | 1,505 | 1,505 |
| R2 | 0.017 | 0.020 | 0.033 |
| Adjusted R2 | 0.012 | 0.016 | 0.026 |
| Residual Std. Error | 0.368 (df = 1497) | 0.367 (df = 1499) | 0.365 (df = 1492) |
| F Statistic | 3.691*** (df = 7; 1497) | 5.980*** (df = 5; 1499) | 4.285*** (df = 12; 1492) |

Note: \*p<0.1; \*\*p<0.05; \*\*\*p<0.01

If we use `stargazer(mod.yr, mod.type, mod.yrtype, type="html")`, we get the same coefficients, standard errors and fit statistics rendered as an HTML table instead.
### Data Extraction

Let's say I want to get data INTO my text. Like: there are N bike lanes with a date installed that isn't zero. There are 1505 bike lanes with a date installed after 2006. So you use one backtick (`) and then you say "r" to tell knitr that it's R code. And then you run R code that gets evaluated and returns the value. Let's say you want to compute a bunch of things:

```r
### let's get number of bike lanes installed by year
n.lanes = ddply(no.missyear, .(dateInstalled), nrow)
names(n.lanes) <- c("date", "nlanes")
n2009 <- n.lanes$nlanes[ n.lanes$date == 2009]
n2010 <- n.lanes$nlanes[ n.lanes$date == 2010]
getwd()
```

Now I can just say there are 86 lanes in 2009 and 625 in 2010.
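In the R Markdown source those numbers are produced with inline code rather than typed by hand. A minimal sketch of how that sentence could be written, using the `n2009` and `n2010` objects from the chunk above:

```
There are `r n2009` lanes in 2009 and `r n2010` lanes in 2010.
```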
```r
fname <- "../../data/Charm_City_Circulator_Ridership.csv"
# fname <- file.path(data.dir, "Charm_City_Circulator_Ridership.csv")
## file.path takes a directory and makes a full name with a full file path
charm = read.csv(fname, as.is=TRUE)
library(chron)
days = levels(weekdays(1, abbreviate=FALSE))
charm$day <- factor(charm$day, levels=days)
charm$date <- as.Date(charm$date, format="%m/%d/%Y")
cn <- colnames(charm)
daily <- charm[, c("day", "date", "daily")]
```

```r
charm$daily <- NULL
require(reshape)
```

```r
## Loading required package: reshape
## 
## Attaching package: 'reshape'
## 
## The following object is masked from 'package:dplyr':
## 
##     rename
## 
## The following objects are masked from 'package:plyr':
## 
##     rename, round_any
```

```r
long.charm <- melt(charm, id.vars = c("day", "date"))
long.charm$type[ grepl("Boardings", long.charm$variable)] <- "Boardings"
long.charm$type[ grepl("Alightings", long.charm$variable)] <- "Alightings"
long.charm$type[ grepl("Average", long.charm$variable)] <- "Average"

long.charm$line[ grepl("orange", long.charm$variable)] <- "orange"
long.charm$line[ grepl("purple", long.charm$variable)] <- "purple"
long.charm$line[ grepl("green", long.charm$variable)] <- "green"
long.charm$line[ grepl("banner", long.charm$variable)] <- "banner"
long.charm$variable <- NULL
long.charm$line <- factor(long.charm$line, levels=c("orange", "purple", "green", "banner"))

head(long.charm)
```

```r
##         day       date value      type   line
## 1    Monday 2010-01-11   877 Boardings orange
## 2   Tuesday 2010-01-12   777 Boardings orange
## 3 Wednesday 2010-01-13  1203 Boardings orange
## 4  Thursday 2010-01-14  1194 Boardings orange
## 5    Friday 2010-01-15  1645 Boardings orange
## 6  Saturday 2010-01-16  1457 Boardings orange
```

```r
### NOW R has a column of day, the date, a "value", the type of value and the
### circulator line that corresponds to it
### value is now either the Alightings, Boardings, or Average from the charm dataset
```

Let's do some plotting now!

```r
require(ggplot2)
```

```r
## Loading required package: ggplot2
## Warning: package 'ggplot2' was built under R version 3.2.3
```

```r
### let's make a "ggplot"
### the format is ggplot(dataframe, aes(x=COLNAME, y=COLNAME))
### where COLNAME are colnames of the dataframe
### you can also set color to a different factor
### other options in aes (fill, alpha level - which is the "transparency" of points)
g <- ggplot(long.charm, aes(x=date, y=value, color=line))

### let's change the colors to what we want - doing this manually, not letting it
### choose for me
g <- g + scale_color_manual(values=c("orange", "purple", "green", "blue"))

### plotting points
g + geom_point()
```

```r
## Warning: Removed 5328 rows containing missing values (geom_point).
```

```r
### Let's make lines!
g + geom_line()
```

```r
## Warning: Removed 5043 rows containing missing values (geom_path).
```

```r
### let's make a new plot of points
gpoint <- g + geom_point()

### let's plot the value by the type of value - boardings/average, etc
gpoint + facet_wrap(~ type)
```

```r
## Warning: Removed 5328 rows containing missing values (geom_point).
```

OK let's turn off some warnings - making `warning=FALSE` (in knitr) as an option.

```r
## let's compare vertically
gfacet <- gpoint + facet_wrap(~ type, ncol=1)
gfacet
```

We can also smooth the data to give us an overall idea of how the average changes over time. I don't want to do a standard error (se).

```r
## let's smooth this - get a rough estimate of what's going on
gfacet + geom_smooth(se=FALSE)
```

OK, I've seen enough code, let's turn that off, using `echo=FALSE`. There are still messages, but we can turn these off with `message = FALSE`.
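Those options are set in the chunk header. A minimal sketch of what such a header could look like in the R Markdown source (the chunk label `smooths` is just an illustrative name):

````
```{r smooths, echo=FALSE, warning=FALSE, message=FALSE}
gfacet + geom_smooth(se=FALSE)
```
````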
{"Source-Url": "http://aejaffe.com/winterR_2016/Knitr/lecture/Knitr.pdf", "len_cl100k_base": 6065, "olmocr-version": "0.1.53", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 45949, "total-output-tokens": 7080, "length": "2e12", "weborganizer": {"__label__adult": 0.000316619873046875, "__label__art_design": 0.0007786750793457031, "__label__crime_law": 0.0003509521484375, "__label__education_jobs": 0.0010061264038085938, "__label__entertainment": 0.00016927719116210938, "__label__fashion_beauty": 0.0001558065414428711, "__label__finance_business": 0.0005207061767578125, "__label__food_dining": 0.00034356117248535156, "__label__games": 0.0008120536804199219, "__label__hardware": 0.0009765625, "__label__health": 0.00025582313537597656, "__label__history": 0.00038814544677734375, "__label__home_hobbies": 0.00048613548278808594, "__label__industrial": 0.0007462501525878906, "__label__literature": 0.0003690719604492187, "__label__politics": 0.0002872943878173828, "__label__religion": 0.00034499168395996094, "__label__science_tech": 0.050201416015625, "__label__social_life": 0.0002779960632324219, "__label__software": 0.1131591796875, "__label__software_dev": 0.826171875, "__label__sports_fitness": 0.0006475448608398438, "__label__transportation": 0.0007615089416503906, "__label__travel": 0.0003752708435058594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16307, 0.15028]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16307, 0.22275]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16307, 0.72019]], "google_gemma-3-12b-it_contains_pii": [[0, 1241, false], [1241, 1621, null], [1621, 1972, null], [1972, 2267, null], [2267, 3352, null], [3352, 5004, null], [5004, 5992, null], [5992, 6591, null], [6591, 8055, null], [8055, 8875, null], [8875, 10726, null], [10726, 11070, null], [11070, 11690, null], [11690, 12142, null], [12142, 13865, null], [13865, 15403, null], [15403, 15521, null], [15521, 15774, null], [15774, 15925, null], [15925, 16163, null], [16163, 16307, null], [16307, 16307, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1241, true], [1241, 1621, null], [1621, 1972, null], [1972, 2267, null], [2267, 3352, null], [3352, 5004, null], [5004, 5992, null], [5992, 6591, null], [6591, 8055, null], [8055, 8875, null], [8875, 10726, null], [10726, 11070, null], [11070, 11690, null], [11690, 12142, null], [12142, 13865, null], [13865, 15403, null], [15403, 15521, null], [15521, 15774, null], [15774, 15925, null], [15925, 16163, null], [16163, 16307, null], [16307, 16307, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16307, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, 
false], [5000, 16307, null]], "pdf_page_numbers": [[0, 1241, 1], [1241, 1621, 2], [1621, 1972, 3], [1972, 2267, 4], [2267, 3352, 5], [3352, 5004, 6], [5004, 5992, 7], [5992, 6591, 8], [6591, 8055, 9], [8055, 8875, 10], [8875, 10726, 11], [10726, 11070, 12], [11070, 11690, 13], [11690, 12142, 14], [12142, 13865, 15], [13865, 15403, 16], [15403, 15521, 17], [15521, 15774, 18], [15774, 15925, 19], [15925, 16163, 20], [16163, 16307, 21], [16307, 16307, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16307, 0.13995]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
bed0eeac1c665af5cc5dda689d811ecde70ad179
A multi-logic framework for multi-level fusion in real time data fusion applications

José Gomes de Carvalho Jr., PEC COPPE / IPqM, Federal University of Rio de Janeiro / Navy Research Institute, Rio de Janeiro, Brazil. gomes@ipqm.mar.mil.br

Pablo Rangel, PESC COPPE / IPqM, Federal University of Rio de Janeiro / Navy Research Institute, Rio de Janeiro, Brazil. pablorangel@cos.ufrj.br

Nelson F. Ebeken, PEC COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil. nelson.ebeken@ntt.ufrj.br

Abstract – This paper presents a framework called Real Time Multi Logic Reasoner (RT-MLR) to perform high level data fusion. RT-MLR reconciles the development of Knowledge Based Systems (KBS) with object-oriented architectures and on-line real-time monitoring requirements. The framework has been implemented with an inference engine that supports a data-driven approach and knowledge expressed in First-Order Logic, Fuzzy Logic and Temporal Logic. RT-MLR has been tested with respect to its capability to perform data fusion in a military application, using all the kinds of logic implemented. Data fusion at different levels was performed through rules that express relations between objects inside naval context scenarios. Simulated and actual data were used in the validation process.

Keywords: high level fusion, multiple logic reasoning, real time monitoring.

1 Introduction

Data fusion is a widely studied process that includes many techniques with different objectives [1] [2] [3]. The JDL model [4], which separates the process into levels according to planned objectives, is usually adopted. Processes at level 0 perform data conversion, filtering and all the management necessary to produce data as reliable as possible at the sensor level. Object processing is done at level 1, which mainly includes estimating the position, speed and acceleration of tracked objects. High level fusion processes (levels 2 and 3 of the JDL model) involve situational assessment and threat refinement. These processes usually involve decisions that were previously made by humans. In military applications, decisions must comply with doctrines that specify procedures in different contexts. The process must be understandable and decisions must be justifiable. These requirements indicate the use of techniques that deal with explicit knowledge in a human-comprehensible logic. Knowledge Based Systems (KBS) are designed to provide inferences that may be explained to human operators, since they are based on logic rules.

Another characteristic of military applications is the amount of information that must be treated in real time. These applications are generally designed in an object-oriented architecture by software engineers. However, the greatest part of the intelligence embedded into the systems is provided by experts who do not know the architectural details.

The main contribution of this work is a framework that allows achieving high-level fusion, expressing complex domain knowledge and relations through rules that support multiple logics. The framework reconciles OO modeling and the KBS approach in a data-driven reasoner. The issues of KBS are detailed in section 2, particularly the compliance with OO systems and real-time monitoring. Details of the implemented framework are provided in section 3. Section 4 explains the approach adopted to apply the framework to a naval Command, Control, Communications, Computer, Intelligence, Surveillance and Reconnaissance (C4ISR) system. Sections 5 and 6 present results and conclusions.
2 Applying knowledge based systems

Intelligent applications usually have a complex domain. In such systems, a hierarchical approach based on languages like UML, which support object-oriented architectures, is commonly adopted. Moreover, in domains that need to establish restrictions on properties such as transitivity and cardinality, ontological languages like OWL tend to be the first choice. These languages are particularly useful in open-domain applications like those designed for the web environment and working with text mining. Ontological languages usually have engines that make automatic inferences from the hierarchies provided by classes and from the restrictions imposed on relations. These inferences may be started by an explicit query made in the domain application or by an automatic process triggered by modifications in the working memory where facts lie.

However, in complex domains reasoning is not restricted to objects in a hierarchy. So, it is always useful to have a mechanism to make inferences from business rules [5].

Another problem is the expressiveness of the knowledge represented by rules. To improve the coverage of the application domain and reduce the number of rules needed to represent knowledge, it is useful to have rules that implement different logics. Fuzzy logic, for example, enables a more intuitive partition of the application domain, expressing the knowledge of human experts more concisely. Temporal logic, on the other hand, is useful for behaviors that can only be identified by considering events in relation to the time they occurred. This is particularly important when the behavior to be considered is evolutionary in nature. In a medical system, for example, the diagnosis and treatment of a patient depend on a sequence of evidence and events related in time that can characterize a specific pathology and recommend a specific therapeutic procedure [6]. In a C4ISR system there are also behaviors that can be detected, and countermeasures that can be suggested, which are related to temporal sequences of events.

Another important aspect in the representation of knowledge is the ability to deal with values that are constantly changing, which is typical of real-time systems. Such characteristics are typical of military systems, control systems for industrial plants and systems for monitoring hospital patients. In these situations, the knowledge used in the premises of rules is constantly changing, resulting in the need for constant re-evaluation of these rules.

Several models have been proposed for the inclusion of production rules in systems that use procedural modeling [7, 8]. Some ontology languages such as LOOM [9] and OCML [10] allow the inclusion of first-order logic production rules (detailed comparisons are made in [8] and [11]). The advantage of these and other ontology specification languages, such as FLogic [12] and Ontolingua [13], is that a complete ontology specification is allowed. These ontology languages include concepts that are commonly found in UML-like languages, such as classes, inheritance, polymorphism, instantiation, strong typing and encapsulation. But they also include concepts that are not found in UML, such as facets of properties (transitivity and cardinality) or operations on sets such as union, intersection and complement (a comparison of the correspondence between UML and ontological languages can be found in [14]).
Another approach is taken by frameworks that allow the specification of production rules in object-oriented architectures implemented in Java (like DROOLS, ILOG JRules, Jess and Hammurapi Rules). These frameworks generally follow the JSR-94 standard [15]. The standard defines operations to insert, remove or modify facts in a working memory. Inferences are made by queries over the rules and the base of facts (working memory). The idea is to separate the knowledge of the business itself into a set of rules that are easily maintained and debugged by business experts, without them needing to know the architecture used in the rest of the application. The JSR-94 standard does not define a syntax for the language in which the rules are expressed, so each framework defines its own file format and rule syntax. RT-MLR takes another approach to the coupling of the procedural and declarative paradigms, as we shall see in section 3.

### 2.1 KBS restrictions in real-time systems

In KBS there are two key elements: the knowledge expressed in rules and the base of facts (known data). Systems that seek to reconcile the declarative and procedural paradigms (DROOLS and Hammurapi Rules are examples) maintain a base of facts that co-exists with the application data modeled with OO techniques. It is up to the application to update the working memory used by the inference engine when a new external fact has to be included or excluded. Naturally, the activation of the inference engine also generates new facts that are automatically added to the working memory. More specifically, a fact is a piece of true knowledge, such as: *Bob is a 20 years old single man.*

The inference engines implement a forward- or backward-chaining strategy to search for the rules that can be fired by the facts in the working memory. The inferences generate new facts that are added to the working memory. However, if there is an external change of facts, such as information received from external sensors, it is necessary to remove the fact that is no longer true and add the new fact to the working memory. Thus, if the status of Bob changes from single to married, the former fact must be removed and the new fact must be included in the base: *Bob is a married man of 20 years of age.*

In real-time systems the information varies quickly, so the removal and insertion of facts can make the search algorithms quite costly. Consider, for example, that the fact to be maintained is: *Bob is a married man of 20 years of age with 75 beats per minute.* If the monitoring system continuously reads the heartbeat of each patient, then we have many new facts constantly being removed and inserted. A monitoring system deals with hundreds of objects, each of them with dozens of attributes. Therefore, there will be thousands of constantly changing data items. The efficiency of the search strategy is directly related to the amount of information in the knowledge base. So, the high frequency of changes in a real-time monitoring system makes the insertion and removal of facts in working memory costly. In the implemented framework, the data used in the rules have different granularity, and changes made to domain attributes directly trigger the rules of interest, as we shall see in section 3.

### 2.2 Traditional reasoning in KBS

In several KBS, after each change in the base of facts, the inference engine searches for rules that can be triggered by the inclusion of the new fact. The fired rules can add new facts to the knowledge base and fire more rules in a recurring process.
This strategy of inference is called data-driven or forward chaining. Most implementations of inference engines (OPS5, ART, CLIPS, Jess, DROOLS and others) use the RETE algorithm [16], which is based on two observations:

• The firing of a rule usually changes only a few facts, affecting only a few rules for each of those changes.
• The same patterns appear on the left-hand side of different rules.

The RETE algorithm examines the rules and creates a rooted acyclic graph (the Rete) where nodes represent tests of one or two values, which are the patterns that appear on the left-hand side of the rules. In building this graph, it is possible to identify the same patterns appearing in different rules and simplify the graph. The engines that use the RETE algorithm pre-process the rules and build this graph. These engines also provide a mechanism to delete and add new facts in the knowledge base. After each inclusion they search the root nodes to match the new fact, and from there, information flows down the graph until it reaches the leaves of the network, when the rules are finally fired.

The simplification provided by the RETE algorithm comes from the memory capacity of the network nodes. If only the right-side value of an operator is re-calculated by the investigation of a node, then the value of the left side has not changed and the value stored in the left-side memory can be used. If some new fact later changes the value of the left side of this node, then a new inference will be made using the value stored in the right-side memory. Each new fact included in the knowledge base is filtered by the network until it reaches leaf nodes, thus causing the activation of rules.

However, as usually occurs in real-time systems, if several facts are generated almost at the same time (in a multi-threaded monitoring system) before the engine can complete the inferences fired by the first one, then the decisions may not be up to date with the current reality. Suppose, for example, that a new fact \( F_1 \) is generated and triggers the inference engine. Suppose further that immediately afterwards asynchronous updates of data occur in the application database. This is equivalent to the generation of new facts, for example, a fact \( F_2 \). Each fact included in the working memory will fire the inference engine, but the whole inference process is not concurrent, so each inclusion-inference pair will be processed sequentially. The event that generated \( F_2 \) will lead to a further search of the network, but for the moment it waits while fact \( F_1 \) is propagated through the network. Eventually the fact \( F_2 \) may be used in the premise of a rule in which the fact \( F_1 \) is also used. Suppose that \( F_1 \) appears on the left side and \( F_2 \) on the right side of an operator in this rule. For this rule, the engine will use the new value from \( F_1 \), but will use the value stored for the fact \( F_2 \) in the right-side memory, since the update of \( F_2 \) has not yet been delivered by the application to the working memory. This can generate an outdated decision, since the event that generated \( F_2 \) had already occurred but was not considered. Naturally, the fact \( F_2 \) will be considered subsequently and the decision of the previously investigated rule will eventually be reconsidered, but the decision at the first moment was not the most appropriate, since the values used do not correspond to the most up-to-date snapshot of the real-time application domain.
The proposed methodology doesn’t work with a parallel memory, neither for “facts” (working memory) nor for inference in rules (right and left side operator memory). Instead, data considered are always read from the application domain, as will be explained in section 3. 2.3 Extending the expressiveness The frameworks that combine declarative knowledge with OO-domain, which were mentioned in Section 2.1, generally include only first-order logic. A known exception is DROOLS, which recently released an extension to include reasoning with temporal logic. The inference engine of the implemented framework works with rules expressing first-order logic, fuzzy logic and temporal logic, as we shall see in section 3. 3 The framework architecture This section details the solutions adopted in RT-MLR. 3.1 The objectives The RT-MLR aims to integrate the procedural and declarative programming in real-time applications modeled with an object-oriented architecture. The main objectives are: • Combining the hierarchical structure of OO applications with the KBS, facilitating the expression of expert knowledge in complex domains. • Integrating the knowledge base used for inferences with the application domain database. • Firing rules automatically after significant changes in the domain database fields. • Increasing syntactic and semantic integration expressing rules in the language used in application domain. • Increasing the expressiveness of system enabling inference over rules that use first order logic, fuzzy logic and temporal logic. 3.2 Application design approach The RT-MLR offers an integrative approach for the coupling of declarative and procedural paradigms. The rules are seen as something that interacts with the domain of the problem. However, this interaction is neither so close that is necessary to know about code sequences in the domain classes nor so far that isn’t necessary to know which data are modeled in the domain. Rules are modeled as association classes that relate the domain objects. In fact, in many cases, association classes have a set of rules that allow expressing some particular aspect of domain. They relate object attributes of a particular class with object attributes of another class, respecting the separation between the declarative and procedural knowledge. The classes of rules have a simple construction that does not require deep knowledge about the rest of the modeling and does not require a proprietary syntax. The rules are written in their own programming language, using logical operators that are resources provided by the framework. Thus, Boolean logic operators, fuzzy logical operators and temporal operators are provided as framework resources to be called in the body of the rules. Moreover, rules use the syntax of the native language (Java, for example). Java is cited as an example because the package was developed for Java, but nothing prevents there are other implementations for C++ or any other object-based language. ### 3.3 Multi logic reasoning For first-order logic, RT-MLR works with the closed world assumption in a non monotonic logic and also with the unique name assumption. Marc Dörfinger [17] proposed an inference motor for Courteous Logic [18] using labels to determine precedence of rules, which is an important issue in non monotonic logics. Into RT-MLR the precedence is only defined by the sequence the rules are added to a rule set. Those rules that are inserted later have higher precedence. 
As in Courteous Logic, the hierarchy solves the problem of mutual exclusion, which is the existence of conflicting conclusions. In RT-MLR the final conclusion comes from the highest-priority rule. The sets of rules are created first and then each rule is added to one of the sets. In a typical application, it is probably necessary to create only one set for all rules of first-order logic and temporal logic; the set exists only to define the precedence of the rules. In the case of fuzzy rules, each set should aggregate the rules that relate to the same output variables (the variables that appear in the conclusion of the rules).

### 3.4 On line monitoring

One of the goals of the RT-MLR is to adapt the inference engine to real-time object-oriented applications. Some simplifying assumptions are made. The main assumption is that there is no base of facts (working memory) parallel to the application database. Figure 1 illustrates the concept.

![Figure 1 – The working memory in RT-MLR](image)

The knowledge base consists of attributes that have already been defined in the application domain and are not replicated in a parallel base. The approach is a closed world of knowledge, where facts are values of attributes and it is assumed that these facts always exist with some value.

The second important feature concerns the values used for inferences. The RT-MLR does not use values stored in a replicated working memory for inferences. The rules use the attributes of objects in the real-time application domain as the facts to be considered for inference. Thus, changes of values are frequent and are generated by asynchronous data acquisition processes in the application domain. Therefore, to guarantee the best possible inference at each moment, the algorithm pays the price of always reading the current values of the domain attributes at the precise time the rule is being investigated. The rules do not have an associated memory.

RT-MLR directly associates attributes in the application domain with the rules that use those attributes. Thus, only the rules that use those attributes are investigated when a significant change is detected in the value of the attributes. The attributes of domain classes that are used in the rule classes should not be defined by the application as attributes of Java basic types. They must be defined by inheriting from classes provided by RT-MLR that encapsulate the basic types of Java (numeric, Boolean, etc.). When the values of these attributes are changed by any calling object in the application domain, the rules associated with that particular attribute are directly investigated. Furthermore, users can establish a confidence interval for attribute values in order to define the minimum variation accepted as relevant for rule investigation. Thus, small variations in values that are not considered relevant will not cause rule investigations. RT-MLR is multi-thread safe, as usual in real-time domains.

#### Fuzzy logic inference

Fuzzy logic is a theory proposed by Zadeh [19] that works with linguistic concepts. Logic rules use these linguistic concepts for inference. The linguistic concepts are modeled with membership functions that establish a degree of truth, or degree of membership, of the linguistic concept over a value domain. The membership, ranging from 0 to 1, is used by the inference engine to fire the rules. By dealing only with concepts such as near, far, critical, slow, fast, big or small, the rules allow greater expressiveness in transcribing common-sense knowledge.
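As an illustration (ours, not the paper's), a membership function for a concept such as *near* could be a simple decreasing ramp; the 10 km support below is an invented value:

$$
\mu_{near}(d) = \begin{cases} 1 - d/10 & \text{if } 0 \le d \le 10 \text{ km} \\ 0 & \text{if } d > 10 \text{ km} \end{cases}
$$

Under this function, a target at a distance of 4 km is *near* with membership $\mu_{near}(4) = 0.6$.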
For fuzzy logic, the investigation of the rules in RT-MLR is also linked to changes in the values of the variables that appear in the premises of the rules. The left side of each fuzzy rule is evaluated based on the fuzzy variables and fuzzy operators. If the membership of the premise is greater than zero, the rule is fired, generating a fuzzy output function for each fired rule. The output functions of the fired rules are aggregated and the final function is then defuzzified to produce a scalar value for the output variable. For each fuzzy output variable it is necessary to define a set of rules in advance. All the fuzzy rules of the set are investigated when one of them is fired: the values in the premises of the other rules may have remained unchanged, but their current values may still fire the respective rules, thereby influencing the final value of the output variable.

Temporal logic inference

There are several approaches proposed in the literature to consider the time domain in reasoning models. The RT-MLR uses the concept of Interval Temporal Logic [20, 21]. The basic element used is the event, which can be viewed as a fact associated with a given time; in effect, an event is a value with an associated time stamp. Since events must be related to attributes previously defined with the framework data types, it is not necessary for the application to define events explicitly. Instead, the events are automatically defined and memorized by the framework as the temporal operators dealing with events are instantiated in logical rules.

There is a set of temporal operators that perform operations like ThereWasEqual\((A, B, T_1, T_2)\). This operator returns true if, at any time in the past between time \(T_1\) and time \(T_2\), the value of \(A\) was equal to the value of \(B\). Similarly, there are other comparison operators, counting operators, mean-value operators and operators that establish relations like before, after, during and until, always associated with time intervals in the past. Since the operators appear in rules, the relevant events are registered by the framework for exactly the maximum time they need to remain memorized for investigation purposes. There is no separate set of temporal logic rules: temporal logic operators are used in first-order logic rules, since their results are also Boolean or numeric.

4 Performing data fusion with RT-MLR

Using RT-MLR to perform data fusion does not mean establishing a clear hierarchy in data fusion. In fact, for the RT-MLR such a hierarchy does not exist from a conceptual point of view. The main goal is to use the RT-MLR to implement inferences from features and behaviors detected in a C4ISR system, with sets of logic rules that capture relationships based on expert knowledge. From a classical perspective, those inferences may be classified as level 1, level 2 or level 3 data fusion processes, but this does not matter from an architectural point of view.

The architecture of the system where fusion was applied imposes some restrictions. For example, object fusion must be done at track level, not at measurement level, since each sensor has a node that performs level 0 fusion, provides Kalman filter estimation for object kinematics and delivers tracks to a central fusion node. The RT-MLR lies in this node together with a fusion application that specifies the rules and reports conclusions to a visualization node.
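Before turning to the fusion examples, it may help to pin down the temporal operators of Section 3.4 more formally. A possible reading of ThereWasEqual, under the assumption that \(T_1\) and \(T_2\) are offsets in seconds into the past relative to the current time \(t_{now}\) (this formalization is ours, not the paper's):

$$
\mathrm{ThereWasEqual}(A, B, T_1, T_2) \iff \exists\, t \in [\,t_{now} - T_1,\; t_{now} - T_2\,] : A(t) = B(t)
$$

This reading is consistent with the rule shown in the level 3 fusion example below, where Count(track.changingCourse, true, 600, 0) counts matching events over the last 600 seconds.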
**Level 1 fusion example**

The fusion node periodically receives the individual tracks from each sensor node and performs a central fusion to decide which tracks from different radar nodes refer to the same targets. This fusion is based on fuzzy sets.

Each new track received by the fusion node is maintained in a separate list by the fusion application. The application instantiates a set of rules each time a message with a new track is received. Each instantiated rule establishes a fusion relation between the new track and another track from another radar. This associates the new track with all the other tracks from the other sensors. Thereafter, those rules are investigated each time a track data update is received from a sensor node.

The track data must be aligned in time before comparison. So, for each compared track, the fusion rule calls a method that aligns the track's kinematics to the current time. The association rule uses the aligned attributes to investigate the correlation of tracks. The following attributes are used: position, speed, course and size. The correlation is measured by the membership value of each fuzzy set. The fuzzy sets are defined according to the precision of each sensor. The fuzzified value is the modulus of the difference \((\Delta)\) between each attribute. The fuzzy sets are presented in figure 2.

![Figure 2 – Fuzzy sets used for track fusion](image)

The support values shown in figure 2 are just examples, since the actual values depend on the precision of the sensors involved. The membership values obtained by entering the attribute values into the fuzzy functions are the measures of similarity between tracks.

The attribute size is not directly provided by the radar nodes. Radar nodes just send a counter of how many times the target was illuminated by the radar wave. This counter and the distance from the target to the radar are correlated with the target size. A fuzzy rule set with nine rules is provided to obtain the size in meters from the fuzzy sets and rules in figure 3.

![Figure 3 – Fuzzy sets and rules used for size estimation](image)

The membership values obtained from the sets in figure 2 are then compared with a threshold in order to decide whether the attributes from the different tracks belong to the same object. In other works, the threshold reflects the precision of the sensors [22]. In this work we use thresholds that vary with the condition of the measurements made by the radars. Each radar node has a Kalman filter (KF) that provides the estimation used as input for our fusion model. The KF also provides a performance measure, namely the error variance \((\sigma)\) of the estimation done by the filter. The corresponding standard deviation is used to dynamically adjust the fusion thresholds.

**Level 2 fusion example**

A level 2 data fusion example was implemented using the RT-MLR framework. The objective is to offer a measure of the hostility of each unknown track, on a scale from 0 to 100 that increases as the hostility increases. Unlike the track fusion rules explained previously, this data fusion is entirely fuzzy. A set of fuzzy rules defined by a domain expert was created to determine the hostility. The model is an example of level 2 fusion because it correlates tracks with aspects of the scenario, like commercial routes. For example, one rule relates a track moving along a commercial route to a probable freighter.
So, many different rules may be activated at the same time, and the results are combined to infer a final hostility for each track.

**Level 3 fusion example**

An example was also prepared to implement a level 3 fusion feature using the RT-MLR framework. The objective is to estimate the time that will be necessary to reach a combat position with an enemy track. The calculation uses temporal logic over events that happened in the past, like the number of course changes in the last 10 minutes and the emissions detected in the last 30 minutes. The rule uses temporal operators like these:

```
IF   Count(track.changingCourse, true, 600, 0) > 2
AND  ThereWasEqual(track.emiting, true, 1800, 0)
THEN interceptionFireCircleTime(me, track)
```

In the JDL model such reasoning may be considered level 3, since it provides a prediction of future events from the past behavior of a threat.

**Test architecture**

A simulation architecture was created to provide sensor tracks for the fusion models and to enable scenario visualization of the fusion performance. The architecture has two radar nodes, one visualization node and one fusion node. The radar nodes receive simulated measurements (radar dots) generated by the visualization node. The simulated dots are created by an operator with the desired kinematics. They represent the measurements made by sensors, with added Gaussian process noise and Gaussian measurement noise. The noise range for each sensor is selected by the operator. The operator also asks the radar nodes to create tracks. After tracks are created and associated with the dots, a Kalman filter provides kinematic estimations. The estimated tracks are sent to the visualization node and to the fusion node. Data fusion is processed by a monitoring application that uses the RT-MLR to implement its logic approach. The results of data fusion provided by the fusion node are then sent to the visualization node to be displayed.

**5 Test results**

Three scenarios were created to test the level 1 track fusion model. The first scenario has two vessels separated by 5 miles and moving on intersecting courses. The second scenario has a small ship and a helicopter on the same course; the helicopter has a higher speed and overtakes the ship's position. The third scenario has two ships on maneuvering trajectories (fig. 6). The tests were performed adding Gaussian noise to the dot positions in order to vary the quality of the Kalman filter estimation. Two test batteries were performed, using lower (σ=5 meters) and higher (σ=20 meters) Gaussian noise. Each scenario has two objects that were created and sent to the two sensor nodes, so four different tracks would appear in the visualization node before fusion. The fusion node rules try to associate all four tracks and should associate two of them and not associate the other two. The average values of the confusion matrix taken over 20 executions are presented in Table I. Hypothesis h=1 means track fusion.
Table I – Track fusion evaluation measures

| | h=1 | h=0 |
|---|---|---|
| Same | T_P | F_N |
| Different | F_P | T_N |

where pconf = T_P / (T_P + F_P) and sup = T_P / n.

| Scenario | Error | Prec | Pconf | Nconf | Sup | Cover |
|---|---|---|---|---|---|---|
| 1, σ=5 | 0.01 | 0.99 | 1 | 0.99 | 0.49 | 0.49 |
| 2, σ=20 | 0.04 | 0.96 | 0.99 | 0.92 | 0.46 | 0.46 |
| 3, σ=20 | 0.02 | 0.98 | 1 | 0.96 | 0.48 | 0.45 |

**Actual data**

The track fusion process was also tested with actual data recorded by a frigate operating on the Brazilian coast. Data came from an RTN30X tracker radar and from a Scanter1000 navigation radar. The data provided by the Scanter1000 came from two targets; one of these targets was also tracked by the RTN30X. The common target is a ship running a 7.5-mile trajectory with speed varying from 10 to 20 knots and with a sharp 90-degree turn in the middle of the trajectory. The fusion process involved three tracks from two radars and the obtained results are shown in Table II.

Table II – Actual data fusion evaluation measures

| Error | Prec | Pconf | Nconf | Sup | Cover |
|---|---|---|---|---|---|
| 0.001 | 0.978 | 1 | 0.977 | 0.503 | 0.503 |

Tracks from different targets were never fused, and tracks from the same target failed to fuse only 9 times in a universe of 556 samples. The non-fusing situations occurred during the turn of the vessel, due to the much worse course estimation from the Scanter1000 compared with the RTN30X.

**Level 2 fusion results**

One test scenario with three tracks was created to test the level 2 hostility fusion model. The first track is a frigate moving near a commercial route. The second track is a fighter on an interception course. The third is a medium-size ship outside the commercial routes. Figure 7 shows the scenario.

![Figure 7 – Test scenario for hostility estimation rules](image)

The test was performed using different Gaussian noise for each sensor (σ=5 meters for one radar and σ=20 meters for the other). The rules calculated hostility during the test (around 3 minutes). The mean and standard deviation of the hostility values were calculated for each track.

Table III – Hostility evaluation

| Track | Mean | SD |
|---|---|---|
| Track 1 | 19.5 | |
| Track 2 | 20.1 | |
| Track 3 | 95 | |

Conclusions varied only for track 1 because it is the only one affected by two different rules. The others fired only one rule each, so the defuzzified value corresponds to the maximum of the output fuzzy function.

**Level 3 fusion results**

For this test a scenario with three tracks was created. During the test the tracked threat was programmed to make course changes, and these events were registered by the RT-MLR. The events corresponding to detected emissions were artificially generated, since there is no sensor providing information about electromagnetic emissions in the simulator.
The range of the threat's weapons is also provided manually since, in an actual environment, this information would be read from a data bank after a meta-classification of the threat. After the RT-MLR detected the events related in the rule, the time prediction started to be calculated and the information was continually updated on the visualization node, attached to the tracked symbol.

6 Conclusion

This paper presented an integrated approach for fusing context information with a model that allows reasoning based on multiple-logic rules and supports the concurrent processing of real-time applications. Knowledge was modeled in rules as association classes in the domain space. The framework created to support multiple-logic reasoning (RT-MLR) deals with rules in the native application language (Java), so rules do not need to be interpreted beforehand, which reduces development time and shortens learning curves. RT-MLR has an automated rule activation system, fired by significant changes in domain attribute values, which lets users tune rule activation. RT-MLR increases knowledge expressiveness by supporting fuzzy and temporal reasoning beyond traditional first-order logic.

Although the levels of data fusion do not matter to the rule-based approach, which works at any granularity and in any domain, RT-MLR was used to support a level 1 to level 3 fusion application based on knowledge rules for a C4I real-time application. A set of fuzzy inference rules was first used to infer object sizes. After that, the sizes, speeds, courses and positions of tracks were used in a level 1 track-to-track fusion model. This fusion is based on comparing fuzzy measures with thresholds dynamically adjusted according to the covariance matrix error estimations provided by the KF. Results show that the model fuses the targets correctly most of the time. The model failed to fuse the targets only when the differences between the targets' measurements were above the estimation errors provided by the KF. Level 2 fusion was performed using navigation routes and targets' routes to infer a hostility grade. Level 3 fusion was performed using past behaviors of targets, captured with temporal logic rules, to infer the future time at which a combat position will be reached. Level 2 and level 3 fusion could be more powerful if more high-level information were available (e.g. database information) and other resources, like Bayesian inference, were aggregated into the inference system. These will probably be the next steps in RT-MLR development.

References

[4] A. Steinberg, C. Bowman and E. White Jr., *Revisions to the JDL Data Fusion Model*, 3rd NATO/IRIS conference, Quebec City, Canada, 1998.
{"Source-Url": "http://isif.org/fusion/proceedings/fusion2010/pdfs/we1.6.2-0155-final.pdf", "len_cl100k_base": 7755, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 27021, "total-output-tokens": 8979, "length": "2e12", "weborganizer": {"__label__adult": 0.00038814544677734375, "__label__art_design": 0.0004968643188476562, "__label__crime_law": 0.001003265380859375, "__label__education_jobs": 0.0008540153503417969, "__label__entertainment": 0.00012093782424926758, "__label__fashion_beauty": 0.00020802021026611328, "__label__finance_business": 0.0003933906555175781, "__label__food_dining": 0.0004224777221679687, "__label__games": 0.0009069442749023438, "__label__hardware": 0.001598358154296875, "__label__health": 0.000637054443359375, "__label__history": 0.0003974437713623047, "__label__home_hobbies": 0.00013124942779541016, "__label__industrial": 0.0010614395141601562, "__label__literature": 0.0003371238708496094, "__label__politics": 0.0004625320434570313, "__label__religion": 0.0004603862762451172, "__label__science_tech": 0.25439453125, "__label__social_life": 0.00012993812561035156, "__label__software": 0.0202789306640625, "__label__software_dev": 0.71337890625, "__label__sports_fitness": 0.00038504600524902344, "__label__transportation": 0.0011272430419921875, "__label__travel": 0.00021588802337646484}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39569, 0.03338]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39569, 0.64932]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39569, 0.92759]], "google_gemma-3-12b-it_contains_pii": [[0, 4916, false], [4916, 10741, null], [10741, 16550, null], [16550, 22026, null], [22026, 27250, null], [27250, 30425, null], [30425, 34724, null], [34724, 39569, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4916, true], [4916, 10741, null], [10741, 16550, null], [16550, 22026, null], [22026, 27250, null], [27250, 30425, null], [30425, 34724, null], [34724, 39569, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39569, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39569, null]], "pdf_page_numbers": [[0, 4916, 1], [4916, 10741, 2], [10741, 16550, 3], [16550, 22026, 4], [22026, 27250, 5], [27250, 30425, 6], [30425, 34724, 7], [34724, 39569, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39569, 0.12346]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
4f74e37a80b957c92f640210ee8be5eecc13f34a
A Logic Programming approach for Access Control over RDF

Nuno Lopes¹, Sabrina Kirrane², Antoine Zimmermann³, Axel Polleres⁴, and Alessandra Mileo¹

¹ Digital Enterprise Research Institute, {nuno.lopes,alessandra.mileo}@deri.org
² Digital Enterprise Research Institute and Storm Technology, sabrina.kirrane@deri.org
³ École Nationale Supérieure des Mines, FAYOL-ENSMSE, LSTI, F-42023 Saint-Étienne, France, antoine.zimmermann@emse.fr
⁴ Siemens AG Österreich, Siemensstrasse 90, 1210 Vienna, Austria, axel.polleres@siemens.com

Abstract

The Resource Description Framework (RDF) is an interoperable data representation format suitable for the interchange and integration of data, especially in Open Data contexts. However, RDF is also becoming increasingly attractive in scenarios involving sensitive data, where data protection is a major concern. At its core, RDF does not support any form of access control, and current proposals for extending RDF with access control do not fit well with the RDF representation model. Considering an enterprise scenario, we present a modelling that caters for access control over the stored RDF data in an intuitive and transparent manner. For this paper we rely on Annotated RDF, which introduces concepts from Annotated Logic Programming into RDF. Based on this model of the access control annotation domain, we propose a mechanism to manage permissions via application-specific logic rules. Furthermore, we illustrate how our Annotated Query Language (AnQL) provides a secure way to query this access control annotated RDF data.

1998 ACM Subject Classification I.2.4 Knowledge Representation Formalisms and Methods

Keywords and phrases Logic Programming, Annotation, Access Control, RDF

Digital Object Identifier 10.4230/LIPIcs.ICLP.2012.381

1 Introduction

Enterprises rely on stand-alone systems, commonly known as Line Of Business (LOB) applications, to efficiently perform day-to-day activities: interactions with clients in a Customer Relationship Management (CRM) application, employee information in a Human Resources (HR) application, project documentation in a Document Management System (DMS), etc. These systems, although independent, often contain different information about the same entities; for example, if an organisation needs to know the projects commissioned by a customer, the employees who worked on those projects and the revenue that was generated, it needs to obtain information across these systems. However, such integration is not a simple task, not only due to the heterogeneity of the systems, but also due to the presence of access control mechanisms in each system. In fact, since much of the information within the enterprise is highly sensitive, this integration step could result in information leakage to unauthorised individuals. RDF is a flexible format for representing such integrated data; however, it does not provide any mechanisms to avoid the problem of information leakage. In this paper we rely on an integration solution that extracts information from the underlying LOB applications into RDF.
Based on this integrated data, we define a mechanism to enforce access control over the resulting RDF graph, implemented via logic programming. Our approach provides a flexible representation for the access control policies and also caters for permission propagation via logic inference rules. The solution we present builds upon an extension of the RDF data model to supply context information (called Annotated RDF), which provides a backwards compatible model to attach domain-specific metadata to each RDF triple. The main contribution of this paper in relation to access control over RDF data consists of defining an annotation domain that models access control permissions in RDF. Based on this model, access control can be enforced by relying on an extension of SPARQL, the standard query language for RDF. Although in this paper we consider access control annotated data that stems from the integration of data from LOB applications, the presented model can be applied as a general model for access control in RDF, without requiring the information integration step.

The remainder of the paper is structured as follows: in Section 2 we briefly introduce concepts from the Semantic Web research area and their extension to the annotated case. Section 3 formalises the access control annotation domain and details our implementation of the domain in logic programming. Finally, we describe the related work in Section 4 and present conclusions and directions for future work in Section 5.

2 Preliminaries

In this section we provide the necessary background information regarding the semantics of Annotated RDFS. We start by presenting the data model, giving an overview of RDF and its extension towards Annotated RDFS, which draws inspiration from Annotated Logic Programming [13]. We then present the extension of the RDF Schema (RDFS) inference rules for the annotated case and the extension of the SPARQL query language for querying Annotated RDFS, AnQL. Finally, we present the current prototype implementation of Annotated RDFS and AnQL, which is implemented in SWI-Prolog.

2.1 Annotated RDFS Data Model

We present an overview of the concepts of RDF and its extension to Annotated RDFS.

▶ Definition 1 (RDF triple, RDF graph). Considering the disjoint sets $U$, $B$ and $L$, representing respectively URIs, blank nodes and literals, an RDF triple is a tuple $(s, p, o) \in UB \times U \times UBL$,¹ where $s$ is called the subject, $p$ the predicate, and $o$ the object. An RDF graph $G$ is a finite set of RDF triples.

An RDF triple has the intuitive meaning that the subject is connected to the object by the predicate relation. In this work, we avoid introducing details about the concrete syntaxes of RDF; please refer to [15] and [9] for specifics.

¹ For conciseness, we represent the union of sets simply by concatenating their names.

Several extensions have been presented to introduce meta-information into the RDF data model. For example, [7] define temporal RDF, which allows for the allocation of a validity interval to an RDF triple; [20] presents fuzzy RDF in order to attach a confidence or membership value to a triple. These and other approaches can be represented within a common framework, called Annotated RDF [23], and further extended to include RDFS inference rules by [21].
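To fix intuitions before the formal definitions, here is a minimal sketch, in SWI-Prolog, of how such annotated triples can be written down as facts. It follows the rdf(Subject, Predicate, Object, Annotation) convention used by the prototype of Section 2.4; the flattened resource names are our own simplification.

```
% Annotated triples as rdf/4 facts: the fourth argument is the
% annotation value. Resource names are flattened to plain atoms
% here for readability (illustrative only; in practice a graph is
% annotated within a single domain at a time).

% Fuzzy domain: the annotation is a membership degree in [0,1].
rdf(joeBloggs, worksFor, westportCars, 0.5).

% Temporal domain: the annotation is a validity interval.
rdf(joeBloggs, worksFor, westportCars, interval(2010, 2012)).

% A plain triple can be viewed as annotated with the top element of
% the chosen domain (1 in the fuzzy case), which is what makes the
% model backwards compatible with ordinary RDF.
rdf(westportCars, type, company, 1).
```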
Annotated RDFS introduces the notion of an annotation domain into the RDF model and defines an extension of the RDFS inference rules that, by relying on the $\otimes$ and $\oplus$ operations (cf. Definition 2) defined by the annotation domain, can be specified in a domain-independent fashion. Next we present the definition of an annotation domain.

▶ Definition 2 (Annotation Domain). Let $L$ be a non-empty set, whose elements are considered the annotation values. We say that an annotation domain for RDFS is an idempotent, commutative semi-ring $D = \langle L, \oplus, \otimes, \bot, \top \rangle$, where $\oplus$ is $\top$-annihilating. That is, for $\lambda, \lambda_1, \lambda_2 \in L$:

1. $\oplus$ is idempotent, commutative, associative;
2. $\otimes$ is commutative and associative;
3. $\bot \oplus \lambda = \lambda$, $\top \otimes \lambda = \lambda$, and $\top \oplus \lambda = \top$;
4. $\otimes$ is distributive over $\oplus$, i.e. $\lambda_1 \otimes (\lambda_2 \oplus \lambda_3) = (\lambda_1 \otimes \lambda_2) \oplus (\lambda_1 \otimes \lambda_3)$.

An annotation domain $D = \langle L, \oplus, \otimes, \bot, \top \rangle$ induces a partial order $\preceq$ over $L$ defined as: $\lambda_1 \preceq \lambda_2$ if $\lambda_1 \oplus \lambda_2 = \lambda_2$.

▶ Example 3 (Annotation Domain). The fuzzy annotation domain is defined as $D_{[0,1]} = \langle [0,1], \max, \min, 0, 1 \rangle$. We can specify that :joeBloggs :worksFor :westportCars holds with degree 0.5 as follows:

$$\langle \text{:joeBloggs :worksFor :westportCars} \rangle: 0.5$$

For the definitions of other domains, such as the temporal domain, the reader is referred to [21]. In Section 3.1 we present the definition of an annotation domain to model access control.

Further to the above annotation domain definition, we extend RDF towards Annotated RDFS:

▶ Definition 4 (Annotated triple, graph). An annotated triple is an expression $\tau: \lambda$, where $\tau$ is an RDF triple and $\lambda$ is an annotation value. An annotated RDFS graph is a finite set of annotated triples.

The entailment between two Annotated RDFS graphs, $G \models H$, is defined by a model-theoretic semantics presented in [21].

2.2 Inference Rules

RDF Schema (RDFS) [4] consists of a predefined vocabulary that assigns specific meaning to certain URIs, allowing a reasoner to infer new triples from existing ones. A set of inference rules can be used to provide a sound and complete reasoner for RDFS [22]. These rules can be extended to support Annotated RDFS reasoning, in a domain-independent fashion, simply by relying on the $\otimes$ and $\oplus$ operations (presented in Definition 2). Such rules can be represented by the following meta-rule:

$$\frac{\tau_1: \lambda_1, \ \ldots, \ \tau_n: \lambda_n \qquad \{\tau_1, \ldots, \tau_n\} \models_{\text{RDFS}} \tau}{\tau: \bigotimes_i \lambda_i} \quad (1)$$

This rule reads: if a classical RDFS triple $\tau$ can be inferred by applying an RDFS inference rule to triples $\tau_1, \ldots, \tau_n$ (denoted $\{\tau_1, \ldots, \tau_n\} \models_{\text{RDFS}} \tau$), then the same triple can be inferred in the annotated case with annotation term $\bigotimes_i \lambda_i$, where $\lambda_i$ is the annotation of triple $\tau_i$. The $\oplus$ operation is used to combine information about the same statement: if the same triple is inferred from different rules or steps in the inference, the following rule is applied:

$$\frac{\tau: \lambda_1 \qquad \tau: \lambda_2}{\tau: \lambda_1 \oplus \lambda_2} \quad (2)$$

It is also possible to specify a custom set of rules in order to provide application-specific inferencing.
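As a worked instance of rules (1) and (2), ours for illustration: in the fuzzy domain of Example 3, $\otimes$ is $\min$ and $\oplus$ is $\max$, and the property :worksFor and class :Employee are assumed vocabulary. If the domain of :worksFor is :Employee with degree 0.9, the RDFS domain rule yields

$$\frac{(\text{:joeBloggs}, \text{:worksFor}, \text{:westportCars}): 0.5 \qquad (\text{:worksFor}, \text{rdfs:domain}, \text{:Employee}): 0.9}{(\text{:joeBloggs}, \text{rdf:type}, \text{:Employee}): \min(0.5, 0.9) = 0.5}$$

and, if the same triple is also derived elsewhere with degree 0.7, rule (2) merges the two annotations into $0.5 \oplus 0.7 = \max(0.5, 0.7) = 0.7$.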
2.3 AnQL: Annotated Query Language

The proposed query language for Annotated RDFS is AnQL [14], which consists of an extension to the W3C recommended query language for RDF, SPARQL [18], while also taking into consideration features from the upcoming SPARQL 1.1 language revision. Consider $V$, a set of variables disjoint from $UBL$. In SPARQL, a triple pattern consists of an RDF triple with, optionally, a variable $v \in V$ as the subject, predicate and/or object. Sets of triple patterns are called basic graph patterns (BGP), and BGPs can be combined to create generic graph patterns. The semantics of SPARQL is based on the notion of basic graph pattern matching, where a substitution is a partial function $\mu: V \rightarrow UBL$.

For the extension of SPARQL towards the AnQL query language, we assume a specific annotation domain instance $D$ of the form $\langle L, \oplus, \otimes, \bot, \top \rangle$. Let $A$ denote the set of annotation variables, disjoint from $UBLV$, and let $\lambda$ be an annotation value from $L$ or an annotation variable from $A$, called an annotation label. For a SPARQL triple pattern $\tau$, we call $\tau: \lambda$ an annotated triple pattern, and sets of annotated triple patterns are called basic annotated patterns (BAP). Similarly to SPARQL, BAPs can be combined to create an annotated graph pattern; for further details we refer the reader to [14].

An AnQL query is defined as a triple $Q = (P, G, V)$ where: (1) $P$ is an annotated graph pattern; (2) $G$ is an annotated RDF graph; and (3) $V \subseteq VA$ is the set of variables to be returned by the query. Given an annotated graph pattern $P$, we further denote by $var(P) \subseteq V$ and $avar(P) \subseteq A$ the set of variables and annotation variables, respectively, present in $P$. As presented in Example 5, the annotated graph pattern $P$ follows the WHERE keyword, while the variables to be returned are specified after the SELECT keyword.

▶ Example 5 (AnQL query). Considering the fuzzy domain presented in Example 3, we can pose the following query:

```
SELECT ?v ?av WHERE { ?v a :Company ?av }
```

where ?v is a variable from $V$ and ?av is an annotation variable from $A$.

The semantics of AnQL BAP matching is defined by extending the notion of SPARQL basic graph pattern matching to cater for annotation variables and their mapping to annotation values. For any substitution $\mu$ and variable $v$, $\mu(v)$ corresponds to the value assigned to $v$ by $\mu$. For a BAP $P$, $\mu(P)$ represents the annotated triples that correspond to $P$, except that any variable $v \in var(P) \cup avar(P)$ is replaced with $\mu(v)$.

▶ Definition 6 (BAP matching, extends [16, Definition 2]). Let $P$ be a BAP and $G$ an Annotated RDFS graph. We define the evaluation of $P$ over $G$, denoted $[P]_G$, as the list of substitutions that are solutions of $P$, i.e. $[P]_G = \{ \mu \mid G \models \mu(P) \}$, according to the model-theoretic definition of entailment presented by [21].

The semantics of arbitrary annotated graph patterns is defined by an algebra built on top of this BAP matching. For further details we refer the reader to [14]; a combined overview of Annotated RDFS and AnQL is provided by [25].

2.4 Implementation

The system architecture of our prototype implementation, based on SWI-Prolog's Semantic Web library [24], is sketched in Figure 1.
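Before describing the architecture in more detail, the following sketch shows what a pluggable annotation-domain module for the fuzzy domain of Example 3 might look like. It is our illustration rather than the prototype's actual code, but it reuses the infimum/3 ($\otimes$) and supremum/3 ($\oplus$) predicate names that appear later in the paper; bottom/1 and top/1 are hypothetical additions.

```
:- module(fuzzy_domain, [infimum/3, supremum/3, bottom/1, top/1]).

% Fuzzy annotation domain D = <[0,1], max, min, 0, 1>.

% otimes: combines the annotations of the premises of an inference.
infimum(V1, V2, V) :- V is min(V1, V2).

% oplus: merges two annotations derived for the same triple.
supremum(V1, V2, V) :- V is max(V1, V2).

bottom(0).   % least annotation value
top(1).      % greatest annotation value
```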
The main component of the system consists of the Reasoner / AnQL Query Engine, which is composed of a forward-chaining reasoner with fixpoint semantics that calculates the closure of a given Annotated RDF graph [21], and an implementation of the AnQL query language. This main component can be tailored to a specific Annotation Domain and can include different Inference Rules describing how triples and their annotation values are propagated. Such inference rules can be specified, in a domain-independent fashion, by using a high-level language that abstracts the specific details of each domain. An example of an Annotated RDFS rule is presented in Example 7.

▶ Example 7 (Annotated RDFS Inference Rule). The following rule provides subclass inference in the RDFS ruleset:

```
rdf(O, rdf:type, C2, V) <==
    rdf(C1, rdfs:subClassOf, C2, V1),
    rdf(O, rdf:type, C1, V2),
    infimum(V1, V2, V).
```

where the rdf/4 predicate is used to represent the annotated triples and the infimum/3 predicate corresponds to the implementation of the $\otimes$ domain operation (cf. Definition 2).

More information and downloads of the prototype implementation can be found at http://anql.deri.org/.

3 Access Control Annotation Domain

In this section we formalise our access control annotation domain, following the definitions presented in Section 2.1, starting by defining the entities and annotation values and then presenting the $\otimes$ and $\oplus$ domain operations. Finally, we briefly describe the implementation of the presented annotation domain.

3.1 Entities and Annotations

For the modelling of the Access Control Domain (ACD) consider, in addition to the previously presented sets of URIs $U$, blank nodes $B$, and literals $L$, a set of credential elements $C$. The elements of $C$ are used to represent usernames, roles, and groups. To represent attributes, we propose a set $T$ of pairs of the form $(k, v)$, represented as key-value pairs where $k \in U$ and $v \in L$. For example, "(:age, 30)" or "(:institute, DERI)" are elements of $T$.² We allow shortcuts to represent intervals of integers, for example "(:age, [25, 30])" to indicate that all entities with attribute ":age" between 25 and 30 are allowed access to the triple.

Considering an element $e \in CT$, $e$ and $\neg e$ are access control elements, where $e$ is called a positive element and $\neg e$ is called a negative element.³ An access control statement $S$ consists of a set of access control elements, and an Access Control List (ACL) consists of a set of access control statements. An access control statement $S$ is consistent if and only if, for any element $e \in CT$, only one of $e$ and $\neg e$ appears in $S$. This restriction avoids conflicts, where a statement attempts to both grant and deny access to a triple. Furthermore, we can define a partial order between access control statements $S_1$ and $S_2$ as $S_1 \leq S_2$ iff $S_1 \subseteq S_2$. This partial order can be used to eliminate redundant access statements within an ACL: if a user is granted access by statement $S_2$, he will also be granted access by statement $S_1$ (and thus $S_2$ can be removed). Finally, an ACL is consistent if and only if all statements therein are consistent and not redundant. In our domain representation, only consistent ACLs are considered as annotation values. Intuitively, an annotation value specifies which entities have read permission for the triple, or are denied access when the element is preceded by $\neg$.

▶ Example 8 (Access Control List).
Assume a set of entities $C = \{jb, js, hr, it\}$, where $jb$ and $js$ are employee usernames and $hr$ and $it$ are shorthand for humanResources and informationTechnology, respectively. The following annotated triple:

$$\tau: [[it], [hr, \neg js]]$$

states that the entities identified by $it$, or by $hr$ (except if the $js$ credential is also present), have read access to the triple $\tau$.

An ACL $A$ can be considered as a non-recursive Datalog with negation (nr-datalog$^{-}$) program, where each of the access control statements $S \in A$ corresponds to the body of a rule in the Datalog program. The head of each Datalog rule is a reserved element $access \notin CT$, and the evaluation of the Datalog program determines the access permission to a triple given a specific set of credentials. The set of user credentials is assumed to be provided by an external authentication service and consists of elements of $CT$, which equates to a non-empty ACL representing the entities associated with the user. As expected, we assume that this ACL consists of only one positive statement, i.e. the ACL contains one statement with all the entities associated with the user and does not contain any negative elements.

▶ Example 9 (Datalog Representation of an ACL). Taking the annotation presented in Example 8, the nr-datalog$^{-}$ program corresponding to the ACL $[[it], [hr, \neg js]]$ is:

$$\begin{align*} access &\leftarrow it. \\ access &\leftarrow hr, \neg js. \end{align*}$$

The credentials of the user session, provided by the external authentication system, e.g. $[[js, it]]$, are facts in the nr-datalog$^{-}$ program.

² In these examples, the default URI prefix is http://urq.deri.org/enterprise#.
³ Here we are using $\neg e$ to represent strong negation. In our access control domain representation, $\neg e$ indicates that $e$ will be specifically denied access.

Further domain-specific information, for example the encoding of hierarchies between the credential elements, can be encoded as extra rules within the nr-datalog$^{-}$ program. These extra rules can be used to provide implicit credentials to a user, allowing the access control to be specified based on credentials that the authentication system does not necessarily assign to a user.

▶ Example 10 (Credential Hierarchies). If the entity $emp$ represents all the employees within a specific company, and $jb$ and $js$ correspond to employee usernames (as presented in Example 8), the following rules can be added to the nr-datalog$^{-}$ program from Example 9:

$$\begin{align*} emp &\leftarrow js. \\ emp &\leftarrow jb. \end{align*}$$

These rules ensure that both $jb$ and $js$ are given access when the credential $emp$ is required in an annotation value. Such rules are not limited to expressing hierarchies between entities; any form of nr-datalog$^{-}$ rule is allowed.

3.2 Annotation Domain

We now turn to the annotation domain operations $\otimes$ and $\oplus$ that, as presented in Section 2.2, allow for the combination of annotation values, catering for RDFS inferences. A naive implementation of these domain operations may produce ACLs which are not consistent (and which would therefore not be valid annotation values).
To avoid such invalid ACLs, we rely on a normalisation step that ensures the result is a valid annotation value, by checking for redundant statements and applying a conflict resolution policy if necessary. If an annotation statement contains a positive and a negative access control element for the same entity, e.g. $[jb, \neg jb]$, there is a conflict. There are two ways to resolve conflicts in annotation statements: (i) brave conflict resolution (allow access); or (ii) safe conflict resolution (deny access). This is achieved during the normalisation step, through the resolve function, by removing the appropriate element: $\neg jb$ for brave or $jb$ for safe conflict resolution. In our current modelling, we assume safe conflict resolution. The normalisation process is defined as follows:

▶ Definition 11 (Normalise). Let $A$ be an ACL. We define the reduction of $A$ into its consistent form, denoted $normalise(A)$, as:

$$normalise(A) = \{\, resolve(S_i) \mid S_i \in A \text{ and } \nexists S_j \in A,\ j \neq i,\ \text{such that } S_j \leq S_i \,\}.$$

The $\oplus$ operation is used to combine annotations when the same triple is deduced from different inference steps (cf. Rule (2)). For the access control domain, the $\oplus_{ac}$ operation consists of the union of the annotations followed by the normalisation operation. Intuitively, the result of this operation creates a new nr-datalog$^{-}$ program consisting of the union of all the rules from the original nr-datalog$^{-}$ programs. Formally,

$$A_1 \oplus_{ac} A_2 = normalise(A_1 \cup A_2).$$

The following example presents an application of the $\oplus_{ac}$ operation:

▶ Example 12 ($\oplus_{ac}$ operation). Consider the triples $\tau_1 = (\text{:johnSmith}, \text{:salary}, 40000):[[js]]$ and $\tau_2 = (\text{:johnSmith}, \text{:salary}, 40000):[[hr]]$. Combining these triples with the $\oplus_{ac}$ operation (by applying Rule (2)) should result in providing access to all the entities which are allowed to access the premises:

$$(\text{:johnSmith}, \text{:salary}, 40000):[[js], [hr]]$$

In turn, the $\otimes$ operation is used when inferring new triples, with the application of Rule (1). For the access control domain, this operation ($\otimes_{ac}$) consists of merging the rules belonging to both annotation programs and then performing the normalisation and conflict resolution. This equates to restricting access to inferred statements to only those entities that have access to both of the original statements. Thus, the $\otimes_{ac}$ operation corresponds to:

$$A_1 \otimes_{ac} A_2 = normalise\left(\{\, S_1 \cup S_2 \mid S_1 \in A_1 \text{ and } S_2 \in A_2 \,\}\right),$$

where $S_1 \cup S_2$ represents the set-theoretical union. Unlike the $\oplus_{ac}$ operation, $\otimes_{ac}$ may produce conflicts in the annotation statements. For example, the application of the $\otimes_{ac}$ operation with the Annotated RDFS rule is as follows:

▶ Example 13 ($\otimes_{ac}$ operation). Let $\tau_1 = (\text{:westportCars}, \text{:netIncome}, 1000000):[[hr, \neg jb]]$ and $\tau_2 = (\text{:netIncome}, \text{rdfs:domain}, \text{:Company}):[[it, jb]]$ be triples. The annotation resulting from applying the $\otimes_{ac}$ operation should provide access to the resulting triple only to entities which are allowed to access all the premises. Thus we can infer not only that :westportCars is of type :Company, but also the appropriate annotation value:

$$(\text{:westportCars}, \text{rdf:type}, \text{:Company}):[[hr, it, \neg jb]].$$

Please note that the aforementioned conflict resolution mechanism simplifies $[\neg jb, jb]$ to $[\neg jb]$.
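The following is a sketch, in SWI-Prolog, of how the normalisation step can be realised over the list-of-lists ACL representation used later in Section 3.3. It is our illustration rather than the authors' code: statements are lists of credential elements, a negated element e is written neg(e), and safe conflict resolution is applied.

```
% resolve/2: apply safe conflict resolution to one statement by
% dropping every positive element whose negation is also present.
resolve(S, R) :-
    exclude(conflicted(S), S, R).

conflicted(S, E) :-
    E \= neg(_),
    memberchk(neg(E), S).

% normalise/2: resolve conflicts, canonicalise statements, and drop
% statements that are redundant under the subset partial order.
normalise(ACL0, Norm) :-
    maplist(resolve, ACL0, ACL1),
    maplist(sort, ACL1, ACL2),     % canonical element order
    list_to_set(ACL2, ACL),        % drop duplicate statements
    exclude(redundant_in(ACL), ACL, Norm).

% A statement is redundant if some other statement is a strict
% subset of it (cf. Definition 11).
redundant_in(ACL, S2) :-
    member(S1, ACL),
    S1 \== S2,
    subset(S1, S2).
```

For example, normalise([[hr, neg(jb), jb], [it], [it, hr]], N) yields N = [[hr, neg(jb)], [it]]: the conflict in the first statement resolves safely to neg(jb), and [it, hr] is dropped because [it] already grants access.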
Lastly, the smallest and largest annotation values in the access control domain, $\perp_{ac}$ and $\top_{ac}$ respectively, correspond in turn to an empty nr-datalog$^{-}$ program and to one that provides access to all entities $e \in CT$: $\perp_{ac} = []$ and $\top_{ac} = [[]]$. The $\perp_{ac}$ annotation value indicates that the annotated triple is not accessible to any entity, since no annotation statement will provide access to the triple, while an annotation value of $\top_{ac}$ states that the triple is public, since any credential contained in the user session will trivially provide access to the triple. Intuitively, the $\top_{ac}$ annotation is translated into the nr-datalog$^{-}$ program containing only the "access" fact, while $\perp_{ac}$ corresponds to an empty program. However, for practical reasons, it might be necessary to assume a "super-user" role, for example represented by the reserved element "su", which is allowed access to all triples and would therefore be used as the $\perp_{ac}$ annotation.

▶ Definition 14 (Access Control Annotation Domain). Let $\mathbf{F}$ be the set of annotation values over $CT$, i.e. consistent ACLs. The access control annotation domain is formally defined as: $D_{ac} = \langle \mathbf{F}, \oplus_{ac}, \otimes_{ac}, \perp_{ac}, \top_{ac} \rangle$.

The presented modelling of the access control domain can easily be extended to handle other permissions, such as update and delete, by representing the annotation as an $n$-tuple of ACLs $(P, Q, \ldots)$, where $P$ specifies the formula for read permission, $Q$ for update permission, and so on. In this extended domain modelling, the domain operations can also be extended to operate over the corresponding elements of the annotation tuple. A create permission behaves differently, as it would not be attached to any specific triple but would rather act as a graph-wide permission, and is thus out of scope for this modelling. In this paper we consider only read permissions in the description of the domain, and thus restrict the modelling to a single access control list. It is worth noting that support for create and update of RDF is only included in the forthcoming W3C SPARQL 1.1 Recommendation [8].

3.3 Prolog Implementation

Considering the prototype described in Section 2.4, the implementation of the access control annotation domain consists of a Prolog module that is imported by the reasoner. This module defines the domain operations $\otimes_{ac}$ and $\oplus_{ac}$, represented as the predicates infimum/3 and supremum/3 respectively. The annotation values are represented using lists (in this case, lists of lists), following the notions presented in the previous section. The implementation of the $\oplus_{ac}$ operation involves concatenating the list representations of both annotations and then performing the normalisation operation. For the $\otimes_{ac}$ operation we follow a similar procedure, with the additional step of applying either the brave or the safe conflict resolution method.
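A sketch of the two domain operations over this representation, reusing normalise/2 from the earlier sketch (again ours, not the module's actual source):

```
% oplus_ac: union of the two ACLs, i.e. union of the two rule sets.
supremum(A1, A2, A) :-
    append(A1, A2, A0),
    normalise(A0, A).

% otimes_ac: pairwise union of statements, so that access to an
% inferred triple requires access to both premises.
infimum(A1, A2, A) :-
    findall(S,
            ( member(S1, A1),
              member(S2, A2),
              union(S1, S2, S)
            ),
            A0),
    normalise(A0, A).
```

On the annotations of Example 13, infimum([[hr, neg(jb)]], [[it, jb]], A) gives A = [[hr, it, neg(jb)]], reproducing the inferred annotation value, with the jb/neg(jb) conflict resolved safely.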
The evaluation of the nr-datalog$^{-}$ program can be performed directly on this representation of the annotation values, by checking whether the user's list of credentials is a superset of the positive literals of some statement in the annotation value and does not contain any of that statement's negative literals. An example of RDF data annotated with access control information is presented in Figure 2, where the salary information is only available to the respective employee. In this figure we represent the RDF triples and annotation elements using the NQuads RDF serialisation.⁴

Using AnQL, the extension of the SPARQL query language described in Section 2.3, it is possible to perform queries that take the access control annotations into consideration. An example of an AnQL query over this data is presented in the following example:

▶ Example 15 (AnQL Query Example). This query specifies that we are interested in the salary of employees that someone with the permissions $[[jb, hr, it]]$ is allowed to access.

```
SELECT * WHERE { ?p :salary ?s "[[jb, hr, it]]" }
```

The answers for this query (when matched against the data from Figure 2) under SPARQL semantics, i.e. if the annotation were omitted, would be:

$$\{\{?p \rightarrow \text{:joeBloggs}, ?s \rightarrow 80000\}, \{?p \rightarrow \text{:johnSmith}, ?s \rightarrow 40000\}\}.$$

However, when the domain annotations are present, an AnQL query engine must also check that $[[jb, hr, it]]$ satisfies the nr-datalog$^{-}$ program $\lambda$, where $\lambda$ is the program represented by the annotation of each matched triple, thus yielding only the following answer:

$$\{\{?p \rightarrow \text{:joeBloggs}, ?s \rightarrow 80000\}\}.$$

⁴ http://sw.deri.org/2008/07/n-quads/

4 Related Work

The topic of access control has long been studied in relational databases, and the approach of enforcing access policies by query rewriting was also considered for the Quel query language by [19]. However, that system does not rely on annotating the relational data; rather, access control is specified using constraints over the user credentials, which are then included in the rewritten query. A good overview of common issues, existing models and languages for access control is provided by [5], who focus on topics also discussed in this paper, such as user hierarchies, allowing and denying access, and conflict resolution.

For the Semantic Web, well-known policy languages such as KAoS [3], Rei [12] and PROTUNE [2] are based on logical formalisms and consequently have well-defined semantics. Although such policy languages enable policy specification using semantic web languages, in their current form they do not support reasoning based on RDF data relations. In contrast, [11], [17] and [1] propose access control models for RDF graphs and, like us, allow for policy propagation and inference based on semantic relations. The policy language proposed by [11] is not based on well-defined semantics, and no implementation details are provided. [17] propose a path-based approach to policy composition. [1] state that they use an analytical tableaux system; however, they do not provide a mechanism for merging or for inference of permissions based on RDF structure. [6] describe the requirements an RDF store needs from a Semantic Wiki perspective.
Apart from efficiency and scalability, the authors refer to the need for access control at the triple level and for integrating the structure of the organisation into the access control methods. The described system relies on a query engine (SPARQL is mentioned but no details are given) and a rule processor to decide the access control enforcement at query time. [10] present the possibility of maintaining metadata for RDF to enforce access control, and touch upon aspects of the work presented here, such as using rules for specifying access control, as possible extensions of their model. Providing access control at the resource level is also left as an open question, one we are tackling through the specification of rules.

5 Conclusions and Future Work

The Resource Description Framework (RDF) can be used for large-scale integration of information from existing LOB applications. In this paper, we propose an access control model that can be used to protect RDF data and demonstrate how a combination of Annotated RDF and SPARQL can be used to control access to integrated enterprise data. Our model is based on the previously proposed Annotated RDF framework and attaches the access control information on a per-triple basis, i.e. each RDF triple can carry a different annotation value. The proposed solution provides a flexible representation method for the access control annotations, based on access control rules that define which entities have access to the triple. However, on very large datasets, challenges will arise with respect to optimal access control policy administration. To tackle this issue we propose managing permissions by specifying domain-specific inference rules for the annotation domain. We also suggest a possible implementation structure for a framework that enforces the access control based on rewriting a SPARQL query into an Annotated SPARQL query (AnQL), relying on a secure authentication service.

Our initial work touches on how rules can be used to simplify the management of RDF access control permissions. In future work, we propose to investigate the interdependencies between usernames, groups, roles, and attributes, and how we can further exploit the RDF graph structure to streamline the management of RDF access control policies. Although the modelling presented in this paper provides a suitable representation model for the annotation values, its implementation and evaluation for large RDF graphs remain an open issue. To provide acceptable query performance compared to the non-annotated counterpart, different optimisation strategies for both annotation storage and query evaluation will be necessary.

Acknowledgements. This work is supported in part by the SFI under Grant No. SFI/08/CE/I1380 (Líon-2), the IRCSET EPS and Storm Technology Ltd. We would like to thank Gergely Lukácsy, Aidan Hogan, and Umberto Straccia for their comments on this paper.
{"Source-Url": "https://hal-emse.ccsd.cnrs.fr/file/index/docid/723221/filename/iclp2012.pdf", "len_cl100k_base": 7968, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 40719, "total-output-tokens": 10650, "length": "2e12", "weborganizer": {"__label__adult": 0.0003943443298339844, "__label__art_design": 0.0005731582641601562, "__label__crime_law": 0.0010042190551757812, "__label__education_jobs": 0.0010690689086914062, "__label__entertainment": 0.00013267993927001953, "__label__fashion_beauty": 0.00022280216217041016, "__label__finance_business": 0.0008120536804199219, "__label__food_dining": 0.0004119873046875, "__label__games": 0.0005950927734375, "__label__hardware": 0.0007162094116210938, "__label__health": 0.0007424354553222656, "__label__history": 0.0003325939178466797, "__label__home_hobbies": 0.00012791156768798828, "__label__industrial": 0.0006589889526367188, "__label__literature": 0.0005483627319335938, "__label__politics": 0.0005369186401367188, "__label__religion": 0.0005145072937011719, "__label__science_tech": 0.1514892578125, "__label__social_life": 0.00015676021575927734, "__label__software": 0.03521728515625, "__label__software_dev": 0.802734375, "__label__sports_fitness": 0.0002386569976806641, "__label__transportation": 0.0005035400390625, "__label__travel": 0.0002046823501586914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39580, 0.02874]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39580, 0.36696]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39580, 0.83527]], "google_gemma-3-12b-it_contains_pii": [[0, 596, false], [596, 3597, null], [3597, 6948, null], [6948, 10340, null], [10340, 13864, null], [13864, 16210, null], [16210, 19747, null], [19747, 23396, null], [23396, 27359, null], [27359, 29857, null], [29857, 33710, null], [33710, 37172, null], [37172, 39580, null]], "google_gemma-3-12b-it_is_public_document": [[0, 596, true], [596, 3597, null], [3597, 6948, null], [6948, 10340, null], [10340, 13864, null], [13864, 16210, null], [16210, 19747, null], [19747, 23396, null], [23396, 27359, null], [27359, 29857, null], [29857, 33710, null], [33710, 37172, null], [37172, 39580, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39580, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39580, null]], "pdf_page_numbers": [[0, 596, 1], [596, 3597, 2], [3597, 6948, 3], [6948, 10340, 4], [10340, 13864, 5], [13864, 16210, 6], [16210, 19747, 7], [19747, 23396, 8], [23396, 27359, 9], [27359, 29857, 10], [29857, 33710, 11], [33710, 37172, 12], [37172, 39580, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39580, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
ef7abc51eb4552ca50f9e5f4dd439275b2431e0b
A Framework and Implementation of Information Content Reasoning in a Database

Xiaochuan Wu¹ and Junkang Feng²*

¹,² Database Research Group, School of Computing, University of the West of Scotland, High Street, Paisley PA1 2BE, UK, {xiaochuan.wu, junkang.feng}@uws.ac.uk
² Business College, Beijing Union University, No. A3 Yan Jing Dong Li, Beijing 100025, CHINA
* The contributions of both authors are equal. Junkang Feng is the corresponding author.

Abstract: Databases' capability is limited in terms of inference. In particular, when users explore information beyond the scope of the data within a database, the database normally cannot provide the information. The underlying reason for the problem is that queries are answered based on a direct match between a query and the data (up to aggregations of the data). We observe that it is possible to find information from a database beyond that. To this end, we propose a framework for information content reasoning in a database. A number of basic concepts are defined first. Then we present the framework and explain how it works. Moreover, we describe how the framework is implemented by means of a prototype, including a test with sample queries.

Keywords: Information content, Reasoning, Knowledge discovery from databases, Semantic theory of information, Databases

1. Introduction

Database systems store data [7]. Users query a database [2], and a query can only be answered through a 'direct match' between the selection criteria within the query and the data (up to aggregations of the data). When a database is queried beyond this, the system is unlikely to be able to answer the query. A conventional query is, in essence, concerned with only the propositional content of data.

Data carry information [6], [3]. A piece of data may carry information about another [1], [15]. It would seem desirable, and possible, to capitalize on this phenomenon so that more information can be derived than through conventional queries. Information systems are constructed for storing and providing information, and yet the notion of the 'information content' of an information system would appear elusive. In the field of databases, the information content of a database has been taken to be the instance of the database, and the information capacity of a data schema the collection of instances of the schema [9], [10], [11]. Another view on the relationship between information and data is that meaningful data, if truthful, is semantic information [8]. We argue that such views miss two fundamental points. One is a convincing conception of 'information content'. To equate data with information overlooks the fact that data in a database are merely raw material for bearing and conveying information. Information must be veridical (p.10 of [1]), that is, it must relate to a contingent truth [8], while for data there is no such requirement. The other is a framework for reasoning about information content to reveal hidden information. In addressing this problem, our purpose is to look at the relationships between information content, database structure and business rules, and thus discover how tacit business knowledge can be explicitly expressed and used.

In this paper, we present a novel framework for reasoning about the information content of data in a database. It helps a database system improve its capability of inference. This is achieved by introducing a variety of information sources, such as domain knowledge.
With the help of outside information sources, not only can more queries be answered, dealing with a wider range of information than the propositional content of the data within the database, but hidden information within the database system can also be discovered. The underlying thought of the framework is based on the concept of information content. Fred Dretske [3] first introduced the concept; Xu, Feng and Crowe [16] then extended Dretske's idea and gave a more detailed definition of the information content of states of affairs. Our work is based on the latter definition. The next section gives some fundamental concepts. The framework and a prototype implementation are then presented in the third section. The last section concludes the paper.

2. Basic Concepts

A number of fundamental notions are defined in this section; they are the cornerstones of this paper.

2.1 Information Content

Fred Dretske [3] gives the definition of information content as follows: "A state of affairs contains information about X to just that extent to which a suitably placed observer could learn something about X by consulting it." He then formalizes the above as: "Information Content: A signal r carries the information that s is F = The conditional probability of s's being F, given r (and k), is 1 (but, given k alone, less than 1)." Note that k stands for prior knowledge about the information source s. Here is an example: that John is awarded a grade 'A' for his Programming course contains the information that he has gained 70% or above for that course. This definition will be used as a cornerstone of the theoretical foundation of the framework.

2.2 Random Events

Following [16], the definition above is based upon the notion of probability ([1], pp.14-18), and it is strongly connected with the notion of a random event. Thus, they first defined a random event as follows: "Let s be a selection process under a set C of conditions, O the set of possible outcomes of s, which are called states, and E the power set of O. X is a random event if $E \ni X$ and there is a probability of X, i.e., P(X)." For example, randomly selecting a student record from the Students table in a database, where the record turns out to be that of a particular student, is a random event. In addition to the definition above, they gave a definition of probability space to explain what is meant by 'probability distribution', as below: "Let s be a selection process under a set C of conditions, O the set of possible outcomes of s, E the power set of O and $E \ni X_i$ for $i = 1, \ldots, n$. $P_s$ is the probability space of the random events $X_i$ for $i = 1, \ldots, n$ if $P_s = \{P(X_1), P(X_2), \ldots, P(X_n)\}$ and $\sum P(X_i) = 1$."

2.3 Random Variables

A random variable is a variable that can hold one of a number of possible values at a time, and which of the values it holds is determined randomly. For example, as in the above example, the Students table contains attributes such as ID, Name and DOB. A random variable could be any one attribute, or a collection of attributes, of the Students table, in the sense that for a randomly chosen tuple the value of its ID cannot be pre-determined and can only be one of all the possible values for ID.
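Returning to the grading example of Section 2.1, Dretske's formalization can be written compactly (our rendering, with k the prior knowledge about the source):

$$P(\text{mark} \geq 70\% \mid \text{grade} = \text{A},\, k) = 1, \qquad \text{while} \qquad P(\text{mark} \geq 70\% \mid k) < 1.$$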
2.4 Particulars of Random Events

Furthermore, Xu, Feng and Crowe [16] point out that, even though Dretske's definition is plausible, it overlooks the role that individual events play in our looking at the information content of a state of affairs. To amend this, they put forward a definition of particulars of a random event as follows: "Let s be a selection process under a set C of conditions, X a random event concerning s, and $X_i$ an instance of s. $X_i$ is a particular of X if $X_i$ is in a state $\Omega$, written $\Omega = \text{state}(X_i)$, and $X \ni \Omega$." As in the example above, to select a student record from the Students table is a random variable, the record happening to be John's is a random event, and one occurrence of John's record is a particular of the random event.

2.5 'Information Content Inclusion' Relations

The term 'Information Content Inclusion' Relation was first put forward by Feng in 1998 [5]. It was defined as follows: if the particulars of random event Y are in the information content of the particulars of random event X, then we say that 'random event Y is in the information content of random event X', and such a relationship between X and Y is called the 'information content inclusion relation', IIR for short. In addition, Xu, Feng and Crowe [16] clarify four types of IIR and their sources, shown in the table below:

<table>
<thead>
<tr>
<th>Relation: information content of X includes Y</th>
<th>Sources</th>
</tr>
</thead>
<tbody>
<tr>
<td>X, Y: both database random events</td>
<td>Syntactic relations between data constructs and data values</td>
</tr>
<tr>
<td>X: a database random event; Y: a real world random event</td>
<td>Semantic values and information content of data</td>
</tr>
<tr>
<td>X: a real world random event; Y: a database random event</td>
<td>Rules and processes of database design and database operations</td>
</tr>
<tr>
<td>X, Y: both real world random events</td>
<td>Relations between real world objects, business rules</td>
</tr>
</tbody>
</table>

2.5.1 IIR Rules

Xu, Feng and Crowe [16] identify five inference rules for reasoning about IIR, with proofs of the soundness and completeness of the rules. The rules are:

Reflexivity: if $Y \subseteq X$, then $X \rightarrow Y$

This rule means that if random event Y is contained in random event X, then the information content of X includes Y, which is also denoted IIR(X, Y). The rest of the rules can be interpreted similarly.

Augmentation: if $X \rightarrow Y$, then $XZ \rightarrow YZ$

Transitivity: if $X \rightarrow Y$ and $Y \rightarrow Z$, then $X \rightarrow Z$

Union: if $X \rightarrow Y$ and $X \rightarrow Z$, then $X \rightarrow YZ$

Decomposition: if $X \rightarrow YZ$, then $X \rightarrow Y$ and $X \rightarrow Z$

2.5.2 Original IIR

Original IIR are those that are identified by applying the IIR definition directly to a variety of sources, such as the real world, database systems and domain knowledge; they are not those derivable by using the inference rules on known IIR. For example, referential integrity is a kind of constraint in a relational database from which original IIR can be derived.
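As a flavour of the translation, a foreign-key constraint can be read off directly as an original IIR between database random events. The sketch below is our illustration, in Prolog; iir(X, Y) reads 'the information content of X includes Y', and the table and attribute names anticipate the example schema of Section 4.1.2.

```
% Original IIR derived from referential integrity: if Enrolment.sid
% references Students.sid, then the random event "Enrolment.sid = V"
% carries the information that "Students.sid = V", i.e. the
% referenced tuple exists.
iir(enrolment_sid(V), students_sid(V)).

% Original IIR derived from a business rule outside the database
% (an example of the fourth type in the table above).
iir(stmajor(history), favourite_sport(swimming)).
```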
2.5.3 Differences between Functional Dependencies and IIR

The inference rules for IIR may look similar to those for Functional Dependencies, but there are several differences between them. Xu, Feng and Crowe [16] give a table, shown below, which summarizes the differences:

<table>
<thead>
<tr>
<th></th>
<th>Functional Dependencies</th>
<th>IIR</th>
</tr>
</thead>
<tbody>
<tr>
<td>Objects concerned</td>
<td>Attributes in a relation</td>
<td>Random events – members of the power set of outcomes of a selection process</td>
</tr>
<tr>
<td>Characterization of objects concerned (1)</td>
<td></td>
<td>Both random and certain ones are covered</td>
</tr>
<tr>
<td>Characterization of objects concerned (2)</td>
<td>Within a DB</td>
<td>DB and the real world – altogether four types</td>
</tr>
<tr>
<td>What it is based on</td>
<td>Syntactic characterization</td>
<td>Syntactic, semantic, norms, business rules…</td>
</tr>
<tr>
<td>Veridicality</td>
<td>N/A</td>
<td>The veridicality of event X is a necessary condition for X to be qualified as information being carried</td>
</tr>
</tbody>
</table>

3. IIR Closures

3.1 The Closure of a Set of IIR

Let F be a set of IIR. The closure of F (denoted F⁺) is the set of IIR implied by F; F ⊆ F⁺. If F = F⁺, F is called a complete set of IIR, in the sense that no more IIR can be derived from it by using the IIR rules.

3.2 The IIR Closure of a Random Event

All random events that are derivable by using the IIR inference rules on a given set of original IIR, and that are therefore in the information content of a given random event, constitute the IIR closure of that random event. For example, 'Student ID = B001' is a random event, and 'Student Address = 1 High Street' is in its information content. Likewise, 'Student Postcode = PA1 2BE' is in that of 'Student Address = 1 High Street'. Through Transitivity, 'Student Postcode = PA1 2BE' and 'Student Address = 1 High Street' together constitute the IIR closure of 'Student ID = B001'. Let x₁ denote 'Student ID = B001'; then we use x₁⁺ to denote the IIR closure of x₁.

3.3 IIR between Attributes and the IIR Closure of an Attribute

Let X be an attribute. X can be taken as a random variable, and by taking one of the values that X can possibly take, we may say that X contains a set of random events. In other words, X can be seen as the aggregation of all its random events. As a random event may have another in its information content through having an IIR with it, and the latter is contained in another random variable, two random variables may form a relationship between them based on IIR. If every random event of Y is in the information content of at least one random event of X, then we say that attribute Y is in the information content of attribute X, denoted IIR(X, Y). All such attributes Y that are logically implied by a given set of IIR, and can therefore be derived by using the IIR rules, constitute the IIR closure of X, denoted X⁺. That is, X⁺ denotes the set of all attributes such that, for every one of them, each of its contained random events has an IIR with at least one of X's random events, that is, the former is in the information content of the latter. For example, we would have IIR(Student ID, Student Address), which means that Student Address is in the information content of Student ID. By the IIR rules, we can get IIR(Student ID, Student Postcode). Therefore (Student ID)⁺ would include Student Address and Student Postcode, among others.
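The closure X⁺ can be computed by the usual fixpoint construction licensed by the inference rules. The following is a sketch in Prolog (ours; the actual prototype described in Section 4 uses PL/SQL), with original IIR given as Lhs-Rhs pairs of attribute lists:

```
% closure(+X, +IIRs, -Closure): the IIR closure of attribute set X
% under the original IIR in IIRs, each written Lhs-Rhs, e.g. the
% IIR "AB -> C" as [a,b]-[c].
closure(X, IIRs, Closure) :-
    sort(X, X1),                       % canonical, duplicate-free set
    step(X1, IIRs, X2),
    (   X2 == X1
    ->  Closure = X1                   % fixpoint: nothing new added
    ;   closure(X2, IIRs, Closure)
    ).

% One pass: add the right-hand side of every IIR whose left-hand
% side is already contained in the current set (Reflexivity,
% Transitivity, Union and Decomposition together justify this).
step(X, IIRs, X1) :-
    findall(A,
            ( member(Lhs-Rhs, IIRs),
              subset(Lhs, X),
              member(A, Rhs)
            ),
            New),
    append(X, New, All),
    sort(All, X1).
```

For instance, closure([student_id], [[student_id]-[student_address], [student_address]-[student_postcode]], C) yields C = [student_address, student_id, student_postcode], mirroring the Student ID example above.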
3.4 Derivation of Three Levels of Closures with Oracle

Our Oracle implementation of IIR reasoning derives three levels of closures, namely those between classes or tables, those between attributes, and those between values of attributes. The first two are closures between random variables, and the third between random events. That is to say, the first two closures are concerned with relationships between random variables, whereas IIR is a relationship between random events. The former relationship is similar to IIR, but it needs to be clarified. In the Oracle implementation, the IIR closure of class/table X contains all classes or tables that are implied by class/table X and that are inferable by using the IIR rules against a given set of IIR. We observe that the rules for random variables are similar to those for IIR, which apply only to random events. In this case, the IIR closure of table X is the set of classes/tables implied by X. Similarly, the IIR closure of an attribute (also a random variable) deals with the attributes of a table; for example, variable X's closure contains all attributes implied by variable X. The IIR closure of a random event, for a relational database, deals with data values. More precisely, a random event in a database in our formulation exists in the form of a combination of an attribute and a value; in other words, an attribute and a value construct a pair that is seen as a random event in a database. As a result, the IIR closure of random event X contains the set of all attribute-value pairs implied by random event X.

3.5 Why Compute IIR Closures

To compute F⁺ given F, we can instead compute X⁺ for all X, which is normally easier than computing F⁺ directly. Once the closure of X is known, determining whether IIR(X, Y) holds given F (i.e., whether it is implied by F) is a matter of verifying whether Y is in the closure of X. If so, the IIR holds. Otherwise, as far as the given F goes, the IIR does not exist.

3.6 A Flow Chart of the Basic Concepts

With all the above basic notions in mind, a flow chart can be constructed which depicts how the basic concepts are linked with one another.

Fig. 1 A flow chart of the basic concepts

This diagram shows that closures at the three levels capture and formalize the information content of data in a database. Data in a database are now formalized as random events and random variables, and the 'information content inclusion' relation (IIR) can therefore be identified. Thus, the IIR closure of X is the information content of X, as far as the data in the database and a given set of identifiable IIR between data go.

4. A System for Reasoning about Information Content of Data in a Database

With the idea of IIR and the associated notions, we have created a system for reasoning about the information content of data in a database, taking into account the ideas of Wang and Feng [14] and Essaar [4]. Intuitively, the system works like this. To select a student from a Students table is seen as a random event, and the term 'particular' is used to describe a single occurrence of a random event. For example, student John's record happens to be selected from the Students table, and this particular occurrence of the selection of John's record is a 'particular' of the random event that the record happens to be John's. A random variable may be seen as an aggregation of random events. In a table, an attribute can be seen as a random variable because it normally contains many random events. For example, Student Name is a random variable, which contains Student Name being John and Student Name being Herman, among others.
The IIR closure of Student ID being B001, for example, contains Student Name being 'John', Student Major being 'history' and Class Name being 'BD445'. If a user queries about the class name of John, the query can be answered by searching this IIR closure of Student ID being B001. That is, once IIR closures are known, queries can be posed on these closures. In this way, some information that cannot be found by conventional queries may be discovered.

Fig. 2 A System for Reasoning about Information Content of Data in a Database

As depicted in Fig. 2, the system consists of three main parts. The upper part is where users pose queries to the Oracle implementation of the system. The middle part is the Oracle implementation of information content reasoning. The lower part includes a variety of sources of original IIR, mainly domain knowledge and the syntactic and semantic properties of the database that are inherent to it. The form of the queries is conventional SQL.

Most of the programming effort went into computing the IIR closures. The core algorithm is based on the IIR rules. Original IIR were then added into the unit. This is one of the most difficult parts of the programming, as whenever more original IIR are discovered, more computation capability has to be added to the program so that the closures can continue to grow accordingly. The output of the unit is simple: three kinds of closures are provided by the system, separately or together, depending on the needs of the user. Importantly, these closures contain the information content of random events. User queries can then be posed on these closures. Thus, more information can be discovered through queries.

The process of discovering original IIR can be extremely hard. There is a variety of sources that could potentially contain a huge amount of original IIR [13]. The two main sources, though, are domain knowledge and the properties of the database per se. The latter can be further divided into the semantic and syntactic levels. The syntactic level includes many constraints that can be directly translated into original IIR, such as data dependencies, integrity rules and the cardinality ratios between tables.

4.1 Oracle Implementation

The Oracle implementation of the prototype of the framework was carried out in two stages. In the first, a simplified implementation of the prototype was built to test whether the whole idea could actually work on the Oracle DBMS. Then a more functionally comprehensive implementation was built to tackle example queries. Both implementations were developed using PL/SQL [12].

4.1.1 The Preliminary Implementation

In this stage, the IIR rules were embedded in a computing unit of the implementation. These rules constitute the core of an inference engine. The whole process of program development for the computing unit had been completed in advance. For the prototype, we only added some example original IIR into the computing unit, and the outcomes can be seen as an abstract of closures. The following example shows the detail of the implementation of the prototype. We assume that the following IIR are given: F = {AB → C, C → A, BC → D, ACD → B, D → EG, BE → C, CG → BD, CE → AG}, in which X → Y is a simplified version of IIR(X, Y), meaning that the information content of X includes Y.
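Before looking at the Oracle output, the same computation can be checked against the closure/3 sketch given after Section 3.3 (ours, not the prototype's PL/SQL):

```
?- F = [ [a,b]-[c], [c]-[a], [b,c]-[d], [a,c,d]-[b],
         [d]-[e,g], [b,e]-[c], [c,g]-[b,d], [c,e]-[a,g] ],
   closure([f,e,b], F, C).
C = [a, b, c, d, e, f, g].
```

The first application of the rules adds only c (via BE → C), which corresponds to the intermediate closure FEBC reported for the second iteration in Fig. 3 below; further passes then reach the fixpoint.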
Supposing we wanted to know the IIR closures of all combinations of attributes based on the given IIR, these IIR were imported into the computing unit. The computing unit then computed the IIR closures, which were stored in a table in an Oracle database. We then used the SELECT command to display the result, as follows:

In Fig. 3, XNO counts the combinations of attributes. The combinations per se are stored under the column named XDET. XCNO records the number of times an attribute combination has been computed: 0 means the first round of computing, 1 the second, and so forth. For example, attribute combination FEB in No. 66 has 0 in XCNO and FEB in XDEP, which means that the closure of FEB is FEB after the first round. Similarly, FEB in No. 67 has 1 in XCNO and FEBC in XDEP, which means that the IIR closure of FEB is FEBC after two iterations.

As shown above, all the closures appear on the right-hand side. Each attribute combination has at least one closure entry, namely itself. Some combinations have more than one entry because the computing unit detects IIR implied by the attributes within the combination; a new closure is then produced and stored in the table. The process repeats until the unit cannot detect any further related IIR for the attributes. From the programming perspective, the program stops once no original IIR can add a new attribute to the closure.

4.1.2 The Comprehensive Implementation

In addition to the core computing unit, the comprehensive implementation of our prototype integrates many more original IIR, based on a realistic example. Suppose the following three tables are stored in an Oracle database:

- Students (sid, sname, stmajor, yr, age)
- Class (cname, time, room)
- Enrolment (sid, cname)

In the tables above, sid is the primary key of Students, cname is the primary key of Class, sid of Enrolment is a foreign key referencing Students, and cname of Enrolment is a foreign key referencing Class. Sid and cname together form the composite primary key of Enrolment. The tables were populated with sample records.

As Fig. 2 shows, referential integrity is one of the sources from which original IIR are derived. Thus, the constraints above were translated and integrated into the computing unit, in addition to the IIR rules. Suppose that students in different subjects fancy different sports; for example, history students fancy swimming, while geology students like diving. These could be ad hoc business rules of domain knowledge from which original IIR can also be derived, and they are integrated into the computing unit. As a result, the size of the IIR closures expands accordingly.

We now give an example. Suppose we want to know the IIR closure of 'SID 150', i.e., Student ID being 150. 'SID' and '150' would be input into the computing unit, and the IIR closure would then be presented on the screen.

As Fig. 4 shows, the closure is made up of two parts. One part contains the data values 'Parks geology So 21 BA200 TTH9 SC110 diving', etc., shown in the top half of Fig. 4. The other part, shown in the bottom half of Fig. 4, contains the attributes to which the values belong and the tables to which the attributes belong.

Thus, the above closure of 'SID 150' can be read as having CNAME = BA200, SNAME = Parks, STMAJOR = geology, YR = So, AGE = 21, TIME = TTH9, ROOM = SC110, and Favourite Sport = diving. The closure above was based on the input of 'SID 150' only. In other words, the information content of 'SID 150' is CNAME being BA200, SNAME being Parks, STMAJOR being geology, YR being So, AGE being 21, TIME being TTH9, ROOM being SC110, and Favourite Sport being diving. All the information except 'diving' is inferred by using the records of the tables stored in the Oracle database, the IIR rules and the original IIR derived from referential integrity. 'Favourite Sport is diving' is inferred by using original IIR derived from business rules outside the Oracle database, in addition to the IIR rules. This example shows that the closure of 'SID 150' contains not only information from within the database, but also information from outside it, such as business rules. Once users' queries are posed on the closure, more information can be provided to them.
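To connect this example with the closure/3 sketch above, the original IIR in play can be written down at the attribute level as follows. This is our illustration: the pair [sid]-[cname] only holds for the sample data, where each student is enrolled in a single class, and [stmajor]-[favourite_sport] encodes the ad hoc business rule from outside the database.

```
% Attribute-level original IIR for the Students/Class/Enrolment
% schema, as Lhs-Rhs pairs usable with closure/3.
original_iir([ [sid]-[sname, stmajor, yr, age],  % primary key of Students
               [cname]-[time, room],             % primary key of Class
               [sid]-[cname],                    % via Enrolment (sample data)
               [stmajor]-[favourite_sport]       % business rule
             ]).

% ?- original_iir(F), closure([sid], F, C).
% C = [age, cname, favourite_sport, room, sid, sname, stmajor, time, yr].
```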
Thus, the above closure of ‘SID 150’ can be read as having CNAME = BA200, SNAME = Parks, STMAJOR = geology, YR = So, AGE = 21, TIME = TTH9, ROOM = SC110, and Favourite Sport = diving. The closure above was based on the input of ‘SID 150’ only. In other words, the information content of ‘SID 150’ is CNAME BA200, SNAME Parks, STMAJOR geology, YR So, AGE 21, TIME TTH9, ROOM SC110, and Favourite Sport being diving. All the information except ‘diving’ is inferred by using records of the tables stored in the Oracle database, the IIR rules and the original IIR derived from referential integrity. ‘Favourite Sport is diving’ is inferred by using original IIR derived from business rules outside the Oracle database, in addition to the IIR rules. This example shows that the closure of ‘SID 150’ contains not only information within the database, but also information from outside the database, such as business rules. Once users’ queries are posed on the closure, more information will be provided to them.

5 Future Work

With the IIR rules, we have discussed the relation regarding information content (i.e., information carrying) between random events. Such a relation at a higher level, i.e., that between random variables, is still not clear. How the relations on different levels are connected also deserves further investigation. So far, the process of identifying original IIR has been manual. Ideally, however, the system could identify original IIR automatically depending on the needs of the user. More hidden information within the database should be discoverable as the original IIR derived from the database itself and from outside sources increase. Original IIR derived from sources such as ontologies have not been implemented yet. The programming structure of the computing unit has not been examined in terms of efficiency and robustness. In addition, a graphical interface should be integrated into the programme to help users pose queries directly. Distributed systems may be taken into account as well in the future.

6 Conclusions

In this paper, we have proposed a novel approach to reasoning about the information content of databases. We gave a set of basic concepts and described a prototype of a system for this purpose. A number of examples were used to test our system. With information sources outside the database imported into the system, the information content of a random event (data values) within the database expanded dramatically. Users can make the most of this information content by posing queries. Thus, more information can be discovered than with conventional queries. Technically speaking, the growth of a random event’s closure rests on the growth of the original IIR, and identifying original IIR can be extremely hard because of the wide range of sources outside the database. However, once original IIR have been identified and integrated into the computing unit of our system, the system provides a powerful engine for users to query a database.

References:
{"Source-Url": "http://www.wseas.us/e-library/transactions/information/2009/31-414.pdf", "len_cl100k_base": 6033, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 27758, "total-output-tokens": 7253, "length": "2e12", "weborganizer": {"__label__adult": 0.0004432201385498047, "__label__art_design": 0.00048065185546875, "__label__crime_law": 0.0007100105285644531, "__label__education_jobs": 0.007293701171875, "__label__entertainment": 0.0001285076141357422, "__label__fashion_beauty": 0.00028252601623535156, "__label__finance_business": 0.0010013580322265625, "__label__food_dining": 0.0006990432739257812, "__label__games": 0.0006990432739257812, "__label__hardware": 0.0009312629699707032, "__label__health": 0.0017271041870117188, "__label__history": 0.00048732757568359375, "__label__home_hobbies": 0.0002008676528930664, "__label__industrial": 0.0008420944213867188, "__label__literature": 0.0013742446899414062, "__label__politics": 0.0004336833953857422, "__label__religion": 0.0006918907165527344, "__label__science_tech": 0.425537109375, "__label__social_life": 0.0002720355987548828, "__label__software": 0.022857666015625, "__label__software_dev": 0.53173828125, "__label__sports_fitness": 0.00027370452880859375, "__label__transportation": 0.0008211135864257812, "__label__travel": 0.0002353191375732422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29303, 0.02828]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29303, 0.81563]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29303, 0.91924]], "google_gemma-3-12b-it_contains_pii": [[0, 2752, false], [2752, 6360, null], [6360, 9914, null], [9914, 13122, null], [13122, 15990, null], [15990, 17310, null], [17310, 20649, null], [20649, 23408, null], [23408, 26501, null], [26501, 29303, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2752, true], [2752, 6360, null], [6360, 9914, null], [9914, 13122, null], [13122, 15990, null], [15990, 17310, null], [17310, 20649, null], [20649, 23408, null], [23408, 26501, null], [26501, 29303, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29303, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29303, null]], "pdf_page_numbers": [[0, 2752, 1], [2752, 6360, 2], [6360, 9914, 3], [9914, 13122, 4], [13122, 15990, 5], [15990, 17310, 6], [17310, 20649, 7], [20649, 23408, 8], [23408, 26501, 9], [26501, 29303, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29303, 0.0942]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
26942608fa34c1f6e4faa1af1327f70c1f2cd9c3
```verilog
module adder ( input [3:0] A, B, output cout, output [3:0] S );
  wire c0, c1, c2;
  FA fa0 ( A[0], B[0], 1'b0, c0,   S[0] );
  FA fa1 ( A[1], B[1], c0,   c1,   S[1] );
  FA fa2 ( A[2], B[2], c1,   c2,   S[2] );
  FA fa3 ( A[3], B[3], c2,   cout, S[3] );
endmodule
```

What is Verilog?

In this class and in the real world, Verilog is a specification language, not a programming language.
- Draw your schematic and state machines and then transcribe them into Verilog.
- When you sit down to write Verilog you should know exactly what you are implementing.

We are constraining you to a subset of the language for two reasons:
- These are the parts that people use to design real processors.
- Steer you clear of problematic constructs that lead to bad design.

Verilog Fundamentals
- What is Verilog?
- Data types
- Structural Verilog
- RTL Verilog
- Combinational Logic
- Sequential Logic

```verilog
module adder(input [3:0] A, B, output cout, output [3:0] S );
  wire c0, c1, c2;
  FA fa0( A[0], B[0], 1'b0, c0, S[0] );
  FA fa1( A[1], B[1], c0, c1, S[1] );
  FA fa2( A[2], B[2], c1, c2, S[2] );
  FA fa3( A[3], B[3], c2, cout, S[3] );
endmodule
```

Bit-vector is the only data type in Verilog. A bit can take on one of four values:

<table> <thead> <tr> <th>Value</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>Logic zero</td> </tr> <tr> <td>1</td> <td>Logic one</td> </tr> <tr> <td>X</td> <td>Unknown logic value</td> </tr> <tr> <td>Z</td> <td>High impedance, floating</td> </tr> </tbody> </table>

An X bit might be a 0, 1, Z, or in transition. We can set bits to be X in situations where we don't care what the value is. This can help catch bugs and improve synthesis quality. In the simulation waveform viewer, unknown signals are RED. There should be no red after reset.

"wire" is used to denote a hardware net:

```verilog
wire [15:0] instruction;
wire [15:0] memory_req;
wire [7:0]  small_net;
```

Absolutely no type safety when connecting nets!

Bit literals, e.g. 4'b10_11: the leading decimal number gives the size in bits, the letter gives the base format (d, b, o, h), and underscores are ignored. We'll learn how to actually assign literals to nets a little later.

Binary literals
- 8'b0000_0000
- 8'b0xx0_1xx1

Hexadecimal literals
- 32'h0a34_def1
- 16'haxxx

Decimal literals
- 32'd42

Verilog Fundamentals
- History of hardware design languages
- Data types
- Structural Verilog
- RTL Verilog

```verilog
module adder (input [3:0] A, B, output cout, output [3:0] S);
  wire c0, c1, c2;
  FA fa0 ( A[0], B[0], 1'b0, c0, S[0] );
  FA fa1 ( A[1], B[1], c0, c1, S[1] );
  FA fa2 ( A[2], B[2], c1, c2, S[2] );
  FA fa3 ( A[3], B[3], c2, cout, S[3] );
endmodule
```

Note: Our Verilog Subset
- Verilog is a big language with many features not concerned with synthesizing hardware.
- The code you write for your processor should only contain the language structures discussed in these slides.
- Anything else is not synthesizable, although it will simulate fine.
- You *MUST* follow the course coding standard; a document will be released soon.
- We will be mixing in some synthesizable SystemVerilog later in the course to improve the maintainability of your code.

A Verilog module has a name and a port list:

```verilog
module adder( input [3:0] a_i,
              input [3:0] b_i,
              output cy_o,
              output [3:0] sum_o );
  // HDL modeling of
  // adder functionality
endmodule
```

Ports must have a direction and a bitwidth. In this class we use _i to denote in port variables and _o to denote out port variables. Note the semicolon at the end of the port list!
A module can instantiate other modules:

```verilog
module adder( input [3:0] a_i, b_i, output cy_o, output [3:0] sum_o );
  wire c0, c1, c2;
  FA fa0( ... );
  FA fa1( ... );
  FA fa2( ... );
  FA fa3( ... );
endmodule

module FA( input a_i, b_i, cy_i, output cy_o, sum_o );
  // HDL modeling of 1 bit
  // full adder functionality
endmodule
```

```verilog
module adder( input [3:0] a_i, b_i, output cy_o, output [3:0] sum_o );
  wire c0, c1, c2;
  FA fa0( a_i[0], b_i[0], 1'b0, c0, sum_o[0] );
  FA fa1( a_i[1], b_i[1], c0, c1, sum_o[1] );
  FA fa2( a_i[2], b_i[2], c1, c2, sum_o[2] );
  FA fa3( a_i[3], b_i[3], c2, cy_o, sum_o[3] );
endmodule
```

This class's style standard: connect ports by name and not by position. Connecting ports by ordered list is compact but bug prone:

```verilog
FA fa0( a_i[0], b_i[0], 1'b0, c0, sum_o[0] );
```

Connecting by name is less compact but leads to fewer bugs. This is how you should do it in this class. You should also line up like parameters so it is easy to check correctness.

```verilog
FA fa0
  ( .a_i  (a_i[0])
  , .b_i  (b_i[0])
  , .cy_i (1'b0)
  , .cy_o (c0)
  , .sum_o(sum_o[0])
  );
```

Connecting ports by name yields clearer and less buggy code. In the slides, we may do it by position for space. But you should do it by name and not position.

Verilog Fundamentals
- History of hardware design languages
- Data types
- Structural Verilog
- RTL
- Combinational
- Sequential

```verilog
module adder( input [3:0] A, B, output cout, output [3:0] S );
  wire c0, c1, c2;
  FA fa0( A[0], B[0], 1'b0, c0, S[0] );
  FA fa1( A[1], B[1], c0, c1, S[1] );
  FA fa2( A[2], B[2], c1, c2, S[2] );
  FA fa3( A[3], B[3], c2, cout, S[3] );
endmodule
```

A module's behavior can be described in many different ways, but from the outside it should not matter.

Example: mux4

```verilog
module mux4( input a_i, b_i, c_i, d_i, input [1:0] sel_i, output z_o );
  wire t0, t1;
  assign z_o = ~( (t0 | sel_i[0]) & (t1 | ~sel_i[0]) );
  assign t1  = ~( (sel_i[1] & d_i) | (~sel_i[1] & b_i) );
  assign t0  = ~( (sel_i[1] & c_i) | (~sel_i[1] & a_i) );
endmodule
```

mux4: Using continuous assignments to generate combinational logic. The order of these continuous assignment statements in the source code does not matter. But it does affect readability! They essentially happen in parallel; also, any time an input is changed, each line is automatically re-evaluated. (Be careful not to create cycles!)

mux4: Using ? :

```verilog
// Four input multiplexer
module mux4( input a_i, b_i, c_i, d_i, input [1:0] sel_i, output z_o );
  assign z_o = ( sel_i == 0 ) ? a_i
             : ( sel_i == 1 ) ? b_i
             : ( sel_i == 2 ) ? c_i
             : ( sel_i == 3 ) ? d_i
             : 1'bx;
endmodule
```

Not required for synthesis, but helps in simulation: if sel_i is undefined we want to propagate that information in the waveform viewer.

mux4: Using a combinational "always_comb" or "always @(*)" block

```verilog
module mux4( input a_i, b_i, c_i, d_i, input [1:0] sel_i, output reg z_o );
  reg t0, t1;
  always_comb // SystemVerilog; equivalent to always @(*)
  begin
    t0  = (sel_i[1] & c_i) | (~sel_i[1] & a_i);
    t1  = ~( (sel_i[1] & d_i) | (~sel_i[1] & b_i) );
    t0  = ~t0;
    z_o = ~( (t0 | sel_i[0]) & (t1 | ~sel_i[0]) );
  end
endmodule
```

Within the always @(*) begin/end block, the effects of statements appear to execute sequentially; outside the block, only the last assignment to each variable is visible, and it appears a short time after any input is changed. For instance, the second t0 line uses t0 from the first.
"Always @(*)" permits more advanced combinational idioms:

```verilog
module mux4( input a_i, b_i, c_i, d_i, input [1:0] sel_i, output reg z_o );
  always_comb begin
    if      (sel_i == 2'd0) z_o = a_i;
    else if (sel_i == 2'd1) z_o = b_i;
    else if (sel_i == 2'd2) z_o = c_i;
    else if (sel_i == 2'd3) z_o = d_i;
    else                    z_o = 1'bx;
  end
endmodule
```

What happens if the case statement is not complete?

```verilog
module mux3( input a_i, b_i, c_i, input [1:0] sel_i, output reg z_o );
  always @( * ) begin
    case ( sel_i )
      2'd0 : z_o = a_i;
      2'd1 : z_o = b_i;
      2'd2 : z_o = c_i;
    endcase
  end
endmodule
```

What have we created? If sel_i = 3, the mux will output the previous value: we have inferred a latch!

What happens if the case statement is not complete?

```verilog
module mux3( input a_i, b_i, c_i, input [1:0] sel_i, output reg z_o );
  always @( * ) begin
    case ( sel_i )
      2'd0    : z_o = a_i;
      2'd1    : z_o = b_i;
      2'd2    : z_o = c_i;
      default : z_o = 1'bx;
    endcase
  end
endmodule
```

We CAN prevent creating a latch with a default statement.

Parameterized mux4

```verilog
module mux4 #( parameter WIDTH = 1 )
  ( input [WIDTH-1:0] a_i, b_i, c_i, d_i,
    input [1:0] sel_i,
    output [WIDTH-1:0] z_o );
  wire [WIDTH-1:0] t0, t1;
  assign t0  = (sel_i[1] ? c_i : a_i);
  assign t1  = (sel_i[1] ? d_i : b_i);
  assign z_o = (sel_i[0] ? t1 : t0);
endmodule
```

Parameterization is a good practice for reusable modules. Writing a muxN is challenging.

Instantiation:

```verilog
mux4 #(32) alu_mux
  ( .a_i  (op1)
  , .b_i  (op2)
  , .c_i  (op3)
  , .d_i  (op4)
  , .sel_i(alu_mux_sel)
  , .z_o  (alu_mux_out)
  );
```

Verilog Fundamentals
- History of hardware design languages
- Data types
- Structural Verilog
- RTL
- Combinational
- Sequential

```verilog
module adder(input [3:0] A, B, output cout, output [3:0] S);
  wire c0, c1, c2;
  FA fa0( A[0], B[0], 1'b0, c0, S[0] );
  FA fa1( A[1], B[1], c0, c1, S[1] );
  FA fa2( A[2], B[2], c1, c2, S[2] );
  FA fa3( A[3], B[3], c2, cout, S[3] );
endmodule
```

Sequential Logic: Creating a flip flop

```verilog
reg q_r, q_next;
always_ff @(posedge clk)
  q_r <= q_next;
```

1) The keyword `reg` confusingly enough does not have much to do with registers; it's just used to indicate a wire that is driven from an `always_ff` or `always_comb` block. So this line simply creates two wires, one called `q_r` and the other called `q_next`.
2) The `always_ff` keyword indicates our intent to create registers; you can use the `always` keyword instead, but then the synthesizer has to guess!
3) `@(posedge clk)` indicates that we want these registers to be triggered on the positive edge of the `clk` clock signal.
4) Combined with 2) and 3), the `<=` creates a register whose input is wired to `q_next` and whose output is wired to `q_r`.

Note: always use `<=` with `always_ff` and `=` with `always_comb`.

Sequential Logic: flip-flop variants

```verilog
module FF0 ( input clk, input d_i, output reg q_r_o );
  always_ff @(posedge clk) begin
    q_r_o <= d_i;
  end
endmodule

module FF ( input clk, input d_i, input en_i, output reg q_r_o );
  always_ff @(posedge clk) begin
    if (en_i)
      q_r_o <= d_i;
  end
endmodule
```

Flip-flops with reset:

```verilog
always_ff @(posedge clk) begin
  if (reset)
    Q <= 0;
  else if (enable)
    Q <= D;
end
```

synchronous reset

Register (i.e. a vector of flip-flops):

```verilog
module register #( parameter WIDTH = 1 )
  ( input clk,
    input [WIDTH-1:0] d_i,
    input en_i,
    output reg [WIDTH-1:0] q_r_o );
  always_ff @(posedge clk) begin
    if (en_i)
      q_r_o <= d_i;
  end
endmodule
```

Implementing Wider Registers

```verilog
module register2 ( input clk, input [1:0] d_i, input en_i, output reg [1:0] q_r_o );
  always_ff @(posedge clk) begin
    if (en_i)
      q_r_o <= d_i;
  end
endmodule
```

Do they behave the same?
Yes.

Syntactic Sugar: `always_ff` allows you to combine combinational and sequential logic; *but this can be confusing.*

**more clear**

```verilog
module accum ( input clk, input data_i, input en_i, output [3:0] sum_o );
  reg [3:0] sum_r, sum_next;
  assign sum_o = sum_r;
  always_comb begin
    sum_next = sum_r;
    if (en_i)
      sum_next = sum_r + data_i;
  end
  always_ff @(posedge clk)
    sum_r <= sum_next;
endmodule
```

**shorter**

```verilog
module accum ( input clk, input data_i, input en_i, output [3:0] sum_o );
  reg [3:0] sum_r;
  assign sum_o = sum_r;
  always_ff @(posedge clk) begin
    if (en_i)
      sum_r <= sum_r + data_i;
  end
endmodule
```

Syntactic Sugar: You can always convert an `always_ff` that combines combinational and sequential logic into two separate `always_ff` and `always_comb` blocks.

**shorter**

```verilog
module accum ( input clk, input data_i, input en_i, output [3:0] sum_o );
  reg [3:0] sum_r;
  assign sum_o = sum_r;
  always_ff @(posedge clk) begin
    if (en_i)
      sum_r <= sum_r + data_i;
  end
endmodule
```

**more clear**

```verilog
module accum ( input clk, input data_i, input en_i, output [3:0] sum_o );
  reg [3:0] sum_r, sum_next;
  assign sum_o = sum_r;
  always_comb begin
    sum_next = sum_r;
    if (en_i)
      sum_next = sum_r + data_i;
  end
  always_ff @(posedge clk)
    sum_r <= sum_next;
endmodule
```

When in doubt, use the version on the right. To go from the left-hand version to the right one:

1. For each register `xxx_r`, introduce a temporary variable that holds the input to each register (e.g. `xxx_next`).
2. Extract the combinational part of the `always_ff` block into an `always_comb` block:
   a. change `xxx_r <=` to `xxx_next =`
   b. add `xxx_next = xxx_r;` to the beginning of the block for the default case
3. Extract the sequential part of the `always_ff` by creating a separate `always_ff` that does `xxx_r <= xxx_next;`

Bit Manipulations

```verilog
wire [15:0] x;
wire [31:0] x_sext;
wire [31:0] hi, lo;
wire [63:0] hilo;

// concatenation
assign hilo = { hi, lo };
assign { hi, lo } = { 32'b0, 32'b1 };

// replication: 16 copies of x[15] followed by x[15:0]
assign x_sext = { {16{ x[15] }}, x[15:0] };
```
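One reset style the slides above do not show is the asynchronous reset. The sketch below is our own addition (the module name ff_ar is made up) and assumes an active-high reset; because reset is in the sensitivity list, the flip-flop clears as soon as reset asserts, rather than at the next clock edge as in the synchronous version shown earlier:

```verilog
// Hypothetical example, not from the slides: a flip-flop with an
// asynchronous, active-high reset. Putting reset in the sensitivity
// list means q_r_o clears immediately on reset, without waiting
// for a rising clock edge.
module ff_ar ( input clk, input reset, input d_i, output reg q_r_o );
  always_ff @( posedge clk or posedge reset ) begin
    if ( reset ) q_r_o <= 1'b0;
    else         q_r_o <= d_i;
  end
endmodule
```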
{"Source-Url": "http://cseweb.ucsd.edu/classes/sp11/cse141L/pdf/01/SV_Part_1.pdf", "len_cl100k_base": 4432, "olmocr-version": "0.1.53", "pdf-total-pages": 30, "total-fallback-pages": 0, "total-input-tokens": 52032, "total-output-tokens": 5776, "length": "2e12", "weborganizer": {"__label__adult": 0.000598907470703125, "__label__art_design": 0.0008215904235839844, "__label__crime_law": 0.0003733634948730469, "__label__education_jobs": 0.001667022705078125, "__label__entertainment": 0.00011360645294189452, "__label__fashion_beauty": 0.0003025531768798828, "__label__finance_business": 0.0001735687255859375, "__label__food_dining": 0.0005826950073242188, "__label__games": 0.0010614395141601562, "__label__hardware": 0.037322998046875, "__label__health": 0.0006084442138671875, "__label__history": 0.0003383159637451172, "__label__home_hobbies": 0.0005092620849609375, "__label__industrial": 0.0020008087158203125, "__label__literature": 0.00021648406982421875, "__label__politics": 0.000331878662109375, "__label__religion": 0.00087738037109375, "__label__science_tech": 0.04107666015625, "__label__social_life": 0.00010186433792114258, "__label__software": 0.0078582763671875, "__label__software_dev": 0.90087890625, "__label__sports_fitness": 0.0006346702575683594, "__label__transportation": 0.0010623931884765625, "__label__travel": 0.0002486705780029297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13273, 0.03762]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13273, 0.75362]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13273, 0.65215]], "google_gemma-3-12b-it_contains_pii": [[0, 245, false], [245, 735, null], [735, 1134, null], [1134, 1789, null], [1789, 1950, null], [1950, 2254, null], [2254, 2625, null], [2625, 3121, null], [3121, 3500, null], [3500, 3816, null], [3816, 4105, null], [4105, 4746, null], [4746, 5151, null], [5151, 5265, null], [5265, 5873, null], [5873, 6308, null], [6308, 7043, null], [7043, 7409, null], [7409, 7793, null], [7793, 8181, null], [8181, 8721, null], [8721, 9145, null], [9145, 9983, null], [9983, 10285, null], [10285, 10445, null], [10445, 10707, null], [10707, 10937, null], [10937, 11648, null], [11648, 12994, null], [12994, 13273, null]], "google_gemma-3-12b-it_is_public_document": [[0, 245, true], [245, 735, null], [735, 1134, null], [1134, 1789, null], [1789, 1950, null], [1950, 2254, null], [2254, 2625, null], [2625, 3121, null], [3121, 3500, null], [3500, 3816, null], [3816, 4105, null], [4105, 4746, null], [4746, 5151, null], [5151, 5265, null], [5265, 5873, null], [5873, 6308, null], [6308, 7043, null], [7043, 7409, null], [7409, 7793, null], [7793, 8181, null], [8181, 8721, null], [8721, 9145, null], [9145, 9983, null], [9983, 10285, null], [10285, 10445, null], [10445, 10707, null], [10707, 10937, null], [10937, 11648, null], [11648, 12994, null], [12994, 13273, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13273, null]], 
"google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13273, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13273, null]], "pdf_page_numbers": [[0, 245, 1], [245, 735, 2], [735, 1134, 3], [1134, 1789, 4], [1789, 1950, 5], [1950, 2254, 6], [2254, 2625, 7], [2625, 3121, 8], [3121, 3500, 9], [3500, 3816, 10], [3816, 4105, 11], [4105, 4746, 12], [4746, 5151, 13], [5151, 5265, 14], [5265, 5873, 15], [5873, 6308, 16], [6308, 7043, 17], [7043, 7409, 18], [7409, 7793, 19], [7793, 8181, 20], [8181, 8721, 21], [8721, 9145, 22], [9145, 9983, 23], [9983, 10285, 24], [10285, 10445, 25], [10445, 10707, 26], [10707, 10937, 27], [10937, 11648, 28], [11648, 12994, 29], [12994, 13273, 30]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13273, 0.01367]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
f566c4b91425aac7d1b354f0011138295bf0ad4a
[REMOVED]
{"Source-Url": "http://dl.ifip.org/db/conf/edbtw/edbtw2006/Abad-Mota06.pdf", "len_cl100k_base": 4775, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 24328, "total-output-tokens": 6657, "length": "2e12", "weborganizer": {"__label__adult": 0.0004296302795410156, "__label__art_design": 0.0010461807250976562, "__label__crime_law": 0.0008444786071777344, "__label__education_jobs": 0.0086517333984375, "__label__entertainment": 0.00025010108947753906, "__label__fashion_beauty": 0.0003180503845214844, "__label__finance_business": 0.0007615089416503906, "__label__food_dining": 0.0005340576171875, "__label__games": 0.0007672309875488281, "__label__hardware": 0.0008530616760253906, "__label__health": 0.0010862350463867188, "__label__history": 0.0007810592651367188, "__label__home_hobbies": 0.00017070770263671875, "__label__industrial": 0.0006799697875976562, "__label__literature": 0.0025005340576171875, "__label__politics": 0.0005059242248535156, "__label__religion": 0.0007157325744628906, "__label__science_tech": 0.458740234375, "__label__social_life": 0.0003345012664794922, "__label__software": 0.0546875, "__label__software_dev": 0.464111328125, "__label__sports_fitness": 0.00021207332611083984, "__label__transportation": 0.0006113052368164062, "__label__travel": 0.0002593994140625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29701, 0.02352]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29701, 0.6122]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29701, 0.91706]], "google_gemma-3-12b-it_contains_pii": [[0, 2413, false], [2413, 5380, null], [5380, 6930, null], [6930, 9977, null], [9977, 13144, null], [13144, 16393, null], [16393, 19539, null], [19539, 22587, null], [22587, 25661, null], [25661, 28904, null], [28904, 29701, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2413, true], [2413, 5380, null], [5380, 6930, null], [6930, 9977, null], [9977, 13144, null], [13144, 16393, null], [16393, 19539, null], [19539, 22587, null], [22587, 25661, null], [25661, 28904, null], [28904, 29701, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29701, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29701, null]], "pdf_page_numbers": [[0, 2413, 1], [2413, 5380, 2], [5380, 6930, 3], [6930, 9977, 4], [9977, 13144, 5], [13144, 16393, 6], [16393, 19539, 7], [19539, 22587, 8], [22587, 25661, 9], [25661, 28904, 10], [28904, 29701, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29701, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
7af6f68341e16154e97ad21b8fbb565a981507a9
Chapter 1 Managing Software Projects Worldwide, some half a million project managers execute about a million software projects each year, producing software worth $600 billion. Many of these projects fail to fulfill customers’ quality expectations or fail to deliver the software within budget and on schedule. One analysis suggests that about one-third of projects have cost and schedule overruns of more than 125%. Why do so many software projects fail? Although there are many reasons, one of the most important is improper management of the project. For example, the major reasons for runaways (projects that are out of control) are unclear objectives, bad planning, new technology, a lack of a project management methodology, and insufficient staff. At least three of these five reasons clearly relate to project management. The other two—insufficient staff and new technology—can be considered as risks whose management is also a part of project management. Clearly, by using effective project management techniques a project manager can improve the chances of success. But what are these effective techniques? Let’s consider an analogy. Suppose you want to develop a muscular, toned body. To reach your goal, you start looking at exercise routines described in magazines. One article describes how to develop arm strength, giving a set of 10 exercises to be done—not too many by any standard. But then another article, this one on developing thigh strength, also gives 10 exercises, and the evangelist for flat stomachs also feels that doing 10 exercises is not too much. If you want to develop your body overall by following each of these isolated exercise programs, you would find that you have a set of 50 to 100 exercises to do—a clear impossibility for most people, let alone a busy project manager. To achieve your objective, you need a comprehensive training program that is practical and effective. Similarly, you’ll find an abundance of suggestions for performing the various aspects of project management, including effort estimation, risk management, project monitoring, configuration management, and so on. Although each proposed technique solves the problem it is designed to solve, it is not clear how to combine these techniques into a practical and workable process. For effective project management, the need of the hour is a practical, manageable "exercise routine" that will deliver the result. In other words, what is needed is a balanced process that covers the management of the entire project from inception to completion. Unfortunately, there is a paucity of published approaches illustrating how to integrate techniques in this way. This book fills this gap by describing the set of processes used in a world-class organization to effectively and efficiently manage software projects. The company is Infosys, a software development company that has an enviable track record of project execution; in 2000 alone, Infosys project managers used the processes described here to successfully execute about 500 projects for customers. This book discusses all aspects of Infosys project management—planning, execution, and closure. You’ll learn how Infosys project managers estimate, plan for managing risks, collect metrics data, set quality goals, use measurements for monitoring a project, and so on. An interesting aspect of these processes, one that will appeal to busy project managers, is that they are neither complex nor cumbersome, and they use simple metrics. 
Infosys has been assessed at level 5 (the highest level) of the Capability Maturity Model (CMM). By extracting project management processes from the set of processes at Infosys, this book also illustrates how projects are managed in a high-maturity organization. Through this illustration, I hope to bring the benefits of the CMM to project managers who have not studied it because of lack of time, because they regard it as being for "process folks" or because they have found it difficult to relate the CMM to project management practices. This chapter introduces the two topics that form the background for the book: the CMM and Infosys. Because the focus of the book is project management and not the CMM, I restrict the discussion to the project management aspects of the CMM. This chapter also provides an overview of the project management process and the main case study; details of these are discussed in the remainder of the book. First, then, let’s briefly discuss the role of processes in project management. ### 1.1 Processes and Project Management A software project has two main activity dimensions: engineering and project management. The engineering dimension deals with building the system and focuses on issues such as how to design, test, code, and so on. The project management dimension deals with properly planning and controlling the engineering activities to meet project goals for cost, schedule, and quality. If a project is small (say, a team of one or two working for a few weeks), it can be executed somewhat informally. The project plan may be an e-mail specifying the delivery date and perhaps a few intermediate milestones. Requirements might be communicated in a note or even verbally, and intermediate work products, such as design documents, might be scribbles on personal note pads. These informal techniques, however, do not scale up for larger projects in which many people may work for many months—the situation for most commercial software projects. In such projects, each engineering task must be done carefully by following well-tried methodologies, and the work products must be properly documented so that others can review them. The tasks in the project must be carefully planned and allocated to project personnel and then tracked as the project executes. In other words, to successfully execute larger projects, formality and rigor along these two dimensions must increase. Formality requires that well-defined processes be used for performing the various tasks so that the outcome becomes more dependent on the capability of the processes. Formality is further enhanced if quantitative approaches are employed in the processes through the use of suitable metrics. What is a process? Technically, a process for a task comprises a sequence of steps that should be followed to execute the task. For an organization, however, the processes it recommends for use by its engineers and project managers are much more than a sequence of steps; they encapsulate what the engineers and project managers have learned about successfully executing projects. Through the processes, the benefits of experience are conferred to everyone, including newcomers in the organization. These processes help managers and engineers emulate past successes and avoid the pitfalls that lead to failures. For a project, the engineering processes generally specify how to perform engineering activities such as requirement specification, design, testing, and so on. 
The project management processes, on the other hand, specify how to set milestones, organize personnel, manage risks, monitor progress, and so on. This book focuses on the project management process.

When you consider project management processes, you must ask whether project managers will use them. I have often heard process designers complain that project managers don't follow the process and that they resist changes. My experience with project managers at Infosys and other organizations is that they actually want to use processes, but only if they're reasonable and will help the project managers execute their projects better. Project managers do, however, resent processes that seem to be unnecessarily bureaucratic and add little value to their work. The trick, then, is to have lightweight processes—those that help project managers plan and control their projects better and that give them the flexibility to handle various situations.

In response to the question "Why should project managers follow processes?" S.D. Shibulal—founder, director, and the current head of customer delivery at Infosys—sums it up nicely in a few key points:

- Processes represent collective knowledge. Using them increases your chances of success.
- A process may have some extra steps, but you will not always know beforehand which ones are not needed, and hence you will increase your risks by taking shortcuts.
- Without processes, you cannot predict much about the outcome of your project.
- You and the organization cannot learn effectively without having defined processes. And learning and improvement are imperative in today's knowledge-based world.
- Processes lower your anxiety level. The checklists inevitably cover 80 percent of what needs to be done. Hence, your task reduces to working out the remaining 20 percent.

1.2 PROJECT MANAGEMENT AND THE CMM

Once it is accepted that the use of effective processes can help in executing a project successfully, a question immediately arises: What are the desirable characteristics of these processes? The CMM for software tries to answer this question. It is a framework developed by the Software Engineering Institute (SEI) at Carnegie Mellon University by observing the best practices in software and other organizations. Hence, the CMM reflects the collective process experience and expectations of many companies. It specifies desired characteristics of processes without prescribing specific processes; thus, different processes can fulfill the requirements of the CMM. It can be used to evaluate the software process of an organization and to identify deficiencies.

The CMM is one of the most popular frameworks for software process improvement (the other commonly used framework is ISO 9001 [3,4,5]). The foundations of the CMM were laid down in Watts Humphrey's *Managing the Software Process* [6], and the framework itself is described completely in the SEI's *The Capability Maturity Model: Guidelines for Improving the Software Process* [7]. A "new edition" of the CMM, called CMM-I, has been released. But because the focus of this book is not on the models and because there is still little experience available with CMM-I, I discuss only the CMM for software and only the project management aspects, even though the CMM also covers organizational and process management issues.
I do not discuss the assessment procedure, a brief description of which is given in my book *CMM in Practice* [8], and a detailed description in *CMM Based Appraisal for Internal Process Improvement*, by S. Masters [9].

### 1.2.1 Overview of the CMM

One objective of the CMM is to distinguish mature processes from immature, or ad hoc, processes. Immature software processes imply that projects are executed without many guidelines, and the outcome of a project depends largely on the capability of the team and the project leader. On the other hand, with mature processes, a project is executed by following defined processes. In this case, the outcome of the project is less dependent on people and more on the processes. It follows, then, that the more mature the processes, the more predictable the results and the more well controlled the projects.

The range of results that can be expected in a project when it is executed using a process is its *process capability*. The actual result achieved in a project executed using the process is its *process performance*. Clearly, the process performance depends on the process capability. To consistently improve process performance on projects, you must enhance the process capability; the process itself must become more mature.

The path to higher maturity includes some well-defined plateaus referred to as *maturity levels* by the CMM. Each maturity level specifies certain characteristics for processes, with higher maturity levels having more advanced characteristics that are found in more mature software processes. Hence, the CMM framework describes the key elements of software processes at different levels of maturity. Consequently, it also specifies the path that a software process follows in moving from immature processes to highly mature processes. This path includes five maturity levels, as shown in Figure 1.1 [7].

At level 1, the *initial* level, a project is executed in a manner that the team and project manager see fit. The *repeatable* level (level 2) applies when established project management practices are employed, although organization-wide processes may not exist. At the *defined* level (level 3), organization-wide processes have been defined and are regularly followed. At the *managed* level (level 4), quantitative understanding of the process capability makes it possible to quantitatively predict and control the process performance on a project. At the *optimizing* level (level 5), the process capability is improved in a controlled manner and the improvement is evaluated quantitatively.

Each maturity level (except level 1) is characterized by key process areas (KPAs), which specify the areas on which the organization should focus to elevate its processes to that maturity level. Figure 1.1 also shows the KPAs for the different levels. For an organization to achieve a maturity level, it must satisfy all the KPAs at that maturity level as well as the KPAs at all lower maturity levels.

Maintaining processes at higher levels of maturity is a challenging task requiring commitment from the organization and a proper work culture. Of the 900 assessments conducted between 1996 and June 2000 whose assessment results were provided to the SEI, only 3% of the organizations were at level 5, and another 5% were at level 4. The rest were at level 3 or below, with 38% at level 2 and 18% at level 3.
1.2.2 KPAs for Project Management

Each KPA specifies goals that the processes of the organization must meet to satisfy that KPA. In addition, each KPA specifies a group of activities, called key practices, that collectively satisfy the goals of that KPA. In many senses, the goals for each KPA capture its essence. They specify the objectives that the CMM has set for the processes relating to the KPA. To illustrate the KPAs associated with project management, we briefly discuss here the goals of these KPAs. These goals are taken from the CMM, with some minor changes in the wording of some goals.

Table 1.1 lists all the goals for KPAs at level 2, showing clearly that the level 2 focus is almost exclusively on project management. Under these goals, you create and document a project plan, evaluate the ongoing project performance against the plan, and take actions when the actual performance significantly deviates from the plan. Requirements are properly documented, and changes to requirements are properly managed. All work products are controlled, and changes to products are properly managed through a planned configuration management process. Reviews and audits are performed to ensure that planned processes and standards are being followed. If some parts of the project are subcontracted to other vendors, the subcontracted work is also monitored properly.

Table 1.2 details the goals of three of the seven KPAs at level 3. The other KPAs focus on organizational and process management issues. A project in a level 3 organization uses a tailored version of the standard process and reuses assets, data, and experience from past projects for planning. The various groups that contribute to the project cooperate smoothly through well-defined interfaces and mechanisms. Reviews are properly carried out to identify defects in work products, and sufficient support for conducting reviews and follow-up activities is provided.

### Table 1.1 Goals for KPAs at Level 2 (Repeatable)

| KPA | Goals |
| --- | --- |
| **Requirements Management (RM)** | • Software requirements are controlled to establish a baseline for software engineering and management activities. • Software plans, products, and activities are kept consistent with requirements. |
| **Software Project Planning (SPP)** | • Estimates are documented for use in planning and tracking the project. • Project activities and commitments are planned and documented. • Affected groups and individuals agree to their commitments related to the project. |
| **Software Project Tracking and Oversight (SPTO)** | • Actual results and performances are tracked against the software plans. • Corrective actions are taken and managed to closure when actual results and performance deviate significantly from the software plans. • Changes to commitments are agreed to by the affected groups and individuals. |
| **Software Subcontract Management (SSM)** | • The prime contractor and the subcontractor agree to their commitments. • The prime contractor and the subcontractor maintain ongoing communication. • The prime contractor tracks the subcontractor's actual results and performance against its commitments. |
| **Software Quality Assurance (SQA)** | • Software quality assurance activities are planned. • Adherence of software products and activities to the applicable standards, procedures, and requirements is verified objectively. • Affected groups and individuals are informed of software quality assurance activities and results. • Noncompliance issues that cannot be resolved within the project are addressed by senior management. |
| **Software Configuration Management (SCM)** | • Software configuration management activities are planned. • Selected software work products are identified, controlled, and available. • Changes to identified software work products are controlled. • Affected groups and individuals are informed of the status and content of software baselines. |

Table 1.3 shows the goals for the two KPAs at level 4. At level 4, the capability of the organization's process is understood in quantitative terms. The process capability is used to set quantitative goals for a project. Data on project performance are collected on an ongoing basis and are compared with data on past performance; if significant deviations are observed, proper corrective actions are applied to bring the project back in control. A key aspect of level 4 is the use of statistical process control techniques on an ongoing basis so that each activity can be evaluated and corrective action taken if needed.

The three KPAs at level 5 focus on improving the capability of the process. Of the three KPAs, the Defect Prevention KPA is the one that most directly affects project management. This KPA requires that defects be prevented proactively by systematically analyzing the causes of defects and then eliminating those causes. If defects can be prevented from entering the software, the effort spent in removing them can be reduced, thereby improving quality and productivity.

### Table 1.2 Goals of Three KPAs at Level 3 (Defined)

| KPA | Goals |
| --- | --- |
| **Integrated Software Management (ISM)** | • The project's defined software process is a tailored version of the organization's standard software process. • The project is planned and managed according to the project's defined software process. |
| **Intergroup Coordination (IC)** | • All affected groups agree to the customer's requirements. • All groups agree to the commitments between different groups. • The groups identify, track, and resolve intergroup issues. |
| **Peer Reviews (PR)** | • Peer review activities are planned. • Defects in the software work products are identified and removed. |

### Table 1.3 Goals for KPAs at Level 4 (Managed)

| KPA | Goals |
| --- | --- |
| **Quantitative Process Management (QPM)** | • The quantitative process management activities are planned. • The process performance of the project's defined software process is controlled quantitatively. • The process capability of the organization's standard software process is known in quantitative terms. |
| **Software Quality Management (SQM)** | • The project's software quality management activities are planned. • Measurable goals for software product quality and their priorities are defined. • Actual progress toward achieving the quality goals for the software products is quantified and managed. |

1.3 PROJECT MANAGEMENT AT INFOSYS

Infosys executes hundreds of projects each year. Full responsibility for executing a project rests with the project manager, who must make sure that the project team delivers high-quality software to the customer on time and within cost. To help the project manager fulfill this responsibility, support from the organization is necessary. This section provides a brief background on Infosys and its support for managing projects.
1.3.1 Background: Infosys

Infosys is a software house headquartered in Bangalore, India. Its stated mission is "to be a globally respected corporation that provides best-of-breed software solutions delivered by best-in-class people." It employs the global delivery model, in which the customer can be located anywhere in the world and customer fulfillment can be provided from anywhere. In this model, work is done wherever it provides the most value to the company. For customer fulfillment, a combination of processes, technology, and management is employed to segregate the work so that value can be added in the optimal locations and then reaggregated for delivery to the customer.

Infosys currently employs about 10,000 people, with about 15 development centers in four countries and offices in more than a dozen countries. The company was founded in 1981 by seven software professionals with an equity base of only $300. Today, Infosys has a market capitalization of more than $8 billion (based on market rates in June 2001), and its revenue was more than $400 million in 2000 (revenue in 1994 was $9.5 million). Its customers are spread across the globe and include major corporations—more than 60 of them being Fortune 1000 companies—that are engaged in diverse businesses such as banking, retailing, manufacturing, telecommunications, financial services, insurance, and transportation.

Infosys is a highly respected company that has been rated as the best managed and most respected company in India and one of Asia's leading information technology (IT) companies. It has won many awards, including the Ramakrishna Bajaj award, which is modeled after the Malcolm Baldrige award. It can safely be said that Infosys is one of the best software services corporations in the world.

Infosys provides a top-notch infrastructure so that its project managers can better serve the needs of its worldwide customers. The company has provided audio conferencing facilities to almost every group so that project managers can interact easily with customers and with group members located at different sites. Similarly, a state-of-the-art video conferencing facility is used for interaction among the company's various locations as well as for virtual meetings. Its main campus in Bangalore is now one of the largest software service facilities in the world, with work-related facilities such as a library, extensive computing and networking facilities, training facilities, discussion rooms, projection facilities, and so on, as well as recreational facilities such as an art gallery, a health club, and facilities for tennis, basketball, and so on.

Process orientation and improvement are a part of the Infosys work culture, and processes are defined for most tasks that are performed regularly. For process definition and improvement, Infosys first adopted the ISO 9000 framework and got its ISO certification in 1993. To further improve the software process, Infosys then adopted the CMM framework. It was first assessed at level 4 in December 1997, and then at level 5 in December 1999. In its pursuit of continuous improvement, Infosys now employs the Malcolm Baldrige framework for all-around improvement and for building leadership excellence in all areas of operation.

1.3.2 SEPG Support to Projects

The quality department at Infosys contains the software engineering process group (SEPG).
The SEPG is responsible for coordinating all the process activities, including process definition, process improvement, and process deployment. It also manages all information and data related to the use of processes (such as the process database and the process capability baseline, which are discussed further in Chapter 2). Although the responsibility for all aspects of delivery, including quality, belongs to the project team, the SEPG facilitates the project team in following the right processes. The SEPG also forms an independent channel for monitoring and reporting to senior management on process and quality issues.

Because "processes won't stick by themselves," the SEPG helps to ensure that the defined processes are implemented and become standard practice. To this end, in addition to offering training on processes, the SEPG provides a member who is associated with a project as a software quality adviser. The quality adviser assists in defining and following processes, ensures that the processes are followed, aids in analyzing the data, and provides any needed process training. Because the adviser is well versed in processes, guidelines, and so on, the adviser's main help comes during project planning. The adviser also reviews the project plan to ensure that it contains all the key elements.

In addition to providing consulting and help with processes and metrics, the Infosys SEPG schedules and manages regular independent audits (see Chapter 11) to ensure that the defined processes and standards are being followed.

### 1.3.3 Senior Management Involvement in Projects

Infosys prides itself on providing value to its customers through delivery excellence. Everything at Infosys, including its organizational structure, is driven by the aim of serving customers efficiently and effectively and quickly tapping new business opportunities. For delivery of customer services, Infosys has many *business units*. Within a business unit, a *team*, headed by a *project manager*, executes a project. The project manager is responsible for all aspects of project execution, from determining the requirements to final installation of the software. The project manager reports to a *business manager*, who in turn generally reports to the *business unit head*.

To handle situations that cannot be resolved by the project manager, senior management involvement in projects is essential. At Infosys, the business manager regularly interacts with the project manager and monitors the project through status reports and milestone reports (discussed in Chapter 11). In addition to regular monitoring, the business manager also helps to resolve issues and problems that cannot be handled by the project team and are *escalated* to his level (escalation is discussed in Chapter 8). The business manager also interacts with customers to ensure that they are satisfied and that any issues are promptly raised and addressed.

In addition, other senior people also review projects periodically by regularly taking part in internal audits (discussed in Chapter 11). Through two systems—called PRISM (project review by senior management) and IPM (integrated project management)—milestone reports and project plans are available for senior management to review. All senior managers are expected to review some projects periodically through this system and to give feedback to the project leaders.
Overall, senior management maintains involvement in the project primarily by monitoring to ensure that the project objectives are met and that the customer is fully satisfied. ### 1.3.4 Training for Project Managers Because project managers have the main responsibility for satisfying the customer, they need to master not only executing the technical aspects of a project but also interacting with customers, eliciting requirements, managing the team, and so on. Clearly no one is likely to possess all the skills needed, so it’s crucial to train people to develop the necessary skills. Infosys has implemented a variety of programs to help people transition from being engineers to being project leaders. All fresh entrants undergo a three- to four-month induction training program. In addition to training in engineering and technology, this program contains one- or two-day programs in business etiquette, written communication, public speaking, body language, and so on. Later, when engineers are ready to become module leaders (those who manage the development of a system module, especially in larger projects) or project managers, they attend a series of technical and soft-skills training programs. Included in the former is a five-day project management course that focuses on all aspects of project management: planning, monitoring, controlling, and so on. A two-week course on requirements specification and management teaches how to elicit requirements, how to document them, how to verify them, and so on. The five-day residential soft-skills training program includes modules on appraisals and team management, customer focus and customer management, leadership, social and business etiquette for different countries, and so on. Other regularly offered programs focus on various aspects of management; project leaders take these courses when their schedules permit. Also, team-building workshops are conducted by professionals. 1.3.5 The Project Management Process For a project team to successfully execute a project, it must perform hundreds of tasks, many of them interdependent. Effectively managing this process is extremely important for success. At Infosys, the set of activities executed by a project manager is specified in the project management process. It is fairly standard, having three main stages: - Project planning - Project execution - Project closure In the project planning stage, the project manager reviews contractual commitments and creates a plan to meet them. Creating a project plan involves defining a life-cycle process to be followed, estimating the effort and schedule, preparing a detailed schedule of tasks, and so on. It also includes planning for quality and configuration management as well as risk management. In this phase, the major activities of the project manager are as follows: - Perform startup and administrative tasks. - Create a project plan and schedule. - Define the project objectives. - Identify a suitable standard process for project execution. - Tailor the standard process to meet project requirements. - Define a process for managing changes in requirements. - Estimate the effort. - Plan for human resources and team organization. - Define the project milestones and create a schedule. - Define the quality objectives and a quality plan to achieve them. - Make a defect prevention plan. - Identify risks and make plans to mitigate them. - Define a measurement plan for the project. - Define a training plan for the project. - Define project-tracking procedures. 
- Perform a review of the project plan and schedule.
- Obtain authorization from senior management.
- Define and review the configuration management plan.
- Orient the project team to the project management plan.

In addition to the project manager, this phase involves the customer, an SEPG representative, and the business manager for the project. The entry criterion is that the contract or project authorization is available. The exit criterion is that the project plan has been documented and group reviewed (see Chapter 10).

The second phase, *project execution*, involves executing the project plan, tracking the status of the project, and making corrections whenever project performance strays from the path laid down in the project plan. In other words, it involves tracking and controlling the implementation of the project process. This phase is the longest in the project management process, incorporating periodic tasks such as monitoring project status and quality and taking any needed corrective steps. In this phase, the project manager performs these main activities:

- Execute the project as per the project plan.
- Track the project status.
- Review the project status with senior management.
- Monitor compliance with the defined project process.
- Analyze defects and perform defect prevention activities.
- Monitor performance at the program level.
- Conduct milestone reviews and replan if necessary.

Other members of the team also participate in this stage. The entry criterion is that the project plan is complete and approved, and the exit criterion is that all work products delivered are accepted by the customer.

The last stage of the project management process, *project closure*, involves a systematic wind-up of the project after customer acceptance. The main goal here is to learn from the experience so that the process can be improved. Post-project data analysis constitutes the main activity; metrics are analyzed, process assets (materials, such as templates and guidelines, used to aid in managing the process itself) are collected for future use, and lessons are recorded. Because learning from the project is the main goal, this is a group activity that involves the project manager, the SEPG, and other members of the team. The entry criterion is that the customer has accepted the work products. The exit criterion is that a postproject meeting has been conducted. The main outputs of this phase are the project closure report and the collected process assets.

The remainder of this book discusses the various elements of this management process. Part I includes separate chapters devoted to key planning activities, such as process definition and tailoring, risk management, effort and schedule estimation, quality planning, and configuration management planning. The other tasks in the planning phase (such as human resource planning, project organization, tools to be used, project tracking procedures, and so on) are discussed briefly in Chapters 7 and 8. Part II includes chapters on project monitoring and controlling and on project closure.

## 1.4 OVERVIEW OF THE ACIC CASE STUDY

ACIC Corporation (name changed to protect confidentiality) is a multibillion-dollar financial institution. To keep up with the times, several years ago it slowly started Web-enabling its applications, and it wanted to start an on-line service for opening and tracking accounts.
Because Infosys had successfully built some e-services for ACIC earlier in a project called Synergy (name changed), ACIC employed Infosys to analyze the problem. This work was executed in time-and-materials (T&M) mode; that is, the customer paid for the effort spent by Infosys in doing the analysis. Based on the analysis output, Infosys made a successful bid for the Web project, giving rise to the ACIC case study that runs throughout this book. The project successfully released the new service on time, and the software has been in operation without any problems. (This case study is different from the WAR project case study discussed in my earlier book.)

The ACIC project illustrates the various project planning and monitoring tasks undertaken in executing a project at Infosys. Many of the outputs related to management of the ACIC project are given in the relevant chapters. These include the following:

- The data from the Synergy project, which was used by the ACIC project manager during planning (Chapter 2)
- The project's process plan (Chapter 3)
- An analysis of the impact of a requirement change request (Chapter 3)
- Effort estimates and the high-level schedule, along with a description of how they were obtained (Chapter 4)
- The quality plan containing quality goals and plans for achieving them, including plans for defect prevention and reviews (Chapter 5)
- The risk management plan describing the major risks, their risk exposure and impact, their prioritization, and the risk mitigation plans for the high-priority risks (Chapter 6)
- The measurement and tracking plan (Chapter 7)
- The complete project management plan, including the team management plan and the customer communication and escalation plan (Chapter 8)
- The complete configuration management plan (Chapter 9)
- Project tracking documents, including the defect log, the issues log, the status report, and the milestone report (Chapter 11)
- Details of defect prevention, including defect analysis results and the impact on the project of the defect prevention plan (Chapter 11)
- The complete closure report, which includes the metrics data on quality, productivity, cost of quality, defect removal efficiency, and so on (Chapter 12)

## 1.5 SUMMARY

Software project management is perhaps the most important factor in the outcome of a project. Without proper project management, a project will almost certainly fail. Many organizations have evolved effective project management processes. This book describes these processes for one such organization, Infosys, which has been assessed at level 5 of the CMM and whose project managers have successfully executed hundreds of projects. Here are the key takeaways from this chapter:

- Processes for the various aspects of project management should not be looked at in isolation. In a balanced process, the practices integrate smoothly.
- Processes of an organization should encapsulate its best practices so as to help new projects replicate past successes and avoid failures.
- At the top level, the project management process consists of three phases: planning, execution, and closure.
- For effective execution of projects, project managers should be supported through the help of an SEPG in executing processes; senior management monitoring and issue resolution; and good training.
- Many key process areas at all maturity levels of the CMM for software focus directly on project management.
{"Source-Url": "https://www.pearsonhighered.com/assets/samplechapter/0/2/0/1/0201737213.pdf", "len_cl100k_base": 7389, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 36533, "total-output-tokens": 8555, "length": "2e12", "weborganizer": {"__label__adult": 0.0005092620849609375, "__label__art_design": 0.0007476806640625, "__label__crime_law": 0.00045108795166015625, "__label__education_jobs": 0.0152130126953125, "__label__entertainment": 0.00012958049774169922, "__label__fashion_beauty": 0.00021278858184814453, "__label__finance_business": 0.00792694091796875, "__label__food_dining": 0.0005421638488769531, "__label__games": 0.0010976791381835938, "__label__hardware": 0.0005106925964355469, "__label__health": 0.00045180320739746094, "__label__history": 0.00036025047302246094, "__label__home_hobbies": 0.0002961158752441406, "__label__industrial": 0.0004940032958984375, "__label__literature": 0.0007462501525878906, "__label__politics": 0.00018155574798583984, "__label__religion": 0.00042557716369628906, "__label__science_tech": 0.005298614501953125, "__label__social_life": 0.00024020671844482425, "__label__software": 0.030914306640625, "__label__software_dev": 0.93212890625, "__label__sports_fitness": 0.0003972053527832031, "__label__transportation": 0.0004143714904785156, "__label__travel": 0.0003521442413330078}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40126, 0.01302]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40126, 0.30533]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40126, 0.94165]], "google_gemma-3-12b-it_contains_pii": [[0, 2077, false], [2077, 4731, null], [4731, 7591, null], [7591, 9681, null], [9681, 12386, null], [12386, 12711, null], [12711, 15415, null], [15415, 18125, null], [18125, 20861, null], [20861, 23302, null], [23302, 26095, null], [26095, 28699, null], [28699, 31022, null], [31022, 32974, null], [32974, 35188, null], [35188, 37523, null], [37523, 39489, null], [39489, 40126, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2077, true], [2077, 4731, null], [4731, 7591, null], [7591, 9681, null], [9681, 12386, null], [12386, 12711, null], [12711, 15415, null], [15415, 18125, null], [18125, 20861, null], [20861, 23302, null], [23302, 26095, null], [26095, 28699, null], [28699, 31022, null], [31022, 32974, null], [32974, 35188, null], [35188, 37523, null], [37523, 39489, null], [39489, 40126, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40126, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 40126, null]], "pdf_page_numbers": [[0, 2077, 1], [2077, 4731, 2], [4731, 7591, 3], [7591, 9681, 4], [9681, 
12386, 5], [12386, 12711, 6], [12711, 15415, 7], [15415, 18125, 8], [18125, 20861, 9], [20861, 23302, 10], [23302, 26095, 11], [26095, 28699, 12], [28699, 31022, 13], [31022, 32974, 14], [32974, 35188, 15], [35188, 37523, 16], [37523, 39489, 17], [39489, 40126, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40126, 0.03191]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
bc97ba509e9911dd08f74d1fbc82661595d8d93d
Abstract

A data warehouse collects and integrates data from multiple, autonomous, heterogeneous sources. The warehouse effectively maintains one or more materialized views over the source data. In this paper we describe the architecture of the Whips prototype system, which collects, transforms, and integrates data for the warehouse. We show how the required functionality can be divided among cooperating distributed CORBA objects, providing both scalability and the flexibility needed for supporting different application needs and heterogeneous sources. The Whips prototype is a functioning system implemented at Stanford University, and we provide preliminary performance results.

## 1 Introduction

A data warehouse is a repository of integrated information from distributed, autonomous, and possibly heterogeneous sources. In effect, the warehouse stores one or more materialized views of the source data. The data is then readily available to user applications for querying and analysis. Figure 1 shows the basic architecture of a warehouse: data is collected from each source, integrated with data from other sources, and stored at the warehouse. Users then access the data directly from the warehouse. As suggested by Figure 1, there are two major components in a warehouse system: the integration component, responsible for collecting and maintaining the materialized views, and the query and analysis component, responsible for fulfilling the information needs of specific end users. Note that the two components are not independent. For example, which views the integration component materializes depends on the expected needs of end users. Most current commercial warehousing systems (e.g., Redbrick, Sybase, Arbor) focus on the query and analysis component, providing specialized index structures at the warehouse and extensive querying facilities for the end user. In this paper, on the other hand, we focus on the integration component. Specifically, we describe the architecture of a prototype system that collects data from heterogeneous sources, transforms and summarizes it according to warehouse specifications, and integrates it into the warehouse. This architecture has been implemented in the WHIPS (WareHouse Information Prototype at Stanford) system. The Whips system is currently being used as a testbed for evaluating various integration schemes (as described briefly in Section 3). We designed the Whips architecture to fulfill several important goals, all interrelated, as follows:

- **Plug-and-Play Modularity.** We clearly do not wish to have a system that only works with a specific warehouse or with particular types of sources, or that can only manage views in a specific way. On the contrary, the integration component should be composed of interchangeable modules, each providing some of the required functionality. For example, a warehouse wrapper module is responsible for storing information into the warehouse, which could be any database system. If the target database system changes, we only need to change the warehouse wrapper module.
- **Scalability.** The integration component must deal with large amounts of data, coming from many sources. As the load grows, the system should scale gracefully by distributing its work among more machines and among more modules. For example, in our architecture, each materialized view is handled by a separate module. As the number of views grows, each view module can be run on a separate machine.
Similarly, the system should support high degrees of concurrency, so that large numbers of updates can be processed simultaneously.
- **24 × 7 Operation.** Many customers have international operations in multiple time zones, so there is no convenient downtime, no "night" or "weekend" when new sources or views can be added and all of the recent updates can be batched and processed together to (re)compute materialized views. Thus, we should be able to add new sources and views to the system dynamically, and the integration component should be able to incrementally maintain the materialized views, without halting queries by end users.
- **Data Consistency.** When data is collected from autonomous sources, the resulting materialized views may be inconsistent, e.g., they may reflect a state that never existed at a source [ZGMW95]. We would like a system that can avoid these problems, if it is important to the application. Thus, it should be possible to specify the desired level of consistency, and the system should support the necessary algorithms to achieve the different levels.
- **Support for Different Source Types.** Not all data sources are cooperative and willing to notify the warehouse when their data has changed. On the other hand, some sources do provide notification, e.g., by using trigger facilities. The integration component should be able to handle many different types of sources, and extract data from them in the most effective fashion. For example, to incrementally maintain a view based on data from an uncooperative source, the system should be capable of comparing database snapshots and extracting the differences.

The contribution of this paper is to show how the functionality required for integration can be decomposed into modules to achieve our desired goals, and to show how these modules then efficiently interact. Our solution is based on the notion of distributed objects, as in the CORBA model [Obj95, YD96]. Each module is implemented as a CORBA object that can run on any machine. Each object has a set of methods that can be called from other objects. In essence, our architecture and prototype system may be viewed as an experiment on CORBA's suitability for building information processing systems such as a data warehouse. Our experience indicates that distributed object technology, with the right architecture, is indeed very useful for providing the modularity and scalability required.

The remainder of the paper is organized as follows. In Section 2, we overview the Whips architecture by showing the flow of messages that occurs among the modules during system startup, view creation, and view maintenance. In Section 3, we describe the modules and explain the design trade-offs we faced. We then go into more specific implementation details in Section 4. We present some preliminary performance results from our prototype in Section 5 and conclude in Section 6. For an additional discussion of data warehouses and their research challenges, we refer the reader to [Wid95, HGMW95]. These papers provide references to work upon which our system builds, for instance, in incremental view maintenance, data consistency for materialized views, and snapshot difference algorithms for identifying updates to legacy sources. Due to space limitations, we do not survey that work here.

## 2 Whips architecture

In Figure 2 we expand the integration component of Figure 1 to depict the Whips system architecture.
As shown in the figure, the system is composed of many distinct modules that communicate with each other although they potentially reside on different machines. We implemented each module as a CORBA object, using the ILU implementation of CORBA [Xer95]. The communication between objects is then performed within the CORBA distributed object framework, where each object O has a unique identifier used by other objects to identify and communicate with O. Using CORBA provides several benefits. First, CORBA hides the low-level communication so that the modules themselves are written independently of the communication; contacting another module is simply a method call. Second, CORBA directs all communication by the destination module's identifier rather than by its location. Therefore, it is easy to redistribute modules as the system scales.

In the current prototype, we use the relational model to represent the warehouse data: views are defined in the relational model and the warehouse stores relations. The underlying source data is converted to the relational model by the source's monitor and wrapper before it is sent to any other module. To simplify the presentation, we will discuss each source as if it contained only a single "relation." In actuality, each source may contain multiple relations (or anything else, converted to relations), and modifications are detected separately for each of them.

We overview the modules of the architecture by tracing the flow of messages in the Whips system. There are three distinct operations that each have their own flow of messages. First, at startup, the modules must identify themselves to each other. Similar actions also occur whenever a new source becomes available. Second, whenever a view is defined, the view is initialized and the system is primed to maintain the view. Third, each defined view is maintained (updated) in response to modifications that affect the view. Figure 2 shows the communication patterns during view definition and maintenance.

2.1 System initialization and source startup

At system startup, the integrator publishes its identifier and creates the query processor(s). All other starting modules contact the integrator. More specifically, the warehouse, meta-data store, and view specifier contact the integrator and identify themselves. Each source monitor and wrapper also contact the integrator to register the source meta-data, which is passed to the persistent meta-data store and query processor(s). Currently, there is one monitor and one wrapper per relation, implemented according to the source data type (see Sections 3.6 and 3.7). While we expect most sources to be reported at startup, sources may be added to the system at any time, by following the same procedure.

2.2 View definition and initialization

Views are defined at the view specifier by a system administrator. The view specifier type-checks each view definition with the meta-data store and then passes the view definition to the integrator, which spawns a view manager for that view. The integrator also notifies the monitors for all of the sources involved in the view to begin sending relevant modifications (if they were not already). The view manager is then responsible for initializing and maintaining the view. First, the view manager generates a (global) query corresponding to the view definition. It passes the query to a query processor, which contacts the query wrapper for each source involved in the view.
The query processor joins the results returned to it by the query wrappers, and passes the (global) query answer back to the view manager. The view manager then sends the query answer to the warehouse wrapper to initialize the view.

2.3 View maintenance

Each monitor of a relation $R$ detects the modifications to $R$ that occur at its source (see Section 3.7) and forwards these modifications to the integrator. The integrator then forwards the modifications to all interested view managers (see Section 3.3). Each view manager then uses one of the Strobe algorithms for view consistency [ZGMW95] to compute the corresponding changes to the view at the warehouse. This computation may involve generating a (global) query, which is sent to the query processor and evaluated as at view initialization time. The returned query result is then adjusted as necessary by the view consistency algorithm and possibly held and combined with other query results. When the combined modifications will leave the view in a consistent state, the view manager sends the set of adjusted query results to the warehouse wrapper, which applies them to the warehouse view as a single transaction, bringing the view to a new consistent state.

2.4 Communication and message ordering

Communication messages are sent asynchronously during view maintenance, which means that delays in communication should not hold up the processing at any module. Note that in our architecture, messages sent from a source may arrive at a view manager by two paths. Modifications are sent from the monitor to the integrator to the view manager. Query results are sent from the wrapper to the query processor to the view manager. The architecture cannot guarantee that two messages sent by different paths will arrive in order, yet the view consistency algorithms require the view manager to know about all previous modifications when it receives a query result. One possible solution is to also send query results via the integrator and to send modifications synchronously from the integrator to the view manager. However, this would require both more messages and more expensive (synchronous) messages. Our solution is instead to use sequence numbers. Each monitor has its own sequence counter (per relation), and each modification is tagged with a sequence number when it is sent to the integrator. In addition, each wrapper tags its query results with the sequence number of the last modification sent by the corresponding monitor. The query processor builds an array of sequence numbers returned by the wrappers, one per relation, as part of each query result. The view manager also keeps an array of sequence numbers, one per relation, corresponding to the last modification it has received. When a query result arrives, the view manager then compares the query result array with its own array. If any query result sequence number is higher than the view manager's corresponding sequence number, the view manager waits for the missing modification before continuing. Note that this solution requires each single-source query to receive a sequence number at least as high as that of any modification that may be reflected in the query result, which requires communication between the monitor and the wrapper. However, no special concurrency control is needed.
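To make this ordering rule concrete, the following is a minimal sketch in C++ (the language of the prototype) of the bookkeeping a view manager might perform. The class and method names here are illustrative only, not the actual Whips interfaces.

```cpp
#include <iostream>
#include <map>
#include <string>

// Illustrative sketch of the view manager's ordering check: a query result
// is processed only when, for every relation it touches, the view manager
// has already received at least as many modifications as the wrapper had
// seen when the result was produced.
struct QueryResult {
    std::map<std::string, long> seen;  // relation -> sequence number tagged by the wrapper
};

class ViewManagerOrdering {
    std::map<std::string, long> received_;  // relation -> last modification received
public:
    void onModification(const std::string& relation, long seq) {
        received_[relation] = seq;
    }
    bool readyToProcess(const QueryResult& qr) const {
        for (const auto& [relation, seq] : qr.seen) {
            auto it = received_.find(relation);
            if (it == received_.end() || it->second < seq)
                return false;  // a modification is still in flight: wait for it
        }
        return true;
    }
};

int main() {
    ViewManagerOrdering vm;
    vm.onModification("daily_stock", 7);
    QueryResult qr{{{"daily_stock", 8}}};        // the wrapper had seen modification 8
    std::cout << vm.readyToProcess(qr) << '\n';  // 0: must wait for modification 8
    vm.onModification("daily_stock", 8);
    std::cout << vm.readyToProcess(qr) << '\n';  // 1: safe to proceed
}
```

As in the paper's scheme, the check needs no locking: the view manager simply defers a query result until the corresponding modifications have arrived through the other path.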
## 3 Whips modules

In this section, we describe the modules of the Whips architecture in more detail. For each module, we discuss the current implementation, design alternatives we considered, advantages of the current design, and extensions we would like to make. The modules are described below in roughly the order in which they are encountered during view definition and materialization.

3.1 View specifier

Views are defined in a subset of SQL that includes select-project-join views over all of the source data, without nesting. Optionally, the view definition may also specify which Strobe algorithm to use for view consistency. When a view is defined, the view specifier parses it into an internal structure we call the view tree, adds relevant information from the meta-data store (e.g., key attributes, attribute types), and sends the view tree to the integrator. We are currently adding simple SQL aggregate operators (min, max, count, sum, and average) to the view language. We plan to add index specification capabilities for each view. We also plan to include the option of specifying that the view should include historical information (although the source data does not).

3.2 Meta-data store

The meta-data store keeps catalog information about the sources and how to contact them, the relations stored at each source, and the schema of each relation. The meta-data store also keeps track of all view definitions.

3.3 Integrator

The integrator coordinates both system startup, including new source additions, and view initialization. However, the main role of the integrator is to facilitate view maintenance, by determining which modifications need to be propagated to which views. To do so, the integrator uses a set of rules that specify which view managers are interested in which modifications. These rules are generated automatically from the view tree when each view is defined. In the simplest case, the rules dictate that all modifications to a relation over which a view is defined are forwarded to the corresponding view manager. Currently, the integrator is implemented as an index over the view managers, keyed by the relations. We would like to extend the integrator to filter the modifications for each view. For example, a selection condition in a view definition might render some modifications irrelevant to that view (although relevant to other views). Although we initially built the system with one integrator, an advantage of our design is that the integrator depends only on the view definitions. Therefore, the integrator can be replicated to scale the system. One integrator would be designated to spawn the view managers for each view definition and to register the view managers with the other integrators.
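As a sketch of that relation-keyed index, the following illustrative C++ (again, not the actual Whips code; the module names are hypothetical) shows how the integrator might route a modification to every interested view manager.

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch of the integrator's routing index: in the simplest
// case, every modification to a relation is forwarded to every view manager
// whose view is defined over that relation.
class IntegratorIndex {
    std::multimap<std::string, std::string> interested_;  // relation -> view manager id
public:
    void registerView(const std::string& viewManager,
                      const std::vector<std::string>& relations) {
        for (const auto& r : relations) interested_.insert({r, viewManager});
    }
    std::vector<std::string> route(const std::string& relation) const {
        std::vector<std::string> targets;
        auto [lo, hi] = interested_.equal_range(relation);
        for (auto it = lo; it != hi; ++it) targets.push_back(it->second);
        return targets;
    }
};

int main() {
    IntegratorIndex idx;
    idx.registerView("vm_Copy", {"daily_stock"});
    idx.registerView("vm_Join2", {"daily_stock", "monthly_pe"});
    for (const auto& vm : idx.route("daily_stock"))
        std::cout << "forward to " << vm << '\n';  // vm_Copy and vm_Join2
}
```

A replicated deployment would simply run several copies of this index, each built from the same view definitions.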
3.4 View manager(s)

There is one view manager module responsible for maintaining each view, using one of the Strobe algorithms (as specified in the view definition) to maintain view consistency. The different Strobe algorithms yield different levels of consistency depending on the modification frequency and clustering; all of the algorithms require keeping track of the sequence of modifications and compensating query results for modifications that may have been missed. A full discussion of the algorithms may be found elsewhere [ZGMW95]. There are two advantages to using one view manager per view. First, the work of maintaining each view can be done in parallel on different machines. Second, each view may employ a different Strobe algorithm, to enforce a different level of consistency for its view.

3.5 Query processor(s)

The query processor is responsible for distributed query processing, using standard techniques such as sideways information passing and filtering of selection conditions [OV91] to prune the queries it poses to the wrappers. It tracks the state of each global query while waiting for local query results from the wrappers. Separating the query processing from the view manager has three primary advantages: the view manager can generate global queries without being aware of the distributed sources; the query evaluation code, which is common to all of the Strobe algorithms, is written only once; and a single query processor can handle queries for many view managers. Because the wrappers hide the source-specific query syntax, the query processor generates single-source queries as if the sources were relational databases. Currently, the query processor waits for each single-source query result from the wrapper before continuing. We are extending the query processor to work concurrently on evaluating multiple queries; while waiting for a query result from a given source, the query processor can then generate another single-source query or apply a single-source query result to a global query. The architecture provides for multiple query processors as needed to handle the number of queries in the system. One design issue is then how each view manager chooses a query processor for each query. One option lets the view manager choose a query processor, either at random or with a hint from the integrator. However, a better alternative provides an additional module that exists purely to schedule queries to query processors. This scheme is most likely to scale to large numbers of view managers and queries. Note that multiple query schedulers could be added if needed, where each scheduler handles \( N \) query processors, and each view manager always sends its queries to a given query scheduler.

3.6 Wrappers

Each wrapper is responsible for translating single-source queries from the internal relational representation used in the view tree (which resembles relational algebra) to queries in the native language of its source. For example, a relational database wrapper would merely translate the relational algebra expression into SQL. A wrapper for a flat-file Unix source might translate the algebra expression into a Unix grep for one selection condition, use postprocessing to apply further selection conditions and projections, and then transform the result into a relation. As stated above, using one wrapper per source hides the source-specific querying details from the query processor and all other modules: all wrappers support the same method interface although their internal code depends on the source.

3.7 Sources and monitors

Each source may be completely autonomous of the warehouse and of the Whips system. However, we do take advantage of sources that are willing to cooperate (notify the system of changes) when we build monitors for them. Like the wrappers, the monitors all support a uniform method interface. However, their code differs according to the underlying source. Each monitor detects the modifications that are performed (outside the Whips system) on its source data. These modifications are then sent to the integrator. Currently, we have implemented trigger-based monitors for cooperative (relational) sources, and snapshot monitors for flat-file sources that only provide periodic snapshots of the source data. We describe algorithms for efficient change detection on snapshots elsewhere [LMG95]. We are working on adding IBM's DataCapture to the system; DataCapture is a log-based monitor that reads the log for DB2 and generates a table of source changes. Currently, once a monitor is told that there is at least one view interested in the source, it notifies the integrator of all source modifications. However, we plan to enhance the monitors by filtering modifications based on selection conditions and projecting only relevant attributes (those involved in a selection condition, projection, or join, or that are keys for the relation [GM95]) in the view definition. Note, though, that filters applied at the monitor must apply to all view definitions. View-specific filtering must be performed at the integrator.
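The flat-file monitors use the efficient Windowing Snapshot algorithm cited above [LMG95]; as a much simpler illustration of what any snapshot monitor must produce, the following C++ sketch (hypothetical names, not the Whips code) diffs two snapshots keyed on the relation's key attribute to recover inserts, updates, and deletes.

```cpp
#include <iostream>
#include <map>
#include <string>

// Illustrative sketch only: diff two snapshots of a relation, keyed by the
// key attribute, to recover the modifications a snapshot monitor must
// report to the integrator. The real monitors use the more efficient
// Windowing Snapshot algorithm [LMG95].
using Snapshot = std::map<std::string, std::string>;  // key -> rest of tuple

void diffSnapshots(const Snapshot& before, const Snapshot& after) {
    for (const auto& [key, tuple] : after) {
        auto it = before.find(key);
        if (it == before.end())
            std::cout << "insert " << key << '\n';
        else if (it->second != tuple)
            std::cout << "update " << key << '\n';
    }
    for (const auto& kv : before)
        if (!after.count(kv.first))
            std::cout << "delete " << kv.first << '\n';
}

int main() {
    Snapshot old_snap{{"IBM", "close=120"}, {"SUNW", "close=40"}};
    Snapshot new_snap{{"IBM", "close=122"}, {"HWP", "close=55"}};
    diffSnapshots(old_snap, new_snap);  // update IBM, insert HWP, delete SUNW
}
```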
3.8 Warehouse and warehouse wrapper

The warehouse in the Whips architecture may be any relational database. Of course, some relational databases are optimized for querying warehouse data, e.g., Redbrick [Red95], and may be more appropriate. The warehouse wrapper receives all view definitions and all modifications to the view data in a canonical (internal) format, and translates them to the specific syntax of the warehouse database. The wrapper thus shields all other modules in the Whips system from the particulars of the warehouse, allowing any database to be used as the warehouse. All modifications received by the warehouse wrapper in a single message are applied to the warehouse in one transaction, as needed by the Strobe view consistency algorithms.

## 4 Whips implementation

All of the code is written in C++ and C, except the view parser portion of the view specifier, which is written in Lex and Yacc. We currently use a Sybase database [Syb92] for the warehouse. We have also experimented with a Sybase source with a monitor that uses triggers, and a flat-file source whose monitor uses the Windowing Snapshot algorithm [LMG95] to detect modifications. The Whips system currently runs on DEC Alphas and IBM RS/6000s. In the tests below, we used five separate machines for the modules: one for the integrator, view managers, and query processor, and one each for the warehouse wrapper, view specifier, Sybase source, and monitors and wrappers.
## 5 Performance

In this section, we present the results of preliminary performance experiments on the Whips prototype system. We performed two experiments, one to measure the system latency in propagating a single modification from a source to the warehouse, and one to measure the system throughput in propagating modifications. For both experiments, the Whips system consisted of two sources containing one relation each. The daily_stock relation is a flat file containing a daily feed of stock prices from the NYSE and NASDAQ stock exchanges. The monthly_pe relation is a Sybase relation that provides the price-to-earnings (pe) ratio of each stock. (In the future, the pe's will be obtained from a Dialog source [Dia94] for this application.) The two relations are defined as follows, where the key attributes are ticker and date for daily_stock, and ticker for monthly_pe:

```
daily_stock(ticker, date, high, low, volume, close)
monthly_pe(ticker, pe)
```

Two views were defined for the experiments: a Copy view that was a copy of the daily_stock relation, and a Join2 view that joined the two relations on the ticker attribute, as follows:

```
define view Copy as
    select * from daily_stock

define view Join2 as
    select daily_stock.ticker, daily_stock.date, daily_stock.close,
           daily_stock.volume, monthly_pe.pe
    from daily_stock, monthly_pe
    where daily_stock.ticker = monthly_pe.ticker
      and monthly_pe.pe > 119.5
```

5.1 System latency

In the first experiment, we measured the system latency in propagating a single detected modification from the monitor to the Join2 view at the warehouse. We simulated insertions to the daily_stock relation and recorded the time spent by the Whips system in each module in processing that one insert. We waited for a steady state and recorded the time for each module for 20 insertions. The average time spent in each module is shown in Figure 3, for a total time of 304 ms. The communication time is the portion of the total time not spent in any module. As shown in the figure, a roughly equal amount of time is spent in each module. Therefore, no one module should be a bottleneck for propagating modifications in the system. Although for these experiments we used small versions of the relations containing 150 rows each, when we ran the experiment with larger versions of the relations (over 10,000 rows each), only the time at the monitors and wrappers increased: it takes slightly longer to detect the change and slightly longer to find join matches for it. The total time was therefore 340 ms, about 11% slower.

5.2 System throughput

In the second experiment, we measured the system throughput. We varied the number of modifications at the source per second from 1 to 20, and measured how many modifications appeared at the warehouse per second, for both the Copy and Join2 views. (Twenty modifications per second is roughly 1.8 million modifications per day.) Each modification was an insert into the relation daily_stock. We ran the experiment for two minutes. Figure 4 shows that, as expected, as we increase the insertion rate, the Whips system processes more total modifications, but a smaller percentage of the total. While a latency of 304 ms might suggest processing only 3 inserts per second, the modules' processing times can overlap, so we expected a throughput inversely proportional to the time spent in the slowest module, the view manager: roughly 18 inserts per second (1000 ms / 55 ms). However, in the current implementation, the query processor waits for each local query result from the wrapper before continuing. Therefore, the maximum throughput is inversely proportional to the combined time of the query processor and wrapper, or 11.6 inserts per second. The maximum we observed was 11.3; when more inserts were sent by the monitor, they generated longer and longer queues at the other modules. This experiment shows that the throughput is only as good as that of the slowest module. Therefore, by replicating the slowest modules, each replica can take a share of the work, and the system can scale to handle larger modification rates and more defined views.
For example, in the above scenario, we could add more query processor modules to handle the heavy query workload, and also extend the query processor to handle additional queries while waiting for query results from the wrappers.

## 6 Conclusions and future work

In this paper, we described the Whips architecture for warehouse creation and maintenance. The Whips system allows views over multiple, heterogeneous, autonomous sources and provides incremental view maintenance in a modular and scalable fashion. The Whips system can thus grow while continuing to consistently update all defined views and to allow concurrent querying and analysis at the warehouse. Future work on the Whips system includes adding foreign functions to the view definitions, to translate different representations of data into comparable formats (e.g., dollars to yen), and filtering modifications at the integrator so that view managers are informed only of modifications relevant to their view (not simply all modifications to relations in the view). We are also designing algorithms for crash recovery; in order to recover from a crash, not only do all source and view definitions need to be persistent (they already are), but all modifications currently being processed must also be remembered and recovered. We also plan to do more performance testing and tuning of the prototype system. Adding system statistics could be of great benefit. For instance, usage statistics for the defined views could help decide how often each view should be updated. Query processor and integrator load statistics could help in load balancing. Finally, we are interested in keeping track of the relationships among views and using them to make view maintenance more efficient. In the examples in this paper, it was always necessary to examine the source data to update each view. However, some views may be self-maintainable [QGMW95], possibly by querying other views stored at the warehouse rather than the sources.
{"Source-Url": "http://ilpubs.stanford.edu:8090/109/1/1995-40.pdf", "len_cl100k_base": 5597, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 29388, "total-output-tokens": 6521, "length": "2e12", "weborganizer": {"__label__adult": 0.00023937225341796875, "__label__art_design": 0.0002872943878173828, "__label__crime_law": 0.00030684471130371094, "__label__education_jobs": 0.0006074905395507812, "__label__entertainment": 5.1915645599365234e-05, "__label__fashion_beauty": 0.00011050701141357422, "__label__finance_business": 0.0006961822509765625, "__label__food_dining": 0.0002677440643310547, "__label__games": 0.0003936290740966797, "__label__hardware": 0.0011453628540039062, "__label__health": 0.00029468536376953125, "__label__history": 0.00021076202392578125, "__label__home_hobbies": 7.462501525878906e-05, "__label__industrial": 0.0005936622619628906, "__label__literature": 0.0001577138900756836, "__label__politics": 0.00020205974578857425, "__label__religion": 0.0002598762512207031, "__label__science_tech": 0.04388427734375, "__label__social_life": 5.543231964111328e-05, "__label__software": 0.02911376953125, "__label__software_dev": 0.919921875, "__label__sports_fitness": 0.00015401840209960938, "__label__transportation": 0.0005326271057128906, "__label__travel": 0.0001894235610961914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30883, 0.02638]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30883, 0.34449]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30883, 0.91774]], "google_gemma-3-12b-it_contains_pii": [[0, 2877, false], [2877, 8196, null], [8196, 10946, null], [10946, 16369, null], [16369, 21775, null], [21775, 25676, null], [25676, 29805, null], [29805, 30883, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2877, true], [2877, 8196, null], [8196, 10946, null], [10946, 16369, null], [16369, 21775, null], [21775, 25676, null], [25676, 29805, null], [29805, 30883, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30883, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30883, null]], "pdf_page_numbers": [[0, 2877, 1], [2877, 8196, 2], [8196, 10946, 3], [10946, 16369, 4], [16369, 21775, 5], [21775, 25676, 6], [25676, 29805, 7], [29805, 30883, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30883, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
16221c0e5a6ece6221594bf8676396b4f7d655cc
SOFTWARE TOOL ARTICLE

Infrastructure for genomic interactions: Bioconductor classes for Hi-C, ChIA-PET and related experiments [version 1; peer review: 2 approved]

Aaron T. L. Lun¹, Malcolm Perry², Elizabeth Ing-Simmons²

¹Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK
²MRC Clinical Sciences Centre, Faculty of Medicine, Imperial College London, London, UK

Abstract

The study of genomic interactions has been greatly facilitated by techniques such as chromatin conformation capture with high-throughput sequencing (Hi-C). These genome-wide experiments generate large amounts of data that require careful analysis to obtain useful biological conclusions. However, development of the appropriate software tools is hindered by the lack of basic infrastructure to represent and manipulate genomic interaction data. Here, we present the InteractionSet package that provides classes to represent genomic interactions and store their associated experimental data, along with the methods required for low-level manipulation and processing of those classes. The InteractionSet package exploits existing infrastructure in the open-source Bioconductor project, while in turn being used by Bioconductor packages designed for higher-level analyses. For new packages, use of the functionality in InteractionSet will simplify development, allow access to more features and improve interoperability between packages.

Keywords: Hi-C, ChIA-PET, infrastructure, data representation, genomic interactions

Corresponding author: Aaron T. L. Lun (aaron.lun@cruk.cam.ac.uk)

Competing interests: No competing interests were declared.

Grant information: ATLL was supported by core funding from Cancer Research UK (award no. A17197). MP and EI-S were supported by Medical Research Council PhD studentships. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Copyright: © 2016 Lun ATL et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

How to cite this article: Lun ATL, Perry M and Ing-Simmons E. Infrastructure for genomic interactions: Bioconductor classes for Hi-C, ChIA-PET and related experiments [version 1; peer review: 2 approved] F1000Research 2016, 5:950 https://doi.org/10.12688/f1000research.8759.1

First published: 20 May 2016, 5:950 https://doi.org/10.12688/f1000research.8759.1

Introduction

Techniques such as chromatin conformation capture with high-throughput sequencing (Hi-C)\(^1\) and chromatin interaction analysis with paired-end tags (ChIA-PET)\(^2\) are increasingly being used to study the three-dimensional structure and organisation of the genome. Briefly, genomic DNA is fragmented and subjected to a ligation step during which DNA from interacting loci is ligated together. High-throughput paired-end sequencing of the ligation products will identify pairs of interacting genomic regions. The strength of each interaction can also be quantified from the number of read pairs connecting the two interacting regions. This information can be used to derive biological insights into the role of long-range interactions in transcriptional regulation as well as the general organization of the genome inside the nucleus.
The analysis of Hi-C and ChIA-PET data is not a trivial task, and many software packages have been developed to facilitate this process. Several of these packages, like diffHic\(^3\) and GenomicInteractions\(^4\), are part of the open-source Bioconductor project, which aims to provide accessible tools for analyzing high-throughput genomic data with the R programming language. One of the strengths of the Bioconductor project is the quality and quantity of shared infrastructure available to developers. Pre-defined S4 classes such as GenomicRanges and SummarizedExperiment can be used to represent various types of genomic data and information, easing the maintenance burden for developers while also improving interoperability between packages for users. However, this kind of common infrastructure does not yet exist for the genomic interaction field. Instead, each package contains its own custom classes, which increases code redundancy and development load while reducing interoperability. Here, we describe the InteractionSet package that provides base S4 classes for representing and manipulating genomic interaction data. It contains the GInteractions class, to represent pairwise interactions; the InteractionSet class, to store the associated experimental data; and the ContactMatrix class, to represent interactions in a matrix format. This facilitates code reuse across Bioconductor packages involved in analyzing data from Hi-C, ChIA-PET and similar experiments.

Overview of available classes

The GInteractions class

Each object of the GInteractions class is designed to represent interactions between pairs of "anchor" regions in the genome (Figure 1A). It does so by storing pairs of anchor indices that point towards a reference set of genomic coordinates (specified as a GenomicRanges object). Each anchor index refers to a specific reference region, such that a pair of such indices represents a pairwise interaction between the corresponding regions. This design reduces memory usage, as the reference coordinates need only be stored once, even if each region is involved in multiple interactions. Computational work is also reduced, as calculations can be quickly applied across the small set of reference regions, and the results can be retrieved for each interaction based on the anchor indices. In addition, the GInteractions class inherits from the Vector class in Bioconductor's S4Vectors package. This allows storage of metadata for each interaction (e.g., intensities, \(p\)-values) and for the entire object (e.g., experiment description).

The InteractionSet class

The InteractionSet class is designed to store experimental data for each interaction (Figure 1B). It inherits from the SummarizedExperiment base class, where each object of the class stores any number of matrices of the same dimensions. Each row of each matrix corresponds to a pairwise genomic interaction (represented by a GInteractions object that is also stored within each InteractionSet object), while each column corresponds to an experimental sample. Each entry of the matrix then represents the observation for the corresponding interaction in the corresponding sample. Different matrices can be used to store different types of data, e.g., read counts, normalized intensities. The InteractionSet class also inherits a number of fields to store metadata for each interaction, for each sample, and for the entire object.
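As a minimal sketch of how these two classes fit together (the coordinates and counts below are made up for illustration and are not from the paper), a GInteractions object is built from anchor indices into a shared region set and then wrapped, together with a count matrix, in an InteractionSet:

```r
library(GenomicRanges)
library(InteractionSet)

## Reference regions, stored once and shared by all interactions.
all.regions <- GRanges("chr1", IRanges(c(1, 101, 201), width = 100))

## Two pairwise interactions, each a pair of indices into 'all.regions'.
gi <- GInteractions(anchor1 = c(1, 2), anchor2 = c(3, 3), regions = all.regions)

## Hypothetical read-pair counts for two samples (rows match interactions).
counts <- matrix(c(10, 5, 8, 7), nrow = 2,
                 dimnames = list(NULL, c("sample1", "sample2")))
iset <- InteractionSet(list(counts = counts), gi)

anchors(gi)            # coordinates of both anchors for each interaction
assay(iset, "counts")  # the stored count matrix
```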
The ContactMatrix class

The ContactMatrix class is designed to represent pairwise interactions in a matrix format (Figure 1C). Each row and column of the matrix represents a genomic region, such that each cell of the matrix represents an interaction between the corresponding row/column regions. Experimental data for that interaction can be stored in the associated cell. This provides a direct representation of the "interaction space", i.e., the two-dimensional space in which \((x, y)\) represents an interaction between \(x\) and \(y\). Like the GInteractions class, the genomic coordinates are not stored directly; rather, the rows/columns have indices that point towards a reference set of coordinates, which reduces memory usage and computational work. The matrix representation itself uses classes in the Matrix package to provide support for both dense and sparse matrices. The latter may be more memory-efficient, particularly for sparse areas of the interaction space. The ContactMatrix class is compatible with existing matrix-based classes such as those in the HiTC package.

Figure 1. Overview of the classes in the InteractionSet package. Relevant slots of each class (i.e., data values stored in each object of the class) are labelled with a preceding "@". (A) The GInteractions class represents pairwise interactions between genomic regions by storing pairs of anchor indices that refer to coordinates in a GenomicRanges object. (B) The InteractionSet class stores experimental data in an "assays" matrix where each row is an interaction and each column is a sample. Here, counts represent the number of read pairs mapped between each pair of interacting regions in each sample. (C) The ContactMatrix class represents the interaction space as a matrix, where each cell represents an interaction between the corresponding row/column regions.

Overview of available methods

The InteractionSet package provides a variety of methods for manipulating objects of each class. In addition to slot accessors and modifiers, methods are available to convert objects to different classes in the same package (e.g., GInteractions to ContactMatrix) or to base Bioconductor classes (e.g., GInteractions to GRangesList). The distance between anchor regions on the linear genome can be computed for each pairwise interaction, to use in fitting a distance-dependent trend\(^1\) for diagnostics or normalization. The minimum bounding box in the interaction space can also be defined for a group of interactions (Figure 2A) to summarize the location of that group.

The InteractionSet package supports one- or two-dimensional overlaps for its objects (Figure 2B). A one-dimensional overlap is considered to be present between an interaction and a genomic interval if either anchor region of the interaction overlaps the interval. This can be used to identify interactions overlapping predefined regions of interest. A two-dimensional overlap is considered to be present between an interaction and two genomic intervals if one anchor region overlaps one interval and the other anchor region overlaps the other interval. This can be used to identify interactions linking two specific regions of interest, e.g., a gene and its enhancer. The same framework can be used to define two-dimensional overlaps between two interactions, based on whether the corresponding anchor regions overlap; this can be used to relate similar interactions in different GInteractions objects or across different experiments. More generally, interactions can be identified that link any two regions in a set of regions of interest. For example, given a set of genes, interactions between two genes can be identified; or given a set of genes and another set of enhancers, interactions linking any gene to any enhancer can be found, as in the sketch below.
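The following sketch continues the hypothetical objects from the earlier example (all coordinates are made up); it assumes the package's overlap generics behave as described above.

```r
## Continuing with 'gi' from the previous sketch.
roi <- GRanges("chr1", IRanges(150, 250))  # a region of interest

## One-dimensional: TRUE if either anchor of an interaction overlaps 'roi'.
overlapsAny(gi, roi)

## Two-dimensional: overlaps between interactions, present when both
## pairs of anchor regions overlap each other.
findOverlaps(gi, gi[2])

## Linking two region sets: which interactions connect a "gene" to an
## "enhancer"? (Both sets are hypothetical here.)
genes <- GRanges("chr1", IRanges(c(1, 201), width = 100))
enhancers <- GRanges("chr1", IRanges(101, 200))
linkOverlaps(gi, genes, enhancers)
```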
Hi-C data in an InteractionSet object can also be converted into a 4C-like format (Figure 2C). Firstly, a bait region is defined as some region of interest, e.g., a target gene or enhancer. All interactions in the InteractionSet object that have one-dimensional overlaps with the bait are identified. For each overlapping interaction, the anchor region that does not overlap with the bait is extracted and, along with the data associated with that interaction, used to construct a RangedSummarizedExperiment object. This process yields data for intervals on the linear genome, which is similar to the output of 4C experiments that measure the intensity of interactions between the bait and all other regions. The "linearized" format may be preferable when a specific region can be defined as the bait, as intervals on the linear genome are easier to interpret than interactions in two-dimensional space.
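A short sketch of this linearization, continuing the hypothetical 'iset' object from the earlier examples and assuming the bait may be supplied as a single GRanges interval (see the package documentation for the exact interface):

```r
## Hypothetical bait region.
bait <- GRanges("chr1", IRanges(1, 100))

rse <- linearize(iset, bait)  # returns a RangedSummarizedExperiment
rowRanges(rse)                # non-bait anchors, as intervals on the linear genome
assay(rse)                    # data carried over from the overlapping interactions
```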
The InteractionSet package itself can be obtained for R version 3.3.0 at http://bioconductor.org/packages/InteractionSet.

Software availability

Software and latest source code available from: http://bioconductor.org/packages/InteractionSet
Archived source code as at time of publication: http://dx.doi.org/10.5281/zenodo.512049
License: GNU General Public License version 3.0

Author contributions

ATLL proposed and developed the InteractionSet package, with significant contributions from MP and EI-S. All authors wrote and approved the manuscript.

Competing interests

No competing interests were declared.

Grant information

ATLL was supported by core funding from Cancer Research UK (award no. A17197). MP and EI-S were supported by Medical Research Council PhD studentships. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Acknowledgements

We thank Annika Gable, Aleksandra Pekowska, Bernd Klaus, Michael Lawrence and Hervé Pagès for coding and feature suggestions. We also thank John Marioni and Boris Lenhard for comments on the manuscript.

Referee Report

Nicolas Servant, Institut Curie, Paris, France

The authors present the InteractionSet package, which eases the manipulation of chromosome conformation data within the Bioconductor/R framework. The InteractionSet package was designed to store direct interactions between two genomic loci. It also proposes a ContactMatrix class that allows the interaction counts to be stored in a Matrix format. One important point is its generic design, allowing the manipulation of any type of interaction data, such as ChIA-PET, Hi-C or 4C data. This work provides an interesting base for package development in this field and should therefore be of great use to the community. In practice, the manuscript highlights the quality of the implementation and the optimization of this package. Dealing with Hi-C data can be challenging as the amount of data can be very large. Through this manuscript (Figure 1), it is clear that the authors propose an efficient strategy to manipulate such data. In addition, the package is well documented with a quick start guide (vignette) and the description of each function. The new classes are based on existing S4 classes and methods and should therefore be easy to use for users familiar with interval manipulation in R. The package is already used as a dependency of other packages such as GenomicInteractions and diffHic. Finally, it is compatible with other existing Bioconductor packages such as the HiTC package. Regarding the manuscript itself, it clearly describes what the InteractionSet does. It is well written and easy to read. I only have a few minor comments that I hope will help the authors to improve the manuscript and/or the package.

1. Storing direct interaction counts looks very interesting in practice. I'm just wondering how efficient the GInteractions class is in terms of scalability and memory usage. As an example, Rao et al. recently generated Hi-C contact maps at a resolution of 5kb. This very high resolution dataset implies billions of Hi-C contacts. It would be interesting to know up to which resolution (or data throughput) the InteractionSet package is efficient, and/or how much RAM is required to deal with very large datasets.

2. The authors mentioned that the ContactMatrix class is compatible with matrix-based classes from the HiTC package.
I therefore tried to convert a ContactMatrix object into a HTCexp object from HiTC (using the as() function) but it doesn't work. A note/example about that might be useful in the manual.

3. The package requires a recent version of R (>=3.3.0). It might be good to mention it somewhere.

**Competing Interests:** No competing interests were disclosed.

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Author Response 02 Jun 2016

Aaron Lun, Cancer Research UK Cambridge Research Institute, UK

Thanks for your comments, Nicolas. Our responses are below:

**Storing direct interaction counts looks very interesting in practice. I'm just wondering how efficient the GInteractions class is in terms of scalability and memory usage. As an example, Rao et al. recently generated Hi-C contact maps at a resolution of 5kb. This very high resolution dataset implies billions of Hi-C contacts. It would be interesting to know up to which resolution (or data throughput) the InteractionSet package is efficient, and/or how much RAM is required to deal with very large datasets.**

In several Hi-C analyses that we have performed (50 kbp resolution, ~1 billion reads), the size of the InteractionSet object is around 100MB to 1GB. This is well within the capacity of modern desktop machines, let alone high performance computing (HPC) facilities. For smaller bin sizes, the quadratic increase in the potential number of bin pairs is mitigated by the fact that a greater number of those bin pairs will be empty. We only store non-empty interactions in GInteractions/InteractionSet objects, which avoids a quadratic increase in memory requirements. (In practice, further savings can be made by filtering to remove low-abundance interactions.) That said, for very large data sets with read coverage across the entire interaction space, the memory requirements will increase dramatically at higher resolutions. This can be mitigated to some extent by only operating on a single chromosome (or pair of chromosomes) at any given time. However, if this is not possible (e.g., the downstream analysis requires all interactions), then HPC resources and 64-bit R may be required to handle the resulting objects. We feel that such requirements are mostly unavoidable, as the generation of large data sets requires concomitant effort in the computational analysis.

**The authors mentioned that the ContactMatrix class is compatible with matrix-based classes from the HiTC package. I therefore tried to convert a ContactMatrix object into a HTCexp object from HiTC (using the as() function) but it doesn't work. A note/example about that might be useful in the manual.**

Our concept of compatibility was based more on the class implementations and concepts, rather than on any explicit conversion. Specifically, both ContactMatrix and HTCexp use GRanges to represent the genomic coordinates of the row/column regions, and a Matrix class to store the intensity values across the interaction space.
Thus, information extracted from an instance of a ContactMatrix class can directly be supplied to the HTCexp constructor:

```r
library(Matrix)
library(InteractionSet)  # provides ContactMatrix() and anchors()
library(HiTC)            # provides HTCexp()
coords <- GRanges("chrA", IRanges(1:10, 1:10))
x2 <- ContactMatrix(Matrix(1:100, 10, 10), coords, coords)  # dummy object
colnames(x2) <- rownames(x2) <- LETTERS[1:10]               # dummy names
HTCexp(as.matrix(x2), anchors(x2, "column"), anchors(x2, "row"))
```

We are reluctant to implement this directly as an "as" method in the InteractionSet package, because we have tried to maintain a distinction between the low-level base classes in our package and the high-level analysis and visualization methods in other packages like HiTC, diffHic, GenomicInteractions, etc. Our hope is that developers who would like to use or be compatible with InteractionSet would write appropriate methods in their own packages to convert to/from classes as necessary.

**The package requires a recent version of R (>=3.3.0). It might be good to mention it somewhere.**

Done.

**Competing Interests:** No competing interests were disclosed.

Referee Report

The paper is well written, simple, and accurately describes the package itself. I have downloaded and tested the package and both download and usage went smoothly. It behaves similarly to some of the packages that it builds on, such as GenomicRanges, and those familiar with GenomicRanges will be at home when using InteractionSet. In addition to simply storing and organizing pairwise data, InteractionSet includes a lot of handy features that will be of great use to the community. These include simple functions that organize pairwise genomic interactions, such as swapAnchors, which ensures that the first of the two paired regions is always on the lower-numbered chromosome or upstream (with regard to the Watson strand), and more complex functions such as findOverlaps, which allows users to overlap sets of pairwise interactions in a variety of ways. This package is very useful and powerful and provides a valuable resource to software developers and advanced users. The GenomicRanges-style organization of the data, which InteractionSet adopts, is often too complicated for casual R users to learn. In many cases simply reading files from BED, BEDPE, or SAM format into data frames is easier and faster for simple tasks. However, developers will prefer this more standardized format for improved stability of their packages. And advanced users may prefer the standardized yet flexible approach to data organization and the powerful built-in tools. Importantly, the package is accompanied by a detailed and clear online tutorial which demonstrates how to use the classes and functions. In summary, this paper is succinct and clearly written and accurately describes an R package that will be of great use to the scientific community.

**Competing Interests:** No competing interests were disclosed.

**I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.**

---

**Author Response 02 Jun 2016**

**Aaron Lun**, Cancer Research UK Cambridge Research Institute, UK

Thanks for your comments, Douglas. We agree that the Bioconductor ecosystem of data classes can be somewhat daunting for new users. Nonetheless, we believe that the use of standard Bioconductor tools is the safest strategy for the majority of users (and obviously developers), given the number of "gotchas" in data processing, e.g., off-by-one issues in BED file loading.

**Competing Interests:** No competing interests were disclosed.
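As a small illustration of the anchor-standardization behaviour mentioned in the report above, the following hedged sketch (with invented coordinates) shows swapAnchors reordering the anchors of an interaction:

```r
suppressPackageStartupMessages(library(InteractionSet))

regions <- GRanges("chrA", IRanges(c(1, 101), width = 50))
gi <- GInteractions(anchor1 = 2, anchor2 = 1, regions = regions)

anchors(swapAnchors(gi), "first")  # now the 'lower' region, chrA:1-50
```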
Binomial Checkpointing for Arbitrary Programs with No User Annotation

Jeffrey Mark Siskind and Barak A. Pearlmutter

April 2016

Heretofore, automatic checkpointing at procedure-call boundaries [1], to reduce the space complexity of reverse mode, has been provided by systems like TAPENADE [2]. However, binomial checkpointing, or treeverse [3], has only been provided in AD systems in special cases, e.g., through user-provided pragmas on DO loops in TAPENADE, or as the nested taping mechanism in ADOL-C for time integration processes, which requires that user code be refactored. We present a framework for applying binomial checkpointing to arbitrary code with no special annotation or refactoring required. This is accomplished by applying binomial checkpointing directly to a program trace. This trace is produced by a general-purpose checkpointing mechanism that is orthogonal to AD.

Consider the code fragment in Listing 1. This example, \( y = f(x) \), while contrived, is a simple caricature of a situation that arises commonly in practice, e.g., in adaptive grid methods. Here, the duration of the inner loop varies wildly as some function \( l(x, i) \) of the input and the outer loop index, perhaps \( 2^{\lfloor \log n \rfloor - \lfloor \log(1 + 1007/3^i) \bmod n \rfloor} \), that is small on most iterations of the outer loop but \( O(n) \) on a few iterations. Thus the optimality of the binomial schedule is violated. The issue is that the optimality of the binomial schedule holds at the level of primitive atomic computations, but this is not reflected in the static syntactic structure of the source code. Often, the user is unaware of, or even unconcerned with, the micro-level structure of atomic computations and does not wish to break the modularity of the source code to expose such. Yet the user may still wish to reap the benefits of an optimal binomial checkpointing schedule [4]. Moreover, the relative duration of different paths through a program may vary from loop iteration to loop iteration in a fashion that is data dependent, as shown by the above example, and not even statically determinable. We present an implementation strategy for checkpointing that does not require user placement of checkpoints and does not constrain checkpoints to subroutine boundaries, DO loops, or other syntactic program constructs. Instead, it can automatically and dynamically introduce a checkpoint at an arbitrary point in the computation that need not correspond to a syntactic program unit.

We have previously introduced vlad, a pure functional language with built-in AD operators for both forward and reverse mode. Here, we adopt slight variants of these operators with the following signatures.

\[
\overrightarrow{\mathcal{F}} : f\ x\ \acute{x} \mapsto (y, \acute{y}) \qquad\qquad \overleftarrow{\mathcal{F}} : f\ x\ \grave{y} \mapsto (y, \grave{x})
\]

The \( \overrightarrow{\mathcal{F}} \) operator calls a function \( f \) on a primal \( x \) with a tangent \( \acute{x} \) to yield a primal \( y \) and a tangent \( \acute{y} \). The \( \overleftarrow{\mathcal{F}} \) operator calls a function \( f \) on a primal \( x \) with a cotangent \( \grave{y} \) to yield a primal \( y \) and a cotangent \( \grave{x} \). Here, we restrict ourselves to the case where (co)tangents are ground data values, i.e., reals and (arbitrary) data structures containing reals and other scalar values, but not functions (i.e., closures). For our purposes, the crucial aspect of the design is that the AD operators are provided within the language, since these provide the portal to the checkpointing mechanism.
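For intuition only, the two operator contracts can be mimicked numerically for scalar functions. This sketch is ours, not vlad's: finite differences stand in for true AD, and the names fwd_ad/rev_ad are invented.

```r
# Finite-difference stand-ins for the forward and reverse operator contracts
# on scalar functions; purely illustrative, not how vlad implements them.
fwd_ad <- function(f, x, xdot, h = 1e-6) {
  y <- f(x)
  list(y = y, ydot = (f(x + h * xdot) - y) / h)  # (primal, tangent)
}
rev_ad <- function(f, x, ybar, h = 1e-6) {
  y <- f(x)
  list(y = y, xbar = ybar * (f(x + h) - y) / h)  # (primal, cotangent)
}

fwd_ad(sin, 1, 1)  # ydot is approximately cos(1)
rev_ad(sin, 1, 1)  # xbar is approximately cos(1)
```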
In previous work, we introduced Stalin∇, a highly optimizing compiler for vlad. Here, we formulate a simple evaluator (interpreter) for vlad (Fig. 1) and extend such to perform binomial checkpointing. The operators \( \circ \) and \( \bullet \) range over the unary and binary basis functions respectively. This evaluator is written in what is known in the programming-language community as direct style, where functions (in this case \( \mathcal{E} \), denoting 'eval', \( \mathcal{A} \), denoting 'apply', and the implementations of \( \overrightarrow{\mathcal{F}} \) and \( \overleftarrow{\mathcal{F}} \) in the host) take inputs as function-call arguments and yield outputs as function-call return values [5].

\[
\begin{align*}
\overrightarrow{\mathcal{F}}\ v_1\ v_2\ v_3 &= \textbf{let}\ (v_4 \triangleright v_5) = \mathcal{A}\ v_1\ (v_2 \triangleright v_3)\ \textbf{in}\ (v_4, v_5)\\
\overleftarrow{\mathcal{F}}\ v_1\ v_2\ v_3 &= \textbf{let}\ (v_4 \triangleleft v_5) = ((\mathcal{A}\ v_1\ v_2) \triangleleft v_3)\ \textbf{in}\ (v_4, v_5)\\
\mathcal{A}\ ((\lambda x.\,e), \rho)\ v &= \mathcal{E}\ \rho[x \mapsto v]\ e\\
\mathcal{E}\ \rho\ c &= c\\
\mathcal{E}\ \rho\ x &= \rho\ x\\
\mathcal{E}\ \rho\ (\lambda x.\,e) &= ((\lambda x.\,e), \rho)\\
\mathcal{E}\ \rho\ (e_1\ e_2) &= \mathcal{A}\ (\mathcal{E}\ \rho\ e_1)\ (\mathcal{E}\ \rho\ e_2)\\
\mathcal{E}\ \rho\ (\textbf{if}\ e_1\ \textbf{then}\ e_2\ \textbf{else}\ e_3) &= \textbf{if}\ (\mathcal{E}\ \rho\ e_1)\ \textbf{then}\ (\mathcal{E}\ \rho\ e_2)\ \textbf{else}\ (\mathcal{E}\ \rho\ e_3)\\
\mathcal{E}\ \rho\ (\circ\, e) &= \circ\,(\mathcal{E}\ \rho\ e)\\
\mathcal{E}\ \rho\ (e_1 \bullet e_2) &= (\mathcal{E}\ \rho\ e_1) \bullet (\mathcal{E}\ \rho\ e_2)\\
\mathcal{E}\ \rho\ (\overrightarrow{\mathcal{F}}\ e_1\ e_2\ e_3) &= \overrightarrow{\mathcal{F}}\ (\mathcal{E}\ \rho\ e_1)\ (\mathcal{E}\ \rho\ e_2)\ (\mathcal{E}\ \rho\ e_3)\\
\mathcal{E}\ \rho\ (\overleftarrow{\mathcal{F}}\ e_1\ e_2\ e_3) &= \overleftarrow{\mathcal{F}}\ (\mathcal{E}\ \rho\ e_1)\ (\mathcal{E}\ \rho\ e_2)\ (\mathcal{E}\ \rho\ e_3)
\end{align*}
\]

Figure 1: Direct-style evaluator for vlad.

Listing 1: FORTRAN example

AD is performed by overloading the basis functions in the host, in a fashion similar to FADBAD++ [6]. Here \( v \triangleright \acute{v} \) denotes recursively bundling a data structure containing primals with a data structure containing tangents, or alternatively recursively unbundling such when used as a binder, and \( y \triangleleft \grave{y} \) denotes running the reverse sweep on the tape \( y \) with the output cotangent \( \grave{y} \), or alternatively extracting the primal \( y \) and input cotangent \( \grave{x} \) from the tape when used as a binder.

We introduce a new AD operator \( \mathcal{J} \) to perform binomial checkpointing. The crucial aspect of the design is that the signature (and semantics) of \( \mathcal{J} \) is identical to \( \overleftarrow{\mathcal{F}} \); they are completely interchangeable, differing only in the space/time complexity tradeoffs. This means that code need not be modified to switch back and forth between ordinary reverse mode and binomial checkpointing, save interchanging calls to \( \overleftarrow{\mathcal{F}} \) and \( \mathcal{J} \). Conceptually, the behavior of \( \mathcal{J} \) is shown in Fig. 2. In this inductive definition, a function \( f \) is split into the composition of two functions \( g \) and \( h \) in step 1, the checkpoint \( u \) is computed by applying \( g \) to the input \( x \) in step 2, and the cotangent is computed by recursively applying \( \mathcal{J} \) to \( h \) and \( g \) in steps 3 and 4. This divide-and-conquer behavior is terminated in a base case, when the function \( f \) is small, at which point the cotangent is computed with \( \overleftarrow{\mathcal{F}} \), in step 0. If step 1 splits a function \( f \) into two functions \( g \) and \( h \) that take the same number of computational steps, the recursive divide-and-conquer process yields the logarithmic asymptotic space/time complexity of binomial checkpointing.
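The recursion of Fig. 2 can be made concrete with a small self-contained sketch. This is our toy model, not the authors' vlad implementation: the primal is assumed to be a chain of identical scalar steps whose derivative is known in closed form, and recursive halving stands in for the full binomial schedule.

```r
step <- function(x) sin(x) + x                 # one elementary primal step
vjp  <- function(x, ybar) (cos(x) + 1) * ybar  # its vector-Jacobian product

run <- function(x, i, j) {                     # apply steps i..j (the primal)
  for (k in i:j) x <- step(x)
  x
}

# Reverse sweep over steps i..j with O(log(j - i + 1)) live checkpoints:
rev_ckpt <- function(x, i, j, ybar) {
  if (i == j) return(vjp(x, ybar))             # base case: direct reverse mode
  m    <- (i + j) %/% 2
  u    <- run(x, i, m)                         # checkpoint after first half (g)
  mbar <- rev_ckpt(u, m + 1, j, ybar)          # recurse on second half (h)
  rev_ckpt(x, i, m, mbar)                      # recompute first half from x
}

rev_ckpt(2, 1, 1000, 1)                        # gradient of a 1000-step chain
```

Each level of the recursion keeps only one extra checkpoint alive, giving logarithmic space at the cost of recomputing the primal a logarithmic number of times, mirroring the \( O(w \log t) \) space and \( O(t \log t) \) time bounds quoted later.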
The central difficulty in implementing the above is performing step 1, namely splitting a function \( f \) into two functions \( g \) and \( h \), ideally ones that take the same number of computational steps. A sophisticated user can manually rewrite a subroutine \( f \) into two subroutines \( g \) and \( h \). A sufficiently powerful compiler or source-transformation tool might also be able to do so; an approach with access only to local information, however, would not be able to. We solve this problem by providing an interface to a general-purpose checkpointing mechanism orthogonal to AD.

- PRIMOPS \( f\ x \rightarrow (y, n) \): return \( y = f(x) \) along with the number \( n \) of steps needed to compute \( y \).
- CHECKPOINT \( f\ x\ n \rightarrow u \): run the first \( n \) steps of the computation of \( f(x) \) and return a checkpoint \( u \).
- RESUME \( u \rightarrow y \): if \( u = \text{CHECKPOINT}\ f\ x\ n \), return \( y = f(x) \).

This interface allows (a) determining the number of steps of a computation, (b) interrupting a computation after a specified number of steps, usually half the number of steps determined by the mechanism in (a), and (c) resuming an interrupted computation to completion. A variety of implementation strategies for this interface are possible. We present one in detail momentarily and briefly discuss others below. Irrespective of how one implements the general-purpose checkpointing interface, one can use it to implement \( \mathcal{J} \) as shown in Fig. 3. The function \( f \) is split into the composition of two functions \( g \) and \( h \) by taking \( g \) as \( \lambda x.\ \text{CHECKPOINT}\ f\ x\ n \), where \( n \) is half the number of steps determined by PRIMOPS \( f\ x \), and \( h \) as \( \lambda u.\ \text{RESUME}\ u \).

One way of implementing the general-purpose checkpointing interface is to convert the evaluator from direct style to continuation-passing style (CPS) [7], where functions (in this case \( \mathcal{E} \), \( \mathcal{A} \), and the implementations of the AD operators in the host) take an additional continuation input \( k \) and, instead of yielding outputs via function-call return, do so by calling the continuation with said outputs as arguments (Fig. 5). In such a style, functions never return; they just call their continuation. With tail-call merging, such corresponds to a computed go to and does not incur stack growth. This crucially allows the interruption process to actually return a checkpoint data structure containing the saved state of the evaluator, including its continuation, allowing the evaluation to be resumed by calling the evaluator with this saved state. This 'level shift', where calling a continuation constitutes normal progress and an actual return constitutes a checkpointing interruption, is analogous to the way backtracking is classically implemented in Prolog, with success implemented as calling a continuation and failure implemented as actual return. In our case, we further instrument the evaluator to thread two values as inputs and outputs: the count \( n \) of the number of evaluation steps, which is incremented at each call to \( \mathcal{E} \), and the limit \( l \) of the number of steps, after which a checkpointing interrupt is triggered. With this CPS evaluator, it is possible to implement the general-purpose checkpointing interface (Fig. 4),
not for programs in the host, but for programs in the target; hence our choice of formulating the implementation around an evaluator (interpreter). We remove this restriction below. The implementation of PRIMOPS calls the evaluator with no limit and simply counts the number of steps to completion. The implementation of CHECKPOINT calls the evaluator with a limit that must be smaller than that needed to complete, so a checkpointing interrupt is forced and the checkpoint data structure is returned. The implementation of RESUME calls the evaluator with arguments from the saved checkpoint data structure. With this, it is possible to reformulate the FORTRAN example from Listing 1 in vlad (Listing 2). Then one achieves binomial checkpointing simply by calling \( \mathcal{J}\ f\ 3\ 1 \).

The efficacy of our method can be seen in the plots (Fig. 6) of the space and time usage, relative to that for the leftmost datapoint, of the above FORTRAN and vlad examples with varying \( n \). TAPENADE was run without checkpointing, with manual checkpointing only around the body of the outer loop, with manual checkpointing only around the body of the inner loop, with manual checkpointing around the bodies of both loops, and with binomial checkpointing. vlad was run with \( \overleftarrow{\mathcal{F}} \) and \( \mathcal{J} \). Note that TAPENADE exhibits \( O(n) \) space and time usage for all cases, while vlad exhibits \( O(n) \) space and time usage with \( \overleftarrow{\mathcal{F}} \), but \( O(1) \) space usage and \( O(n) \) time usage with \( \mathcal{J} \).

The space complexity of \( \mathcal{J} \) is the sum of the space required for the checkpoints and the space required for the tape. For a general computation of length \( t \) and maximal live storage \( w \), the former is \( O(w \log t) \) while the latter is \( O(w) \). For the code in our example, \( t = O(n) \) and \( w = O(1) \), leading to the former being \( O(\log n) \) and the latter being \( O(1) \). We observe \( O(1) \) space usage since the constant factors of the latter overpower the former. The time complexity of \( \mathcal{J} \) is the sum of the time required to (re)compute the primal and the time required to perform the reverse sweep. For a general computation, the former is \( O(t \log t) \) while the latter is \( O(t) \). For the code in our example, the former is \( O(n \log n) \) and the latter is \( O(n) \). We observe \( O(n) \) time usage since, again, the constant factors of the latter overpower the former.

Other methods present themselves for implementing the general-purpose checkpointing interface. One can use POSIX fork() much in the same way that it has been used to implement the requisite nondeterminism in probabilistic programming languages like probabilistic C [8]. A copy-on-write implementation of fork(), as is typical, would make this reasonably efficient and allow it to apply in the host, rather than the target, and thus could be used to provide an overloaded implementation of binomial checkpointing in a fashion that was largely transparent to the user. Alternatively, direct-style code could be compiled into CPS using a CPS transformation. A compiler for a language like vlad can be constructed that generates target code in CPS that is instrumented with step counting, step limits, and checkpointing interruptions. A driver can be wrapped around such code to implement \( \mathcal{J} \). Existing high-performance compilers, like SML/NJ [9], for functional languages like SML, already generate target code in CPS, so by adapting such to the purpose of AD with binomial checkpointing, it seems feasible to achieve high performance.
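The three-operation interface itself can also be modeled in miniature outside any evaluator. In this hedged R sketch the 'program' is just a list of step functions, so interruption reduces to slicing the list; real implementations interrupt a CPS evaluator or fork a process instead, and all names here are ours.

```r
# PRIMOPS/CHECKPOINT/RESUME over a program represented as a list of steps.
compose <- function(steps, x) Reduce(function(s, f) f(s), steps, x)

primops    <- function(steps, x) list(y = compose(steps, x), n = length(steps))
checkpoint <- function(steps, x, n)
  list(rest = steps[-seq_len(n)], state = compose(steps[seq_len(n)], x))
resume     <- function(u) compose(u$rest, u$state)

f <- rep(list(function(x) x + 1), 8)          # a toy 8-step program
u <- checkpoint(f, 0, primops(f, 0)$n %/% 2)  # run half, save a checkpoint
resume(u)                                     # completes the run, yielding 8
```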
In fact, the overhead of the requisite instrumentation for step counting, step limits, and checkpointing interruptions need not be onerous because the step counting, step limits, and checkpointing interruptions for basic blocks can be factored, and those for loops can be hoisted, much as is done for the instrumentation needed to support storage allocation and garbage collection in implementations like MLton [10], for languages like SML, that achieve very low overhead for automatic storage management.

\[
\begin{align*}
\mathcal{A}\ k\ n\ l\ ((\lambda x.\,e), \rho)\ v &= \mathcal{E}\ k\ n\ l\ \rho[x \mapsto v]\ e\\
\overrightarrow{\mathcal{F}}\ k\ n\ l\ v_1\ v_2\ v_3 &= \mathcal{A}\ (\lambda n\ l\ v.\ \textbf{let}\ (v_4 \triangleright v_5) = v\ \textbf{in}\ k\ n\ l\ v_4\ v_5)\ n\ l\ v_1\ (v_2 \triangleright v_3)\\
\overleftarrow{\mathcal{F}}\ k\ n\ l\ v_1\ v_2\ v_3 &= \mathcal{A}\ (\lambda n\ l\ v.\ \textbf{let}\ (v_4 \triangleleft v_5) = (v \triangleleft v_3)\ \textbf{in}\ k\ n\ l\ v_4\ v_5)\ n\ l\ v_1\ v_2\\
\mathcal{E}\ k\ l\ l\ \rho\ e &= (k, l, \rho, e)\\
\mathcal{E}\ k\ n\ l\ \rho\ c &= k\ (n+1)\ l\ c\\
\mathcal{E}\ k\ n\ l\ \rho\ x &= k\ (n+1)\ l\ (\rho\ x)\\
\mathcal{E}\ k\ n\ l\ \rho\ (\lambda x.\,e) &= k\ (n+1)\ l\ ((\lambda x.\,e), \rho)\\
\mathcal{E}\ k\ n\ l\ \rho\ (e_1\ e_2) &= \mathcal{E}\ (\lambda n\ l\ v_1.\ \mathcal{E}\ (\lambda n\ l\ v_2.\ \mathcal{A}\ k\ n\ l\ v_1\ v_2)\ n\ l\ \rho\ e_2)\ (n+1)\ l\ \rho\ e_1\\
\mathcal{E}\ k\ n\ l\ \rho\ (\textbf{if}\ e_1\ \textbf{then}\ e_2\ \textbf{else}\ e_3) &= \mathcal{E}\ (\lambda n\ l\ v_1.\ \textbf{if}\ v_1\ \textbf{then}\ \mathcal{E}\ k\ n\ l\ \rho\ e_2\ \textbf{else}\ \mathcal{E}\ k\ n\ l\ \rho\ e_3)\ (n+1)\ l\ \rho\ e_1\\
\mathcal{E}\ k\ n\ l\ \rho\ (\circ\, e) &= \mathcal{E}\ (\lambda n\ l\ v.\ k\ n\ l\ (\circ\, v))\ (n+1)\ l\ \rho\ e\\
\mathcal{E}\ k\ n\ l\ \rho\ (e_1 \bullet e_2) &= \mathcal{E}\ (\lambda n\ l\ v_1.\ \mathcal{E}\ (\lambda n\ l\ v_2.\ k\ n\ l\ (v_1 \bullet v_2))\ n\ l\ \rho\ e_2)\ (n+1)\ l\ \rho\ e_1\\
\mathcal{E}\ k\ n\ l\ \rho\ (\overrightarrow{\mathcal{F}}\ e_1\ e_2\ e_3) &= \mathcal{E}\ (\lambda n\ l\ v_1.\ \mathcal{E}\ (\lambda n\ l\ v_2.\ \mathcal{E}\ (\lambda n\ l\ v_3.\ \overrightarrow{\mathcal{F}}\ k\ n\ l\ v_1\ v_2\ v_3)\ n\ l\ \rho\ e_3)\ n\ l\ \rho\ e_2)\ (n+1)\ l\ \rho\ e_1\\
\mathcal{E}\ k\ n\ l\ \rho\ (\overleftarrow{\mathcal{F}}\ e_1\ e_2\ e_3) &= \mathcal{E}\ (\lambda n\ l\ v_1.\ \mathcal{E}\ (\lambda n\ l\ v_2.\ \mathcal{E}\ (\lambda n\ l\ v_3.\ \overleftarrow{\mathcal{F}}\ k\ n\ l\ v_1\ v_2\ v_3)\ n\ l\ \rho\ e_3)\ n\ l\ \rho\ e_2)\ (n+1)\ l\ \rho\ e_1
\end{align*}
\]

Figure 5: CPS evaluator for vlad. The first rule for \( \mathcal{E} \) fires when the step count reaches the limit \( l \), returning a checkpoint data structure \((k, l, \rho, e)\) instead of calling the continuation.

Listing 2: vlad example

```
(define (f x)
  (let ((n 100000))
    (let outer ((i 1) (y x))
      (if (> i n)
          y
          (let ((m (l x i)))
            (let inner ((j 1) (y y))
              (if (> j m)
                  (outer (+ i 1) y)
                  (inner (+ j 1) (sqrt (* y y))))))))))
```

Figure 6: Space and time usage of reverse-mode AD with various checkpointing strategies, relative to the space and time for the first datapoint for each respective strategy.

Acknowledgments

This work was supported, in part, by NSF grant 1522954-IIS and by Science Foundation Ireland grant 09/IN.1/I2637. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
{"Source-Url": "http://www.bcl.hamilton.ie/~barak/papers/ad2016a.pdf", "len_cl100k_base": 4876, "olmocr-version": "0.1.49", "pdf-total-pages": 4, "total-fallback-pages": 0, "total-input-tokens": 17275, "total-output-tokens": 5977, "length": "2e12", "weborganizer": {"__label__adult": 0.0004055500030517578, "__label__art_design": 0.00036406517028808594, "__label__crime_law": 0.0003991127014160156, "__label__education_jobs": 0.0006279945373535156, "__label__entertainment": 0.00010097026824951172, "__label__fashion_beauty": 0.00019884109497070312, "__label__finance_business": 0.0002827644348144531, "__label__food_dining": 0.0005369186401367188, "__label__games": 0.0006351470947265625, "__label__hardware": 0.0012884140014648438, "__label__health": 0.0008625984191894531, "__label__history": 0.0003237724304199219, "__label__home_hobbies": 0.00015485286712646484, "__label__industrial": 0.0007433891296386719, "__label__literature": 0.00031948089599609375, "__label__politics": 0.0003380775451660156, "__label__religion": 0.0006499290466308594, "__label__science_tech": 0.08734130859375, "__label__social_life": 0.00012230873107910156, "__label__software": 0.006618499755859375, "__label__software_dev": 0.89599609375, "__label__sports_fitness": 0.0004906654357910156, "__label__transportation": 0.000804901123046875, "__label__travel": 0.00024819374084472656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19151, 0.02036]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19151, 0.4225]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19151, 0.80205]], "google_gemma-3-12b-it_contains_pii": [[0, 4992, false], [4992, 10645, null], [10645, 16552, null], [16552, 19151, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4992, true], [4992, 10645, null], [10645, 16552, null], [16552, 19151, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19151, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19151, null]], "pdf_page_numbers": [[0, 4992, 1], [4992, 10645, 2], [10645, 16552, 3], [16552, 19151, 4]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19151, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
3684206526497658ce1e48ac1de8ed22a500ff21
An EMF-based Toolkit for Creation of Domain-specific Data Services

Andreas Bender, Stefan Bozic and Ivan Kondov

Steinbuch Centre for Computing (SCC), Karlsruhe Institute of Technology (KIT), Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany

Keywords: Metamodel, Eclipse Modeling Framework, Dataflow, Data Model, Workflow, Application Integration, Web Service.

Abstract: Development of composite workflow applications in science and engineering is troublesome and costly due to the high heterogeneity of data representations and data access interfaces of the underlying individual components. As an effective solution we present a generic toolkit enabling domain experts to develop data models and automatically generate a self-contained data access service. We defined a custom metamodel based on Ecore which can be readily used to create domain-specific data models. Using the generated data access service, instances of the modeled data residing on heterogeneous and distributed resources, such as databases and cloud data stores, are accessible from the individual application components via a language-independent Web service interface. We discuss the framework architecture, the toolkit implementation, the deployment process, as well as the performance of the data access service. Workflow designers as target users would benefit from the toolkit by using it for rapid and cost-efficient application integration.

1 INTRODUCTION

The complexity of applications for simulation and data analysis in science and engineering has increased dramatically over recent years. Such applications, often designed in the form of generic workflows, combine multiple software components originating from diverse application domains. Thus, developing such a composite application requires significant effort either for coordination of experts from these domains or for learning multiple domain-specific program codes by individual researchers. Scientists designing workflow applications face a rapidly increasing number of different programs which very often carry out the same functions. Thus the time necessary to implement changes, which are typically made at short notice and rather frequently, has increased substantially. On the other hand, current environments for simulation and data analysis provided at computing centers are still too complicated for use by non-experts. Since computer simulations and data analyses are typically planned and carried out by scientists and engineers who are often non-experts in the technical field of computer science and software engineering, a correspondingly easy-to-handle software infrastructure has to be provided. For this purpose, the paradigms of service-oriented architecture (SOA) (Erl, 2005) and model-driven engineering (MDE) (Schmidt, 2006) have been adopted to develop systems at such a level of technical abstraction that application domain experts can develop composite applications by integrating existing components following their design, rather than spending effort on the technicalities of the underlying computing environment. For instance, the SOA concept has recently been adopted to integrate multiple components into workflow applications for multiscale materials simulation (Kondov et al., 2011; Bozic et al., 2012), motivated by demands in the community. The major challenge we met in constructing a workflow application is the handling of data exchange between the workflow steps, which is often referred to as dataflow.
Data modeling in workflow specification has been shown to be very important to avoid potential dataflow problems (Sadiq et al., 2004). Although complex data must be stored in such a way that all workflow steps can easily access it, many program codes usually cannot use the same data source in practice because they have mutually incompatible data representations, heterogeneous data formats or non-uniform data access interfaces. A common strategy employed in multiple application domains is based on format converters between the workflow steps, which operate on a limited set of supported formats. This strategy was found unsatisfactory because it requires tedious, frequent and error-prone reimplementation of converters for each combination of simulation or analysis codes in a workflow application (Bozic and Kondov, 2012). Moreover, multiple conversions of large data can become a bottleneck and even make the workflow simulation unfeasible. Also, the converter solution is usually restricted to storage on a file system, which introduces the need to manage additional metadata along the workflow. To treat the dataflow more efficiently, different domain-specific non-transferable solutions have been developed in diverse application domains, e.g. in fusion plasma engineering (Manduchi et al., 2008), nuclear magnetic resonance (Vranken et al., 2005; Fagh et al., 2005; Nowling et al., 2011), computational chemistry (Murray-Rust et al., 2011; Birkenheuer et al., 2012), molecular engineering (Dubitzky et al., 2004; Sild et al., 2005; Sild et al., 2006), the oil and gas industry (Rahon et al., 2012) and materials science (Bozic et al., 2012; Bozic and Kondov, 2012), whereby some of them have employed a model-driven approach. For example, the integration of COSMOS, a program code applied in computational nuclear magnetic resonance, into the CCPN data model was straightforward (Schneider et al., 2012), while our attempt to adopt the same data model in other domains, e.g. in computational materials science, was less successful. In particular, we realized that a generic data model (a metamodel) and a generic tool are necessary to allow any domain expert in the role of workflow designer to develop domain-specific data models. Moreover, the data should be made accessible from each individual application code via a language-independent interface, e.g. a Web service, for rapid implementation of pre- and post-processors and construction of workflows from standard components. In this paper we present a generic solution for modeling and management of data in scientific workflows in a simple and uniform way. We report on a development of the concept from Bozic and Kondov (2012), resulting in a novel SOA- and MDE-based framework using modern standards for Web services. We will describe the implementation of the framework and discuss its functionality and general applicability. This service-oriented framework is based on a custom metamodel and domain models from which a complete service can be created with the Eclipse Modeling Framework (EMF). Elsewhere (Bender et al., 2013) we have demonstrated the applicability and the practical benefits of the framework in the use case of a composite multiscale modeling application in computational science. Here, we will focus on the program architecture and report on the technical implementation of the framework as a toolkit for model-driven automatic generation of data access services. The paper is organized as follows.
In Section 2 we introduce the requirements and then in Section 3 discuss the conceptual design and program architecture of the toolkit. In Section 4 we discuss our specific selection of technologies which were used for the toolkit implementation and the generation process as described in Section 5. Further, in Section 6, we outline the unique advantages of our approach and analyze the performance of the data access service generated for a specific use case. In Section 7 we review related approaches in the context of the present work. Finally, in Section 8, we summarize the key results of this work and suggest directions for future work.

2 REQUIREMENTS ANALYSIS

In order to develop and implement a concept for the framework we have considered all lessons learned from previous experience and the requirements of data modeling in different domains. In the following we will outline the essential requirements. The framework should

- be generic and domain-independent so that it can be used in different domains with no further modification,
- act as a bridge for access to heterogeneous and distributed storage from distributed workflow applications, thus providing two interfaces: one for the storage and one for the application side,
- provide a modeling environment, containing a graphical editor, for creation of data models for domain-specific needs,
- provide utilities for fully automatic code generation of all components, because only in this case will it provide a low-effort and low-threshold solution for the end-user,
- provide a language-independent application access interface, e.g. a Web service,
- provide an abstract storage access interface allowing applications to connect to distributed and heterogeneous data storage resources, e.g. relational databases, simple files or cloud storage services such as Amazon S3.

The implementation of the framework will be a toolkit providing interfaces that assist the user in all steps of the service creation: from the construction of a data model, through code generation of a service, up to the deployment as a data access server. The toolkit should be operating system-independent and preferably based on open-source technology. In addition, we aspire to use the modular service platform OSGi, which enables reuse of components, reduced complexity during component development, versioning, simple deployment and dynamic updates.

3 FRAMEWORK ARCHITECTURE

In this section we describe the architecture of the framework, starting with the selection of the metamodel from which all domain-specific data models will be instantiated. Afterwards the concept of a Service Generator, which transforms a given data model into a set of kernel classes, is explained. Finally we will have a closer look at the generic kernel which provides two interfaces connecting a storage resource with arbitrary workflow applications.

3.1 Metamodel

The creation of individual domain-specific models, as aimed at by our framework, is part of an MDE process. The Meta Object Facility (MOF) is an MDE standard defined by the Object Management Group (OMG, http://www.omg.org/) forming the base for building custom metamodels, as shown on the left hand side of Fig. 1. The M3 layer is the most generic one and defines a standard for the creation of meta-metamodels which are needed to build metamodels. Metamodels reside in the M2 layer. The most prominent representative of this layer is the Unified Modeling Language (UML) (OMG, 2003).
However, UML class diagrams contain methods and other components beyond those needed to model data entities and relationships. This UML complexity should not be exposed to the user constructing domain-specific models. To solve this problem we used the M3 (Ecore) layer to create a custom simple data-centric metamodel (in the M2 layer) as an alternative to UML which can be understood by domain experts having no IT knowledge. This custom metamodel is restricted to the minimum necessary for modeling and managing data and forms the base for domain models which are part of the M1 layer. Thus, the proposed custom metamodel is innovative because it implements a practical trade-off between genericity and instant usability. A concrete instance of the M1 layer is a model that describes real objects and related data of the M0 layer. The objects of reality pertinent to the M0 layer represent domain-specific data units that users would like to make persistent in a storage instance. The meaning and value of such data units depend on the corresponding domain. For example, in computational materials science such objects could be atoms, molecules and chemical bonds between atoms. We designed the custom metamodel primarily for simulations and data analysis in scientific and engineering applications. However, owing to the capabilities of Ecore, the M2-layer metamodel can be readily extended to allow typing of data structures such as data cubes. The diagram in Fig. 1 shows the newly defined metamodel which is based on Ecore. The main element of the metamodel is the DataEntity which is used to model concrete data entities. These entities contain Attributes of different data types and may also contain References to other entities. With these references, relationships or dependencies between entities can be defined. DataEntities are combined in a Package. A package is necessary to define identification information, such as a URI, in order to distinguish between different data models. It is planned to extend the metamodel with the concept of inheritance, allowing the creation of more complex data models with less effort. It is also intended to further investigate the applicability of the metamodel in different, more extensive domains.

3.2 Service Generator

Figure 2 depicts the generation process for a domain-specific Data Access Service. A domain expert uses a graphical editor to create a domain model based on the metamodel. Additionally the user has to define a set of properties that are needed to gain access to the target storage resource, typically including a connection URL and security credentials. Another set of properties is mandatory to define the Web service interface. The metamodel, the domain model and the service properties are then used by the Service Generator to transform the model to the final Data Access Service.

3.3 Data Access Service

To support a large number of applications and make the Data Access Service accessible via the Internet, we adopted the SOA paradigm making use of Web services technologies and open standards. The main component of the Data Access Service is a kernel that connects applications via a Web service with a data storage. The kernel has a three-layer architecture shown in Figure 3. The persistence layer handles the mapping between data objects and storage entities. A well-known technology to realize this is O/R (object-relational) mapping (Ambler, 2012).
Unfortunately this technology is limited to relational databases, so we decided to use a storage-type-independent solution for this part. The toolkit users should be able to choose a storage type for their data which fits best to their problem domain, for instance relational, graph-based, web-based or document-based data stores. The purpose of the representation layer is to marshal/unmarshal data objects to a transport format, e.g. XML or JSON. The most complex layer is the resource layer which defines the Web service interface. The layer acts as a controller mapping the Web service operations to CRUD (Create, Read, Update, Delete) operations of the persistence layer. Furthermore it is responsible for delivering and receiving marshalled data objects using the representation layer.

4 TECHNOLOGY SELECTION

Previously (Bender et al., 2013) we have briefly introduced the technology stack which has been derived from the requirements and the framework concept. In this section we will discuss the pros and cons of different technologies available for the implementation. We suppose that such a discussion is interesting in the domain of model-based engineering and can be used to improve existing modeling tools.

4.1 Model Development

The Eclipse Modeling Framework (EMF) has been established within the Eclipse Modeling Project (http://www.eclipse.org/modeling/) providing model-based development technologies and a large collection of modeling tools for the Java programming environment, including graphical tools for construction of models and metamodels, editors, and generators of source code. Since we have the requirement to define a metamodel and corresponding graphical editors as well as a code generator, EMF seems to be the technology of choice.

4.2 Model Transformation

The Eclipse Model To Text (M2T, http://www.eclipse.org/modeling/m2t/) project comprises three generator tools which are capable of transforming concrete models into text or source code using different template languages: Java Emitter Templates (JET), Xpand and Acceleo. JET uses a language that is similar to Java Server Pages (JSP), while Xpand uses a self-developed template language and the template language of Acceleo is an implementation of the MOF Model to Text Transformation Language standard (MOFM2T) of the OMG. Xpand and Acceleo assist the template developer with rich text editors that provide syntax highlighting, syntax validation and code completion. The standard template editor of JET is not yet fully developed. The template editors of Xpand and Acceleo look very similar in terms of usability and functionality. Because, in addition, Acceleo follows the MOFM2T standard, we decided to build our generator tool with the help of Acceleo.

4.3 Editor Development

The Graphical Editing Framework (GEF) supports developers in creating graphical editors for the Eclipse Platform. Such editors provide convenient means to create complex objects such as state diagrams and process flow editors. To simplify the creation of graphical editors based on EMF metamodels and GEF, the Graphical Modeling Project (GMP) provides generation components and runtime infrastructure. Especially the Graphical Modeling Framework (GMF) allows the development of diagram editors based on Ecore models without programming a single line of code.

4.4 Persistence Layer

A main requirement of the service is the possibility to save data in different kinds of storage.
A flexible way to handle this issue is provided by the Java Data Objects (JDO) standard (Oracle Corporation, 2013b). JDO is an annotation-driven framework which maps Java objects to storage entities. The reference implementation of JDO is DataNucleus (http://www.datanucleus.org), which supports a large set of various storage types such as RDBMS (Oracle, MySQL), map-based (HBase, Cassandra), document-based (MongoDB) and web-based (Amazon S3, Google Storage) storages. Several EMF-based frameworks address the topic of persisting models in different ways. Connected Data Objects (CDO, http://www.eclipse.org/cdo/) can be used to store EMF models in a central repository. Although the pluggable storage backend of CDO is very promising, the variety of supported storage types is very limited and covers mostly relational database systems. A similar approach regarding the persistence of models is employed by the framework Teneo. It provides a model-relational mapping using Hibernate and EclipseLink. Teneo is also used in the CDO Hibernate Store and supports only relational databases. A very interesting EMF-based framework, especially considering the resource layer which is discussed in the next section, is Texo. From a model definition it builds a web server which uses the Java Persistence API (JPA) (Oracle Corporation, 2013c) to store model data in relational data stores. In addition it provides a REST interface which allows clients to retrieve and modify data objects over a network. For this purpose the data objects are serialized in XML or JSON. Unfortunately it also does not fulfill the requirement that different types of data storage systems, e.g. document-oriented or cloud storage systems, should be supported. Therefore we decided to develop our own flexible solution with JDO and DataNucleus.

4.5 Resource Layer

The resource layer should provide a programming-language-independent interface so that different applications can access data objects in a simple and uniform way. As a solution we chose a REST-based Web service (Richardson and Ruby, 2007). We decided to use REST over a SOAP-based Web service (W3C, 2007) because the processing of domain data will follow the CRUD functionality, which is represented by the standard HTTP methods (GET, PUT, POST, DELETE). Furthermore the SOAP protocol has a significant overhead due to the increased data volume transferred by the service. REST services with Java are defined by the JAX-RS specification (Oracle Corporation, 2013d). We used Jersey (http://jersey.java.net/) for the implementation of the resource layer because it is the reference implementation of the JAX-RS specification and is widely used by well-known Java projects.

4.6 Representation Layer

To combine the JDO technology with the REST service, we needed to transform Java objects into a language-independent representation format, e.g. XML or JSON. A standard for this purpose is JAXB (Java Architecture for XML Binding) (Oracle Corporation, 2013a), which can be used for binding XML data to Java objects and vice versa. Thus, object instances of the representation layer have dependencies on the model and resource layers and transform the incoming data from the application layer to the corresponding JDO objects of the storage layer.

4.7 Technology Stack

An overview of the chosen technologies is given in the technology stack in Fig. 4. The stack has five layers of components that we used to build a toolkit for generation of Data Access Services.
The layers of the stack have a fixed sequence owing to the dependencies between the layers. The sequence of the layers gives rise to the specific build process that is explained in Section 5.2. The first layer includes the definition of new domain-specific models, done with a simple graphical editor based on EMF or a more complex diagram editor created with GMF. Acceleo templates are used to transform the model definition to concrete Java source code for the kernel API, which is built with implementations of the specifications for JDO, JAXB and JAX-RS. Then an Eclipse plug-in compiles the kernel API to concrete Java bytecode. Finally the compiled kernel code is packaged and deployed in a lightweight runtime environment for Web services, for example Apache Tomcat or Jetty from the Eclipse Foundation, using Ant.

5 TOOLKIT IMPLEMENTATION

One of the objectives is to provide a modeling environment which supports domain experts with graphical editors for creating their models, including a self-explanatory GUI to manage the build process of the Service Generator and to configure the application and storage access layers. To tackle this task we have implemented a toolkit based on the Eclipse platform, which in turn is based on the Equinox OSGi (http://www.eclipse.org/equinox/) implementation and available for many operating systems. All toolkit components were developed as Eclipse plug-ins and made available on an Eclipse update site so that users can install the toolkit with a few clicks into their Eclipse installation. The plug-ins can be classified into three categories. The first category comprises the plug-ins that are based on EMF and are generated mostly automatically. The second category contains the generation plug-in which manages the model-to-text transformation process. Here the Acceleo templates are specified which are used to generate all source and configuration files of the Data Access Service from a domain model. The third category includes the main plug-in of the toolkit, which we will discuss, as well as the EMF-based plug-ins, in more detail below.

5.1 EMF-based Plug-ins

The first plug-in in this category specifies the Ecore representation of the metamodel. Additionally, a generator model has been derived automatically by EMF from the metamodel; it stores parameters for the EMF code generator used to create source code for other plug-ins described below. These two models form the base for additional GMF models (gmfgraph, gmftool, gmfmap, gmfgen) that are used to create a high-quality diagram editor. A plug-in whose content is based on the generator model is the edit plug-in. Its classes define the base for graphical model editors. Some included icons are used to mark the different model elements in the editor view. To reduce the number of selectable choices for the data type of an Attribute, we made some changes to this plug-in so that only the most commonly used types, e.g. Boolean, Integer, Float and String, are displayed in the editors. Also based on the generator model, an editor plug-in has been generated. It contains classes for an entire graphical editor consisting of multiple views (model view and properties view). Additionally the plug-in adds a wizard to the Eclipse platform which assists users with the creation of new model files. To provide a graphical model editor which is more comprehensive than the one defined before, we built a diagram editor plug-in with GMF.
The resulting diagram editor looks similar to a simple UML class editor and provides a more sophisticated overview of the data entities and the connections between them.

### 5.2 Main Plug-in

The main plug-in references the others to provide an intuitive graphical interface, and it hides the details of the generation process of the Data Access Service from the users. With the help of wizards, users can create a new project in Eclipse which contains a default model file as well as a basic properties file; these are starting points for the users to define a new domain model. When a model is designed, the user can execute a wizard which will guide them through the build process. The user can choose whether only a web application file should be generated or a full-blown, ready-to-start web server with the web application already installed. To make this possible we distribute a Jetty web server with the toolkit. Although we provide Jetty by default, the generated web application is runnable on different servlet containers or Java application servers, e.g. Apache Tomcat or GlassFish.

A wizard parametrizes and manages the build process for the Data Access Service. The user has to enter several properties regarding the storage and the Web service layer that are mandatory to create a functional kernel. If the user enters valid parameters, the wizard executes a sequential workflow which builds the Data Access Service with the help of Ant. Figure 5 shows the build process in more detail. The initialization step creates a folder structure for the generic Java source code, properties files and compiled classes. Then the code generation with Acceleo from the user-generated domain model and a properties file is triggered. The generated Java classes are stored in a package structure which reflects the kernel architecture presented in Fig. 3. The usage of DataNucleus implies an enhancement step, which is necessary to extend the bytecode of the classes from the resource package with additional functionality needed for persistence. In the deployment phase the compiled and enhanced classes, constituting the ready-to-run Data Access Service, are packed into a WAR archive. Finally the WAR file and additional storage drivers are copied to a Jetty web server, which is then zipped.

6 BENEFITS OF THE DATA ACCESS SERVICE

In this section we discuss some aspects of our model-driven Data Access Service that have been addressed to gain acceptance in the scientific community and industry.

### 6.1 Deployment

The Service Generator produces a standard WAR file (Web application ARchive) containing the generated kernel classes and configuration files; together they constitute a web application which can be deployed on different application servers. We have successfully deployed and run the generated WAR file on Apache Tomcat and Jetty web servers. These servers can host multiple instances of Data Access Services in parallel. To increase automation, the Service Generator has the ability to create a ready-to-use Jetty web server containing the web application together with a set of tested storage drivers.

### 6.2 Security

Security is always important when processing user data, especially when the service is available via the Internet. Application servers provide a lot of functionality in the area of security, covering encryption, authentication and authorization. The web servers mentioned above support data encryption via the SSL protocol.
Since we are using REST technology, authentication and authorization can be handled via Basic or Digest authentication, both of which are well supported by the Web server vendors. Besides that, alternative authentication protocols, e.g. Shibboleth, public key mechanisms or Kerberos, can be used to protect the service from unauthorized parties.

### 6.3 Evolutionary Design

Usually domain models are subject to permanent change. This increases the effort for developing scientific workflows, because changes in the domain model require changes in all components of the simulation infrastructure, in particular on the client and the storage side. Changes on the client side can be minimized by generating the client code from the Web service description that is automatically published by the Data Access Service. The major advantage of using JDO in the persistence layer is the support of schemaless storages, e.g. the NoSQL database MongoDB. Such databases remain untouched when the domain model changes, because the structure of the stored documents is allowed to vary. DataNucleus can handle changes to the domain model even for relational databases, as long as the model changes are limited to the addition of new entities or attributes: DataNucleus will add new tables and columns to the database schema automatically. After the deletion or modification of an existing entity or attribute, however, the database schema must be updated manually.

### 6.4 Service Access

The implementation of the application access layer as a RESTful service allows access from applications written in any programming language with HTTP support. Furthermore, the transport format is based on XML or JSON, which are also in common use across many programming languages. The REST service is described in a WADL file (Web Application Description Language) including the URIs of the service resources. An XSD schema file, also provided by the service, describes the XML representation that is used to transfer the data entities between a client application and the service. In the Java environment, the Jersey Client API and the JAXB API can be used to create service stubs for clients based on the WADL and XSD files. On the one hand this reduces the effort of programming client code to a minimum; on the other hand it even makes the integration of some applications consisting of many components feasible at all. Applied to constructing workflows from standard components, the Data Access Service provides a better alternative to the common practice of using format converters between workflow steps, by replacing m:n transformations with a 1:n transformation. Even existing tools, which operate on a limited set of supported formats, benefit in that they do not have to exchange data in any other format; the only remaining data transformation is the one between a tool's internal representation and the one imposed by the data model.

### 6.5 Performance

An important factor for the acceptance of the toolkit is the performance of reading and writing data entities via the service. To get some insight we have carried out performance measurements for three different storage types as persistence backends: the document-based MongoDB, the relational database system PostgreSQL, and the cloud storage Amazon S3. For this purpose we defined an example entity type named Atom, representing an atom with attributes such as position coordinates, charge and element type.
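Since the paper does not list the measurement code, the following hedged sketch shows how such a write test could be driven from the standard JAX-RS 2.0 client API (of which Jersey is the reference implementation). The service URL, the reuse of the `Atom` class from the earlier sketch, and the assumption that the service accepts both single entities and bundled lists are all hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Entity;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.GenericEntity;
import javax.ws.rs.core.MediaType;

public class AtomWriteBenchmark {
    public static void main(String[] args) {
        Client client = ClientBuilder.newClient();
        // Hypothetical URL of a deployed Data Access Service.
        WebTarget atoms = client.target("http://localhost:8080/das/atoms");
        int n = 10_000;

        // Sequential mode (seq): one POST request per entity.
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            atoms.request()
                 .post(Entity.entity(makeAtom(i), MediaType.APPLICATION_XML))
                 .close();
        }
        long seqMs = (System.nanoTime() - t0) / 1_000_000;

        // Bundled mode (list): all entities in a single POST request.
        List<Atom> batch = new ArrayList<>();
        for (int i = 0; i < n; i++) batch.add(makeAtom(i));
        t0 = System.nanoTime();
        atoms.request()
             .post(Entity.entity(new GenericEntity<List<Atom>>(batch) {},
                                 MediaType.APPLICATION_XML))
             .close();
        long listMs = (System.nanoTime() - t0) / 1_000_000;

        System.out.printf("seq: %d ms, list: %d ms%n", seqMs, listMs);
    }

    private static Atom makeAtom(int i) {
        Atom a = new Atom();
        a.x = i; a.y = 0.0; a.z = 0.0;  // arbitrary test coordinates
        a.charge = 0.0;
        a.element = "C";
        return a;
    }
}
```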
The performance test was done with a Jersey client that wrote Atom entities in XML format via HTTP POST requests to the storage. Besides the number of transferred entities, we distinguished whether the entities are transferred bundled in one POST request (list) or sequentially (seq) in several POST requests. With these measurements we could examine the overall effort for writing data from the application to the storage, as displayed in Fig. 6. For all measurements MongoDB outperforms PostgreSQL. For up to 10000 atom entities all write methods scale almost linearly; then there is a crossover point after which a steeper linear region is observed. The difference between MongoDB and PostgreSQL is smaller than that between the single and the bundled write modes, i.e. the effect of the choice between MongoDB and PostgreSQL is minor. Furthermore, MongoDB seems to scale better with an increasing number of entities for single writes (the corresponding curves diverge in Fig. 6). In contrast, for bundled writes, MongoDB and PostgreSQL seem to converge and might reach the same performance for sufficiently large data. Using the Amazon S3 storage, which is about 100 times slower than the slowest local storage and write mode (PostgreSQL/seq), the bundled transfer of data entities is always more efficient. The low performance is due to the JDO implementation used (DataNucleus), which executes additional service requests to Amazon over the network. The bottleneck seems to be either the network connection, the structure of the network communication, or constraints of the Amazon service. The possibility to choose between different storages depending on the usage scenario is a clear benefit of the Data Access Service.

7 RELATED WORK

In this section we outline some recent developments which have much in common with our present work. We are not aware of an environment satisfying all requirements of the user communities as discussed in Section 2. In specific scientific and engineering domains (for instance, in materials science) the practical benefit of model-driven engineering and service-oriented architectures is still limited. Existing solutions, some of which are described in the following, are either too generic, and hence inaccessible for such communities, or too specific and difficult to transfer to other application scenarios.

The MEMOPS (MEta-MOdel Programming System) code generation machinery (Fogh et al., 2010) is a good example of a graphical modeling tool; it was created by the Collaborative Computational Project for NMR (CCPN) (Vranken et al., 2005) and is deployed in the domain of nuclear magnetic resonance. MEMOPS offers a graphical tool where domain specialists can design their models in the UML (Unified Modeling Language) notation. The MEMOPS framework creates data access libraries (APIs) and a data storage implementation automatically from the model description. Unfortunately, the generated APIs are restricted to Python, Java and C, which limits the number of applications, and the only supported storage types are local XML files and SQL databases. Therefore, the system is limited to non-distributed applications, i.e. applications running locally. In another approach for biomolecular NMR data analysis, workflow models and conceptual and logical data models (Ellis et al., 2006) have been proposed, which have recently led to the CONNJUR integrated environment (Nowling et al., 2011) aiming to support the entire process of molecular structure determination.
Within the Integrated Tokamak Modeling Task Force, a data access layer (Manduchi et al., 2008) has been developed to provide the capability of storing and retrieving the data involved in a simulation using the Kepler workflow system. The underlying hierarchical data structure is based on the storage formats MDSplus and HDF5, and the granularity of data access is defined by a set of so-called Consistent Physical Objects.

The SIMPL framework architecture has been proposed for access to heterogeneous data sources in scientific workflows (Reimann et al., 2011). It has been designed as an extension to existing scientific workflow management systems to provide an abstraction for data management and data provisioning in scientific and engineering simulation workflows. However, the SIMPL framework does not provide means for meta-modeling, nor data models for the data structures.

Recently an approach called Morsa (Espinazo Pagán et al., 2011) has been proposed for scalable access to large models through on-demand loading, in which model persistence is realized using a NoSQL database. Performance benchmarks with a prototype for EMF have shown performance superior to CDO and XMI, especially with respect to the reduced memory footprint achieved by partial loading of the data.

More recently, the proposed language \( T_3 \) (Rabbi and MacCaull, 2012) enables model-driven development and generation of multi-component workflow applications. There, the aspects of the persistence component are less emphasized than in our present work. Rather, the elaborate language syntax provided allows for implementing procedural statements, ontology queries, declaring user interfaces, applying access control policies, and task scheduling, with Web service based access interfaces for client applications and resources.

MDE Eclipse tools have been employed for the industrial development of distributed scientific workflow applications in the oil and gas domain (Rahon et al., 2012). Similar to the approach in our present work, EMF/Ecore is used for modeling and Acceleo for code generation. Nevertheless, that realization does not seem to allow the same variety of storage back-ends, and it provides a C++ library API rather than a language-independent Web service as the client application interface.

8 CONCLUSIONS

In this paper we presented a generic framework for model-driven management of data in composite scientific and engineering applications. The essential result of the realization of the framework is an EMF-based toolkit, consisting of a set of Eclipse plug-ins, which enables domain experts to develop data models and automatically generate self-contained data access services. For this purpose we defined a custom metamodel, as a specialization of the Ecore meta-metamodel, which is optimized for handling domain-specific pure data models. A data access service is created automatically, employing a generative approach based on Acceleo and integrating JDO and JAX-RS into the service kernel, which contains a persistence layer, a resource layer and a representation layer. The data is then accessible from the individual client application components via the language-independent Web service interface of the data access service. The proposed solution is especially suitable for applications with high heterogeneity and complexity of data representations and with diversity in the programming languages of the integrated components and in the data storage resources used. The solution makes an evolutionary design of dynamically changing applications possible.
Thus the toolkit can be used in all application domains of computational science for the rapid development of complex dynamic applications and their effective deployment as Web services. Although the toolkit strongly reduces the technical burden of data modeling and management, it can be combined with ontology-based frameworks such as the Apache Jena framework (http://jena.apache.org/) to tackle model complexity even more effectively. Future work will focus on an analysis of the scalability of the data access service, in particular considering applications for data-intensive analyses and evaluation in production environments. Additionally, we envisage exploiting the modeling approach to steer throughput performance optimizations. Further practical aspects, such as model revisions and model changes during service operation, will also be investigated. As soon as the framework is released we will provide tutorials and example use cases to demonstrate its operability.

ACKNOWLEDGEMENTS

This work has been partially funded by the 7th Framework Programme of the European Commission within the Research Infrastructures, grant agreement number RI-261594, project MMM@HPC.

REFERENCES
{"Source-Url": "http://www.scitepress.org/Papers/2014/47019/47019.pdf", "len_cl100k_base": 7546, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 34373, "total-output-tokens": 10697, "length": "2e12", "weborganizer": {"__label__adult": 0.0003654956817626953, "__label__art_design": 0.0004901885986328125, "__label__crime_law": 0.0003879070281982422, "__label__education_jobs": 0.0015697479248046875, "__label__entertainment": 0.00012189149856567384, "__label__fashion_beauty": 0.00024247169494628904, "__label__finance_business": 0.0004622936248779297, "__label__food_dining": 0.00045108795166015625, "__label__games": 0.0005254745483398438, "__label__hardware": 0.001312255859375, "__label__health": 0.0008463859558105469, "__label__history": 0.0004487037658691406, "__label__home_hobbies": 0.00012767314910888672, "__label__industrial": 0.000885009765625, "__label__literature": 0.0003592967987060547, "__label__politics": 0.00035691261291503906, "__label__religion": 0.0006403923034667969, "__label__science_tech": 0.23095703125, "__label__social_life": 0.00014734268188476562, "__label__software": 0.0180511474609375, "__label__software_dev": 0.73974609375, "__label__sports_fitness": 0.0003566741943359375, "__label__transportation": 0.0007166862487792969, "__label__travel": 0.00024890899658203125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 47098, 0.03511]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 47098, 0.372]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 47098, 0.86589]], "google_gemma-3-12b-it_contains_pii": [[0, 4183, false], [4183, 9245, null], [9245, 13048, null], [13048, 16393, null], [16393, 20118, null], [20118, 24311, null], [24311, 28934, null], [28934, 33294, null], [33294, 37141, null], [37141, 42322, null], [42322, 47098, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4183, true], [4183, 9245, null], [9245, 13048, null], [13048, 16393, null], [16393, 20118, null], [20118, 24311, null], [24311, 28934, null], [28934, 33294, null], [33294, 37141, null], [37141, 42322, null], [42322, 47098, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 47098, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 47098, null]], "pdf_page_numbers": [[0, 4183, 1], [4183, 9245, 2], [9245, 13048, 3], [13048, 16393, 4], [16393, 20118, 5], [20118, 24311, 6], [24311, 28934, 7], [28934, 33294, 8], [33294, 37141, 9], [37141, 42322, 10], [42322, 47098, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 47098, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
06008a0f429af8b1b3c58ad63b0836fb76c91090
Keeping Intelligence under Control

Piergiuseppe Mallozzi, Patrizio Pelliccione, Claudio Menghi
Chalmers University of Technology | University of Gothenburg, Sweden
mallozzi@chalmers.se, patrizio.pelliccione@gu.se, menghi@chalmers.se

ABSTRACT

Modern software systems, such as smart systems, are based on a continuous interaction with the dynamic and partially unknown environment in which they are deployed. Classical development techniques, based on a complete description of how the system must behave in different environmental conditions, are no longer effective. On the contrary, modern techniques should be able to produce systems that autonomously learn how to behave in different environmental conditions. Machine learning techniques allow creating systems that learn how to execute a set of actions to achieve a desired goal. When a change occurs, the system can autonomously learn new policies and strategies for action execution. This flexibility comes at a cost: the developer no longer has full control over the system behaviour. Thus, there is no way to guarantee that the system will not violate important properties, such as safety-critical properties. To overcome this issue, we believe that machine learning techniques should be combined with suitable reasoning mechanisms aimed at assuring that the decisions taken by the machine learning algorithm do not violate safety-critical requirements. This paper proposes an approach that combines machine learning with run-time monitoring to detect violations of system invariants in the action execution policies.

KEYWORDS

Autonomous systems, Safety-critical, Reinforcement learning, Machine learning, Run-time verification

1 INTRODUCTION

There is a need for systems to be "smart". Smart systems continuously sense the environment in which they operate, are able to detect changes, and react to those changes. The environment in which smart systems operate is usually dynamic, uncontrollable, and partially known. For example, in the automotive domain, drivers' behaviours are sometimes unpredictable, animals can unexpectedly cross roads, etc. Smart systems have to deal with such unpredictability and uncertainty in a "self-adaptive" manner. Self-adaptation refers to the capability of a system to autonomously change its behaviour at run-time, without human intervention, in response to changes [1, 2]. Classical development techniques require a full description of the system behaviour in all the different environmental conditions. This is impractical, if not impossible, in smart systems, where a high number of environmental conditions must be considered for adaptation. For this reason, modern development techniques for smart systems must rely on techniques that allow creating systems that autonomously learn how to behave in different environmental conditions. Machine learning techniques are able to autonomously learn how to act on a running system to achieve a desired goal.
Such techniques are based on models trained on data and examples rather than on logic programs with predefined rules. The programmer is replaced by a machine that can continuously update its models as new data comes from the environment. Once trained, machine learning models allow changes to be handled effectively: when a change occurs, the system autonomously learns new policies for action execution. The use of machine learning drastically changes how software systems are developed, i.e., the choice of which actions to execute in the different environmental conditions is no longer in the developer's hands but is rather automatic. In this paper, we use reinforcement learning since it is a powerful machine learning technique for decision making.

The automatic support provided by machine learning techniques moves control from the developers' hands to the system. Thus, the developer is no longer able to ensure that the system will not violate important properties, i.e., invariants, which may represent, for example, safety-critical properties. In the automotive domain, for example, the developer no longer has control over whether a self-driving car is going to take a decision that is safe for the particular situation. Wrong decisions made by the machine learning engine may result in car accidents and serious injuries for the passengers. Thus, we believe that while there is a need for systems to be self-adaptive, there is also a need to "keep intelligence under control".

We envision a new approach, in the following named WiseML, to ensure that machine learning decisions do not cause the violation of a set of "important" properties. This approach combines machine learning, and specifically Reinforcement Learning (RL) [3], with run-time monitoring techniques which aim at ensuring the preservation of important safety-critical requirements. On the one side, WiseML allows systems to adapt through the use of machine learning techniques. On the other side, WiseML employs run-time monitoring to continuously check that the policies suggested by reinforcement learning will not violate a set of safety-critical requirements.

The paper is structured as follows. Section 2 describes some of the challenges of reinforcement learning when used as an instrument to enable run-time adaptation. Section 3 describes WiseML, our envisioned approach. Section 4 presents reinforcement learning, a machine learning technique that can be used within WiseML to enable self-adaptation. Section 5 presents monitoring techniques that can be used within WiseML. Section 6 discusses the proposed approach. Section 7 concludes with final remarks.

2 CHALLENGES OF REINFORCEMENT LEARNING

At each step, a reinforcement learning agent perceives an observation (the state of the environment), applies actions, and receives a reward. The goal that the agent has to achieve is expressed by conveying a reward signal for each action it applies. The agent will eventually learn a policy, i.e., which action to apply in each observed state, that maximises the expected cumulative reward. In this section, we describe some of the challenges of reinforcement learning, as also pointed out by Koopman [4].

**Overfitting problem.** Reinforcement Learning (RL) techniques require a set of training data that has to be independent of the validation data to avoid overfitting. One main problem with machine learning methods is that they are optimised for an average cost function and offer no guarantees for corner cases.
Challenges in this area are compounded by the fact that, when using methods such as neural networks, it is difficult for humans to understand the rules that have been learned simply by looking at the network's weights.

**Black swan.** When a neural network learns rules from a training set, if certain data is missing or wrongly correlated to the training data, the network may fail and potentially cause safety hazards. In other words, if there is a special case that the system has not experienced, it cannot correctly predict such a case; this is known as the black swan problem [5]. Hence, it is hard to detect and isolate bugs whose behaviour is not expressed with traditional lines of code but is entrusted to a neural network. The network would need to be retrained, with the potential risk of "unlearning" correct behaviours.

**Reward hacking.** An incorrect specification of the reward function can cause unexpected behaviours of the agent. One of the problems is reward hacking [6]: when the reward function does not exactly represent the designer's intention, the agent might optimise towards the imprecise reward function and behave unexpectedly, i.e., it exploits the reward function and manages to get a high reward without achieving what the designer intended. For example, in the case of a cleaning robot, if the reward function gives a positive reward for not seeing any mess, the agent might learn to disable its vision rather than clean up. If, instead, the reward is given only when the robot actually cleans up, the robot might learn to make a mess first and then clean it up, so that it keeps receiving more and more rewards.

3 WISEML

Figure 1 shows an overview of WiseML. WiseML considers both functional and non-functional requirements in the adaptation framework. Functional requirements are the Goals (G) WiseML must achieve. Non-functional requirements refer to the properties WiseML must ensure, represented in the form of Invariants (I). WiseML receives as input the goals G to be achieved (2) and a set of invariants I to be ensured (3). It perceives the state of the environment through a set of input variables (1) and performs on the environment the actions (4) that aim at reaching the desired goals.

WiseML uses a reinforcement learning agent as machine learning engine, indicated in Fig. 1 with an appropriate component, and a monitor component (7). Once trained, the reinforcement learning agent automatically computes the action to be executed. The monitor component blocks the actions that will most probably violate invariants and provides feedback to the machine learning component so that it learns from the mistakes intercepted by the run-time monitor (5). The interplay between the machine learning and the monitoring components is designed to enable the integration of enforcement techniques as well as other techniques able to manage the erroneous behaviours intercepted by the monitor. For instance, this would permit switching to a safety mode when the system is in a critical situation. For example, an RL agent in charge of driving may receive rewards related to the following goals: stay in the middle of the lane, drive at the desired speed, avoid obstacles, etc.
The monitor will continuously check at runtime that the agent does not produce actions that violate important invariants of the system: keep a certain distance to the other vehicles, do not go off road, do not crash, etc.

**Machine learning.** The reinforcement learning agent observes the state of the environment to detect how it reacts to the performed stimuli, i.e., the chosen actions (1). We envision the reinforcement learning agent using a Reward Shaping component, which issues rewards to the agent to drive it towards the goals to be achieved, penalising it for acting badly and rewarding it for acting well. Based on the rewards it collects, the agent learns a policy, i.e., a function that maps the observations from the environment to actions to be performed on it. The reward function can also be affected by the feedback received from the monitor component, in case the selected action caused a violation of any invariant (8). In this sense, the monitor plays the role of a teacher for the learning algorithm. Eventually, the agent will learn a policy that maximises the cumulative reward by trial and error with the environment. Once an action is selected, it is performed on the environment (4).

**Monitoring.** It aims at detecting whether the actions chosen by the machine learning component (6) are going to cause a violation of any invariant. In this sense, the monitoring algorithm should be predictive, i.e., it should detect violations of invariants before they occur. The monitoring component relies on the Runtime Enforcer to ensure that the behaviour of the RL agent is compliant with certain properties, i.e., the invariants of the system. The Runtime Enforcer acts on the running system by allowing or forbidding the execution of actions. The monitor evaluates the effect of actions on an abstract representation of the system, which is kept up to date using the information detected through the input variables (5). If the evaluation detects a violation, the monitoring component prevents the action from being executed (7) and sends feedback to the machine learning component (8). This feedback will then be integrated into the reward function by the reward shaping component. If no violation has been detected, then no barrier is activated (7) and the action is performed. For example, referring to the automotive domain, functional safety standards such as ISO 26262 can be used as a guide for the definition of the monitored properties, as in the approach of Heffernan et al. [7]. The usage of approaches that perform predictive runtime verification of timed properties may also be investigated [8, 9]. For instance, the approach in [9] exploits the structure of the property to predict faults before they actually materialize. The verification monitor might also provide additional information, such as the minimum (maximum) time at which the property can be violated (satisfied) in the future.

4 MACHINE LEARNING

Reinforcement learning (RL) is an area of machine learning that deals with decision-making. An RL agent interacts with the environment by performing actions, and it receives a reward. The agent will learn to choose actions that maximize the cumulative reward.
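As a rough illustration of the interplay just described, the following sketch shows a single control step in which the monitor screens the agent's proposed action and feeds violations back into reward shaping. All type and method names here are hypothetical; the paper prescribes no concrete API, and a real enforcer would typically substitute a known-safe fallback action rather than simply dropping the proposal.

```java
import java.util.List;

interface State {}
interface Action {}

interface Environment {
    double execute(Action a); // applies the action and returns the reward
}

interface Agent {
    Action chooseAction(State s, List<Action> available);
    void observeReward(State s, Action a, double reward);
}

interface PredictiveMonitor {
    /** True if executing a in s is predicted to violate an invariant. */
    boolean violates(State s, Action a);
}

final class WiseMLStep {
    private static final double PENALTY = -10.0; // illustrative shaping value

    private final Agent agent;
    private final PredictiveMonitor monitor;
    private final Environment env;

    WiseMLStep(Agent agent, PredictiveMonitor monitor, Environment env) {
        this.agent = agent;
        this.monitor = monitor;
        this.env = env;
    }

    /** One control step: unsafe actions are blocked and fed back as penalties. */
    Action step(State s, List<Action> available) {
        Action proposed = agent.chooseAction(s, available);
        if (monitor.violates(s, proposed)) {
            // Barrier activated: the action never reaches the environment;
            // the negative feedback flows into the reward shaping component.
            agent.observeReward(s, proposed, PENALTY);
            return null; // a real system would fall back to a safe action
        }
        agent.observeReward(s, proposed, env.execute(proposed));
        return proposed;
    }
}
```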
In model-based reinforcement learning, the RL agent computes an optimal policy on a model of the world, usually formalized as a Markov Decision Process (MDP) extended with the reward information associated with the transitions from one state to another. The RL agent is also able to operate in a completely unknown environment, i.e., it is able to learn the best strategy to employ when the only way to collect information about the environment is to interact with it. In this case the RL agent learns a policy without knowing a model of the environment a priori; we then speak of model-free reinforcement learning. In our approach we will use a combination of model-free and model-based RL. First, we formalize the goals in terms of reward functions at design-time. Then, at run-time, the monitor can interfere with the reward function in case a violation of the invariants is detected. This mechanism is called reward shaping, and it steers the RL agent to perform actions that will not trigger the monitor in the future, guiding it towards its goals [10].

5 MONITORING

Our envisioned approach (WiseML) uses monitoring techniques at run-time to prevent violations of invariants that would be caused by the actions chosen by the machine learning engine. Since the goal of monitoring is to avoid the execution of actions that will cause violations, monitoring should be predictive. According to the available knowledge of the system and/or the environment, different monitoring approaches might be conceived and/or adopted. Given the current model of the environment, the invariant that must be ensured, and the action \( a \) to be executed, the predictive monitoring engine aims at detecting whether the execution of the action will cause a violation of the invariant. In the case where no model of the environment is available, the predictive monitor can only make predictions based on the structure of the property, on its current partial satisfaction, and on the distance, in terms of actions to be performed, from a failure [9].

In general, the selection and design of predictive monitors open interesting challenges. These include the definition of appropriate semantics that consider whether a property will be satisfied or violated, the probability of and distance from the potential failure, as well as whether it is possible to control the system and its environment in a way that satisfies the properties of interest (or avoids the failure). Since property satisfaction must be verified at run-time, the semantics must be defined in a way that tames the complexity of the corresponding verification algorithms. Solutions to these problems may exploit multi-valued semantics [11]. Such semantics are usually employed because two-valued semantics cannot be used to monitor all properties: for some, such as liveness properties, satisfaction depends on how the system will behave in the future. An example of these semantics is LTL3, whose verdicts are: 1) satisfied; 2) violated; and 3) inconclusive. The same authors also extend LTL3 with a four-valued semantics in which the system: 1) satisfies the property, 2) violates the property, 3) will presumably violate the property, or 4) will presumably conform to the property in the future, once the system has stabilized. Another aspect to be considered in the selection of the monitor is the type of invariants that should be ensured. Invariants describe guarantees that the system must ensure.
These may include properties that predicate on explicit time, as well as branching or linear notions of time. Examples of invariants that use a linear notion of time with implicit and explicit time are "if the left turn signal of the vehicle is on, it must eventually turn left" and "if the left turn signal of the vehicle is on, it must turn left within 10 seconds", respectively. Depending on the invariants to be analysed, different run-time monitors can be considered and used. It is important to note that these monitors cannot predict exactly what the behaviour of the system will be and whether specific failures can be prevented. However, they should be able to inform the system about the possibility and probability of a failure. The output of our monitors may be exploited to enhance the system with run-time mechanisms to avoid failures. Triggered by our monitors, these mechanisms may be able to act before the failure and take all possible actions to prevent it.

We plan to automatically generate a predictive monitor from the invariants using methods such as PREDIMO [12], a novel approach where the properties to be monitored are specified in terms of scenarios. This approach automatically synthesises a monitor by exclusively exploiting the structure of the property and partial knowledge of the behaviour of the environment. By taking into account the actual status, as well as the foreseen possible evolutions of both system and environment in the near future, the generated monitors provide an estimate of a potential incoming failure, in terms of the distance to the failure and the degree of controllability of the system. This enables the definition of run-time mechanisms that, e.g. by avoiding specific actions or forcing other ones, might prevent failures in the near future. We will also evaluate the possibility of realizing monitors based on game theory, like the approaches in [13-15].

6 DISCUSSION

WiseML is based on the idea of not programming at design-time all the behaviours and adaptations that the system should perform, but instead setting up goals to achieve and training the system to achieve them. Reinforcement learning is a framework that fits perfectly with our need for learning and adapting in order to reach the desired goal. However, as pointed out in Section 2, using machine learning techniques instead of traditional software poses some challenges. Decisions are no longer driven by software written by programmers following the requirements, but by data and the reward signal. With runtime monitoring, we envision creating a safety envelope around the machine learning system. The monitor will prevent the RL agent from choosing an action that violates important invariants, and it will train the agent to perform better in the future.

The correct selection of the reinforcement learning algorithm and of the monitoring engine is crucial for obtaining an implementation of WiseML that behaves as expected. The machine learning algorithm should be chosen depending on whether there is full or partial observability of the environment in which the system is deployed. Most likely the RL agent will have some initial knowledge about the environment and will be trained so that it can learn about the environment as it explores it, in an online fashion. The monitoring engine should likewise be chosen depending on whether there is full or partial observability of the environment in which the system operates. Having an effective monitor may reduce performance.
Indeed, a deeper analysis of invariant violations may cause a performance overhead and, in the end, invalidate verification results. This may occur when predictive monitoring is too slow and, before results are obtained, the model of the environment has already changed drastically because other agents have performed actions. For example, while the monitor verifies whether the action "cross the intersection within 2 seconds" can be performed (by also evaluating the effect of other agents' actions), the traffic light may turn red and invalidate the obtained results. On the other hand, a less effective monitor may return inaccurate results. For example, a monitor that checks whether a pedestrian is not crossing the road should also consider the volumes of the objects carried by the pedestrian; this check causes a performance overhead.

7 CONCLUSIONS

This paper envisions a new approach, named WiseML, that aims at creating systems that, on the one hand, are able to learn and adapt their behaviour based on changes that occur in the environment and, on the other hand, are able to ensure that adaptation does not cause invariant violations. We discussed some challenges posed by reinforcement learning and how combining it with runtime monitoring might solve some of these challenges. We broadly discussed possible solutions for the envisioned approach, which may use reinforcement learning as the mechanism to enable adaptation and predictive monitoring as the instrument to detect invariant violations.

REFERENCES
{"Source-Url": "https://research.chalmers.se/publication/509113/file/509113_Fulltext.pdf", "len_cl100k_base": 4269, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 16704, "total-output-tokens": 5612, "length": "2e12", "weborganizer": {"__label__adult": 0.0004897117614746094, "__label__art_design": 0.0004119873046875, "__label__crime_law": 0.00060272216796875, "__label__education_jobs": 0.0007953643798828125, "__label__entertainment": 0.00010967254638671876, "__label__fashion_beauty": 0.0002353191375732422, "__label__finance_business": 0.0003230571746826172, "__label__food_dining": 0.0005097389221191406, "__label__games": 0.0011091232299804688, "__label__hardware": 0.0013055801391601562, "__label__health": 0.0009851455688476562, "__label__history": 0.00034356117248535156, "__label__home_hobbies": 0.00013387203216552734, "__label__industrial": 0.0007047653198242188, "__label__literature": 0.00042176246643066406, "__label__politics": 0.00042128562927246094, "__label__religion": 0.0005574226379394531, "__label__science_tech": 0.1160888671875, "__label__social_life": 0.00011843442916870116, "__label__software": 0.007598876953125, "__label__software_dev": 0.86474609375, "__label__sports_fitness": 0.0005369186401367188, "__label__transportation": 0.0012598037719726562, "__label__travel": 0.00027441978454589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25420, 0.03154]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25420, 0.73676]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25420, 0.91503]], "google_gemma-3-12b-it_contains_pii": [[0, 458, false], [458, 6028, null], [6028, 11073, null], [11073, 18063, null], [18063, 25420, null]], "google_gemma-3-12b-it_is_public_document": [[0, 458, true], [458, 6028, null], [6028, 11073, null], [11073, 18063, null], [18063, 25420, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25420, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25420, null]], "pdf_page_numbers": [[0, 458, 1], [458, 6028, 2], [6028, 11073, 3], [11073, 18063, 4], [18063, 25420, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25420, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
8a44ebefcd3f086eda926aac5a8201e134c2c808
How "object-oriented" is PHP? Let's try an answer: - **Single inheritance.** PHP allows a class definition to inherit from another class, using the `extends` clause. Both member variables and member functions are inherited. - **Multiple inheritance.** PHP offers no support for multiple inheritance and no notion of interface inheritance as in Java. Each class inherits from, at most, one parent class (though a class may implement many interfaces). - **Constructors.** Every class can have one constructor function, which in PHP is called `__construct()`. Note that there are two underscore characters at the front of that function name. Constructors of parent classes are not automatically called but must be invoked explicitly. - **Destructors.** PHP supports explicit destructor functions as of version 5. The destructor function of a class is always called `__destruct()`. - **Encapsulation/access control.** PHP supports public, private, and protected properties and methods as of version 5. • **Polymorphism/overloading.** PHP supports polymorphism in the sense of allowing instance of subclasses to be used in place of parent instances. The correct method will be dispatched at runtime. There is no support for method overloading, where dispatch happens based on the method's signature—each class only has one method of a given name. • **Static (or class) functions.** PHP offers static properties and static methods as of version 5. It is also possible to call methods via the `Classname::function()` syntax. • **Introspection.** PHP offers a wide variety of functions here, including the capability to recover class names, methods names, and properties names from an instance. In the next section, we cover the basic PHP syntax for OOP from the ground up, with some simple examples. ## 1. Basic PHP Constructs for OOP The general form for defining a new class in PHP is as follows: ```php class MyClass extends MyParent { var $var1; var $var2 = "constant string"; function myfunc ($arg1, $arg2) { //... } //... } ``` As an example, consider the simple class definition in the listing below, which prints out a box of text in HTML: ```php class TextBoxSimple { var $body_text = "my text"; function display() { print("<table><tr><td>$this->body_text\n print("</td></tr></table>\n } } ``` In general, the way to refer to a property from an object is to follow a variable containing the object with `->` and then the name of the property. So if we had a variable `$box` containing an object instance of the class `TextBox`, we could retrieve its `body_text` property with an expression like: ```php $text_of_box = $box->body_text; ``` Notice that the syntax for this access does not put a `$` before the property name itself, only the `$this` variable. After we have a class definition, the default way to make an instance of that class is by using the `new` operator. ```php $box = new TextBoxSimple; $box->display(); ``` The correct way to arrange for data to be appropriately initialized is by writing a constructor function—a special function called `__construct()`, which will be called automatically whenever a new instance is created. ```php class TextBox { var $bodyText = "my default text"; // Constructor function function __construct($newText) { $this->bodyText = $newText; } function display() { print("<table><tr><td>$this->bodyText\n print("</td></tr></table>\n } } ``` // creating an instance PHP class definitions can optionally inherit from a superclass definition by using the `extends` clause. 
The effect of inheritance is that the subclass has the following characteristics:

- It automatically has all the property declarations of the superclass.
- It automatically has all the same methods as the superclass, which (by default) work the same way as those functions do in the superclass.

In addition, the subclass can add on any desired properties or methods simply by including them in the class definition in the usual way.

```php
class TextBoxHeader extends TextBox {
    var $headerText;

    // CONSTRUCTOR
    function __construct($newHeaderText, $newBodyText) {
        $this->headerText = $newHeaderText;
        $this->bodyText = $newBodyText;
    }

    // MAIN DISPLAY FUNCTION
    function display() {
        $header_html = $this->make_header($this->headerText);
        $body_html = $this->make_body($this->bodyText);
        print("<table><tr><td>");
        print($header_html);
        print("</td></tr> <tr><td>");
        print($body_html);
        print("</td></tr></table>\n");
    }

    // HELPER FUNCTIONS
    function make_header ($text) {
        return($text);
    }

    function make_body ($text) {
        return($text);
    }
}
```

Function definitions in subclasses override definitions with the same name in superclasses. This just means that the overriding definition in the more specific class takes precedence and will be the one actually executed.

Before we move on to the more advanced features of PHP's version of OOP, it's important to discuss issues of scope, that is, which names are meaningful in what way to different parts of our code. It may seem as though the introduction of classes, instances, and methods has made questions of scope much more complicated. Actually, though, there are only a few basic rules we need to add to make OOP scope sensible within the rest of PHP:

- Names of properties and methods are never meaningful to calling code on their own; they must always be reached via the `->` construct. This is true both outside the class definition and inside methods.
- The names visible within methods are exactly the same as the names visible within global functions; that is, methods can refer freely to other global functions, but can't refer to normal global variables unless those variables have been declared global inside the method definition.

These rules, together with the usual rules about variable scope in PHP, are respected in the intentionally confusing example in the listing below. What number would you expect that code to print when executed?

```php
$myGlobal = 3;

function myFunction ($myInput) {
    global $myGlobal;
    return($myGlobal * $myInput);
}

class MyClass {
    var $myProperty;

    function __construct($myConstructorInput) {
        $this->myProperty = $myConstructorInput;
    }

    function myMethod ($myInput) {
        global $myGlobal;
        return($myGlobal * $myInput * myFunction($this->myProperty));
    }
}

$myInstance = new MyClass(4);
print("The answer is: " . $myInstance->myMethod(5));
```

The answer is: 180 (or 3 * 5 * (3 * 4)). If any of these numerical variables had been undefined when multiplied, we would have expected the variable to have a default value of 0, making the answer 0 as well. This would have happened if we had:

- Left out the global declaration in myFunction()
- Left out the global declaration in myMethod()
- Referred to $myProperty rather than $this->myProperty

## 2. Advanced OOP Features

### 2.1. Public, Private, and Protected Members

Unless you specify otherwise, properties and methods of a class are public.
That is to say, they may be accessed in three possible situations:

- From outside the class in which they are declared;
- From within the class in which they are declared;
- From within another class that extends the class in which they are declared.

If you wish to limit the accessibility of the members of a class, you should use private or protected. By designating a member private, you limit its accessibility to the class in which it is declared. A private member cannot be referred to from classes that inherit from the class in which it is declared, and it cannot be accessed from outside the class.

```php
class MyClass {
    private $colorOfSky = "blue";
    private $nameOfShip = "Java Star";

    function __construct($incomingValue) {
        // Statements here run every time an instance of the class
        // is created.
    }

    function myPublicFunction ($myInput) {
        return("I'm visible!");
    }

    private function myPrivateFunction ($myInput) {
        global $myGlobal;
        return($myGlobal * $myInput * myFunction($this->myProperty));
    }
}
```

A protected property or method is accessible in the class in which it is declared, as well as in classes that extend that class. Protected members are not available outside of those two kinds of classes, however.

```php
class MyClass {
    protected $colorOfSky = "blue";
    protected $nameOfShip = "Java Star";

    function __construct($incomingValue) {
        // Statements here run every time an instance
        // of the class is created.
    }

    function myPublicFunction ($myInput) {
        return("I'm visible!");
    }

    protected function myProtectedFunction ($myInput) {
        global $myGlobal;
        return($myGlobal * $myInput * myFunction($this->myProperty));
    }
}
```

### 2.2. Interfaces

In large object-oriented projects, there is some advantage to be realized in having standard names for methods that do certain work. In PHP 5, it is also possible to define an interface, like this:

```php
interface Mail {
    public function sendMail();
}
```

Then, if another class implemented that interface, like this:

```php
class Report implements Mail {
    // Definition goes here
}
```

it would be required to have a method called sendMail(). It's an aid to standardization.

### 2.3. Constants

A constant is somewhat like a variable, in that it holds a value, but is really more like a function because a constant is immutable. Once you declare a constant, it does not change.

```php
class MyClass {
    const REQUIRED_MARGIN = 1.3;

    function __construct($incomingValue) {
        // Statements here run every time an instance of the class is created.
    }
}
```

In that class, REQUIRED_MARGIN is a constant. It is declared with the keyword const, and under no circumstances can it be changed to anything other than 1.3. Note that the constant's name does not have a leading $, as variable names do.

### 2.4. Abstract Classes

An abstract class is one that cannot be instantiated, only inherited. You declare an abstract class with the keyword abstract, like this:

```php
abstract class MyAbstractClass {
    abstract function myAbstractFunction();
}
```

Note that abstract function declarations inside an abstract class must also be preceded by the keyword abstract. It is not legal to have abstract function definitions inside a non-abstract class.

### 2.5. Simulating class functions

Some other OOP languages make a distinction between instance properties, on the one hand, and class or static properties on the other. Instance properties are those that every instance of a class has a copy of (and may possibly modify individually); class properties are shared by all instances of the class.
Similarly, instance methods depend on having a particular instance to look at or modify; class (or static) methods are associated with the class but are independent of any instance of that class. In PHP, there are no declarations in a class definition that indicate whether a function is intended for per-instance or per-class use. But PHP does offer a syntax for getting to functions in a class even when no instance is handy. The `::` syntax operates much like the `->` syntax does, except that it joins class names to member functions rather than instances to members. For example, in the following implementation of an extremely primitive calculator, we have some methods that depend on being called on a particular instance and one method that does not:

```php
class Calculator {
    var $current = 0;

    function add($num) {
        $this->current += $num;
    }

    function subtract($num) {
        $this->current -= $num;
    }

    function getValue() {
        return($this->current);
    }

    function pi() {
        return(M_PI); // the built-in PHP constant
    }
}
```

We are free to treat the pi() method as either a class method or an instance method and access it using either syntax:

```php
$calcInstance = new Calculator;
$calcInstance->add(2);
$calcInstance->add(5);
print("Current value is " . $calcInstance->current . "<br/>");
print("Value of pi is " . $calcInstance->pi() . "<br/>");
print("Value of pi is " . Calculator::pi() . "<br/>");
```

This means that we can use the pi() function even when we don't have an instance of Calculator at hand.

### 2.6. Calling parent functions

Asking an instance to call a function will always result in the most specific version of that function being called, because of the way overriding works. If the function exists in the instance's class, the parent's version of that function will not be executed. Sometimes it is handy for code in a subclass to explicitly call functions from the parent class, even if those names have been overridden. It's also sometimes useful to define subclass functions in terms of superclass functions, even when the name is available.

#### 2.6.1. Calling parent constructors

Look at the following example:

```php
class Name {
    var $_firstName;
    var $_lastName;

    function Name($first_name, $last_name) {
        $this->_firstName = $first_name;
        $this->_lastName = $last_name;
    }

    function toString() {
        return($this->_lastName . ", " . $this->_firstName);
    }
}

class NameSub1 extends Name {
    var $_middleInitial;

    function NameSub1($firstName, $middleInitial, $lastName) {
        Name::Name($firstName, $lastName);
        $this->_middleInitial = $middleInitial;
    }

    function toString() {
        return(Name::toString() . " " . $this->_middleInitial);
    }
}
```

In this example, we have a superclass (Name), which has a two-argument constructor, and a subclass (NameSub1), which has a three-argument constructor. (Both use old-style constructors, named after their class.) The constructor of NameSub1 functions by calling its parent constructor explicitly using the `::` syntax (passing two of its arguments along) and then setting an additional property. Similarly, NameSub1 defines its nonconstructor toString() function in terms of the superclass function that it overrides. It might seem strange to call Name::Name() here, without reference to $this. The good news is that both $this and any member variables that are local to the superclass are available to a superclass method when invoked from a subclass instance.

#### 2.6.2. The special name parent

There is a stylistic objection to the previous example, which is that we have hardcoded the name of a superclass into the code for a subclass.
### 2.6. Calling parent functions

Asking an instance to call a function will always result in the most specific version of that function being called, because of the way overriding works. If the function exists in the instance's class, the parent's version of that function will not be executed. Sometimes, though, it is handy for code in a subclass to explicitly call functions from the parent class, even if those names have been overridden. It's also sometimes useful to define subclass functions in terms of the superclass functions they override, as we do with toString() below.

#### 2.6.1. Calling parent constructors

Look at the following example:

```php
class Name {
    var $_firstName;
    var $_lastName;

    function Name($first_name, $last_name) {
        $this->_firstName = $first_name;
        $this->_lastName = $last_name;
    }

    function toString() {
        return($this->_lastName . ", " . $this->_firstName);
    }
}

class NameSub1 extends Name {
    var $_middleInitial;

    function NameSub1($firstName, $middleInitial, $lastName) {
        Name::Name($firstName, $lastName);
        $this->_middleInitial = $middleInitial;
    }

    function toString() {
        return(Name::toString() . " " . $this->_middleInitial);
    }
}
```

In this example, we have a superclass (Name), which has a two-argument constructor, and a subclass (NameSub1), which has a three-argument constructor. The constructor of NameSub1 functions by calling its parent constructor explicitly using the :: syntax (passing two of its arguments along) and then setting an additional property. Similarly, NameSub1 defines its nonconstructor toString() function in terms of the superclass function that it overrides.

It might seem strange to call Name::Name() here, without reference to $this. The good news is that both $this and any member variables that are local to the superclass are available to a superclass method when invoked from a subclass instance.

#### 2.6.2. The special name parent

There is a stylistic objection to the previous example, which is that we have hardcoded the name of a superclass into the code for a subclass. Some would say that this is bad style because it makes it harder to revise the class hierarchy later. A fix is to use the special name parent, which, when used in a method, always refers to the superclass of the current class. Here is a revised version of the example using parent rather than Name:

```php
class NameSub2 extends Name {
    var $_middleInitial;

    function NameSub2($firstName, $middleInitial, $lastName) {
        $parentClass = get_parent_class($this);
        parent::$parentClass($firstName, $lastName);
        $this->_middleInitial = $middleInitial;
    }

    function toString() {
        return(parent::toString() . " " . $this->_middleInitial);
    }
}
```
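It is worth noting that with PHP5's unified constructors, the same effect is usually achieved by calling parent::__construct() directly, which avoids both the hardcoded class name and the get_parent_class() workaround. A minimal sketch (Name2 and NameSub3 are invented variants of the classes above):

```php
class Name2 {
    protected $_firstName;
    protected $_lastName;

    function __construct($firstName, $lastName) {
        $this->_firstName = $firstName;
        $this->_lastName = $lastName;
    }

    function toString() {
        return($this->_lastName . ", " . $this->_firstName);
    }
}

class NameSub3 extends Name2 {
    protected $_middleInitial;

    function __construct($firstName, $middleInitial, $lastName) {
        parent::__construct($firstName, $lastName);  // no class name hardcoded
        $this->_middleInitial = $middleInitial;
    }

    function toString() {
        return(parent::toString() . " " . $this->_middleInitial);
    }
}

$n = new NameSub3("John", "Q", "Public");
print($n->toString());  // Public, John Q
```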
Again!"; } function display (){ print($this->easilyRecreatable."<br/>"); } } $instance1 = new ClassToSerialize2("You're objectifying me!"); $serialization = serialize($instance1); $instance2 = unserialize($serialization); $instance2->display(); ``` The serialization mechanism is pretty reliable for objects, but there are still a few things that you must know: - The code that calls `unserialize()` must also have loaded the definition of the relevant class. (This is also true of the code that calls `serialize()` too, of course, but that will usually be true because the class definition is needed for object creation in the first place.) - Object instances can be created from the serialized string only if it is really the same string (or a copy thereof). A number of things can happen to the string along the way, if stored in a database (make sure that slashes aren’t being added or subtracted in the process), or if passed as URL or form arguments. (Make sure that your URL-encoding/decoding is preserving exactly the same string and that the string is not long enough to be truncated by length limits.) - If you choose to use `__sleep()`, make sure that it returns an array of the variables to be preserved; otherwise no variable values will be preserved. (If you do not define a `__sleep()` function for your class, all values will be preserved.) See also the current manual for new changes. ### 2.8. Introspection Functions Introspection allows the programmer to ask objects about their classes, ask classes about their parents, and find out all the parts of an object without having to crunch the source code to do it. Introspection also can help you to write some surprisingly flexible code, as we will see. <table> <thead> <tr> <th>Function</th> <th>Description</th> <th>Operates on Class Names</th> <th>Operates on Instances</th> <th>As of PHP Version</th> </tr> </thead> <tbody> <tr> <td>get_class()</td> <td>Returns the name of the class an object belongs to.</td> <td>No</td> <td>Yes</td> <td>4.0.0</td> </tr> <tr> <td>get_parent_class()</td> <td>Returns the name of the superclass of the given object.</td> <td>Yes (as of PHP 4.0.5)</td> <td>Yes</td> <td>4.0.0</td> </tr> <tr> <td>class_exists()</td> <td>Returns TRUE if the string argument is the name of a class, FALSE otherwise.</td> <td>Yes</td> <td>No</td> <td>4.0.0</td> </tr> <tr> <td>get_declared_classes()</td> <td>Returns an array of strings representing names of classes defined in the current script.</td> <td>N/A</td> <td>N/A</td> <td>4.0.0</td> </tr> <tr> <td>is_subclass_of()</td> <td>Returns TRUE if the class of its first argument (an object instance) is a subclass of the second argument (a class name), FALSE otherwise.</td> <td>No</td> <td>Yes</td> <td>4.0.0</td> </tr> <tr> <td>is_a()</td> <td>Returns TRUE if the class of its first argument (an object instance) is a subclass of the second argument (a class name), or is the same class, and FALSE otherwise.</td> <td>No</td> <td>Yes</td> <td>4.2.0</td> </tr> <tr> <td>get_class_vars()</td> <td>Returns an associative array of var/value pairs representing the name of variables in the class and their default values. Variables without default values will not be included.</td> <td>Yes</td> <td>No</td> <td>4.0.0</td> </tr> <tr> <td>get_object_vars()</td> <td>Returns an associative array of var/value pairs representing the name of variables in the instance and their default values. 
### 2.8. Introspection Functions

Introspection allows the programmer to ask objects about their classes, ask classes about their parents, and find out all the parts of an object without having to crunch the source code to do it. Introspection can also help you write some surprisingly flexible code, as we will see.

<table>
  <thead>
    <tr>
      <th>Function</th>
      <th>Description</th>
      <th>Operates on Class Names</th>
      <th>Operates on Instances</th>
      <th>As of PHP Version</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>get_class()</td><td>Returns the name of the class an object belongs to.</td><td>No</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>get_parent_class()</td><td>Returns the name of the superclass of the given object.</td><td>Yes (as of PHP 4.0.5)</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>class_exists()</td><td>Returns TRUE if the string argument is the name of a class, FALSE otherwise.</td><td>Yes</td><td>No</td><td>4.0.0</td></tr>
    <tr><td>get_declared_classes()</td><td>Returns an array of strings representing names of classes defined in the current script.</td><td>N/A</td><td>N/A</td><td>4.0.0</td></tr>
    <tr><td>is_subclass_of()</td><td>Returns TRUE if the class of its first argument (an object instance) is a subclass of the second argument (a class name), FALSE otherwise.</td><td>No</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>is_a()</td><td>Returns TRUE if the class of its first argument (an object instance) is a subclass of the second argument (a class name), or is the same class, and FALSE otherwise.</td><td>No</td><td>Yes</td><td>4.2.0</td></tr>
    <tr><td>get_class_vars()</td><td>Returns an associative array of var/value pairs representing the names of variables in the class and their default values. Variables without default values will not be included.</td><td>Yes</td><td>No</td><td>4.0.0</td></tr>
    <tr><td>get_object_vars()</td><td>Returns an associative array of var/value pairs representing the names of variables in the instance and their values. Variables without values will not be included.</td><td>No</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>method_exists()</td><td>Returns TRUE if the first argument (an instance) has a method named by the second argument (a string) and FALSE otherwise.</td><td>No</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>get_class_methods()</td><td>Returns an array of the names of the methods defined by the given class.</td><td>Yes</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>call_user_method()</td><td>Takes a string representing a method name, an instance that should have such a method, and additional arguments. Returns the result of applying the method (and the arguments) to the instance.</td><td>No</td><td>Yes</td><td>4.0.0</td></tr>
    <tr><td>call_user_method_array()</td><td>Same as call_user_method(), except that it expects its third argument to be an array containing the arguments to the method.</td><td>No</td><td>Yes</td><td>4.0.5</td></tr>
  </tbody>
</table>
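As a quick taste of these functions before we put them to work, here is a minimal sketch exercising a few of them against the Calculator class from Section 2.5:

```php
$calc = new Calculator;

print(get_class($calc) . "<br/>");                       // Calculator
print(class_exists('Calculator') ? "yes" : "no");        // yes
print("<br/>");
print(method_exists($calc, 'add') ? "has add()" : "no add()");
print("<br/>");
print_r(get_class_methods('Calculator'));                // add, subtract, getValue, pi
print("<br/>");
$calc->add(3);
print_r(get_object_vars($calc));                         // Array ( [current] => 3 )
```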
Example 1. Matching variables and DB columns

One frequent use for PHP objects in database-driven systems is as a wrapper around the entire database API. The theory is that the wrapper insulates the code from the specific database system, which will make it trivial to swap in a different RDBMS when the technical needs change. Another use that is almost as common (and that your authors like better) is to have object instances correspond to database result rows. In particular, the process of reading in a result row looks like instantiating a new object that has member variables corresponding to the result columns we care about, with extra functionality in the member functions. As long as the fields and columns match up (and as long as you can afford object instantiation for every row), this can be a nice abstraction away from the database.

A repetitive task that arises when writing this kind of code is assigning database column values to member variables, in individual assignment statements. This feels like it should be unnecessary, especially when the columns and the corresponding variables have exactly the same names. In this example, we try to automate this process.

Let's start with an actual database table. Following are the MySQL statements necessary to create a simple table and insert one row into it:

```
mysql> create table book (id int not null primary key auto_increment,
       author varchar(255), title varchar(255), publisher varchar(255));
mysql> insert into book (author, title, publisher)
       values ("Robert Zubrin", "The Case For Mars", "Touchstone");
```

Because the id column is auto-incremented, it will happen to have the value 1 for this first row. Now, let's say that we want a Book object that will exactly correspond to a row from this table, with fields corresponding to the DB column names. There's no way around actually defining the variable names (because PHP doesn't let us dynamically add variables to classes), but we can at least automate the assignment.

The code in the listing below assumes a database called oop with the table created as above, and also that we have a file called dbconnect_vars.php that sets $host, $user, and $pass appropriately for our particular MySQL setup (the code assumes the connection works, that the row was retrieved successfully, and so on). The main point we want to highlight is the hack in the middle of the Book constructor.

```php
<?php
include_once("dbconnect_vars.php");

class Book {
    var $id;                 // variables corresponding to DB columns
    var $author = "DBSET";
    var $title = "DBSET";
    var $publisher = "DBSET";

    function __construct($db_connection, $id) {
        $this->id = $id;
        $query = "select * from book " . "where id = $id";
        $result = mysql_query($query, $db_connection);
        $db_row_array = mysql_fetch_array($result);
        $class_var_entries = get_class_vars(get_class($this));
        while ($entry = each($class_var_entries)) {
            $var_name = $entry['key'];
            $var_value = $entry['value'];
            if ($var_value == "DBSET") {
                $this->$var_name = $db_row_array[$var_name];
            }
        }
    }

    function toString() {
        // code to return a string representation of the object
    }
}
?>
```

The database query returns all columns from the book table, and the values are indexed in the result array by the column names. The constructor then uses get_class_vars() to discover all the variables that have been declared in the class, tests them to see if they have been bound to the string "DBSET", and then sets those variables to the value of the column of the same name. The result is output like:

```
BOOK
Author: Robert Zubrin
Title: The Case For Mars
Publisher: Touchstone
```

## 3. OOP Style in PHP: The PEAR Coding Style

We offer in the following some brief notes on writing readable, maintainable PHP OOP code. For more information on the coding style, see the PEAR Web site (at http://pear.php.net).

PEAR recommends that class names begin with an uppercase letter and (if in a PEAR-approved directory hierarchy of packages) have that inclusion path in the class name, separated by underscores. So your class that counts words, and which belongs to a PEAR package called TextUtils, might be called TextUtils_WordCounter. If building large OOP packages, you may want to emulate this underscore convention with your own package names; otherwise you can simply give your classes names like WordCounter.

Member variables and member function names should have their first real letter be lowercase and have word boundaries delineated by capitalization. In addition, names that are intended to be private to the class (that is, used only within the class, and not by outside code) should start with an underscore. So the variable in your WordCounter class that holds the count of words might be called wordCount (if intended to be used from the outside) or _wordCount (if intended to be private to the class).

Another style of documenting your intent about the use of internal variables is to mark your variables as private, in general, and provide "getter" and "setter" functions to outside callers. For example, we might define a class like this:

```php
class Customer {
    private $_name;             // comments come here
    private $_creditCardNumber;
    private $_rating;

    /*
     * Comments come here
     */
    function getName() {
        return($this->_name);
    }

    function getRating() {
        return($this->_rating);
    }

    function setRating($rating) {
        $this->_rating = $rating;
    }
}
```
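A short usage sketch of the getter/setter style; the point is that outside code never touches $_rating directly:

```php
$customer = new Customer();
$customer->setRating(5);
print("Rating: " . $customer->getRating());  // Rating: 5
// $customer->_rating = 10;  // would be a fatal error: the property is private
```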
### 3.1. Indenting, whitespace, and line length

Code is much easier to read if you use indentation to indicate the relationship among lines of code that are tied together in a common functional block, as well as whitespace to logically group elements. Another issue is the number of spaces to indent each new code block: some people insist that two saves space, others swear by four, and some outliers actually employ eight-space indents (the horror!). If you want your code to be accepted into PEAR, it must use four-space indents. Because different editors on different platforms interpret tab characters differently, it is recommended that you use groups of four space characters in all places you would, under other circumstances, use a tab character.

Table 2. Indenting, whitespace, and line length

No:

```php
switch ($flag) {
case 1:
doWork();
break;
case 2:
doOtherWork();
break;
default:
doNothing();
break;
}
```

Yes:

```php
switch ($flag) {
    case 1:
        doWork();
        break;
    case 2:
        doOtherWork();
        break;
    default:
        doNothing();
        break;
}
```

### 3.2. Formatting control structures

Control structures (like if, if/else, if/elseif, and switch statements) can be confusing if not properly formatted. PEAR has recommended styles for all of these language constructs.

Table 3. Formatting control structures

<table>
  <thead>
    <tr><th>Structure</th><th>Formatting recommendation</th></tr>
  </thead>
  <tbody>
    <tr><td><strong>if statements</strong></td><td>if ((condition1) &amp;&amp; (condition2)) { doSomething(); }</td></tr>
    <tr><td><strong>if/else statements</strong></td><td>if ((condition1) &amp;&amp; (condition2)) { doSomething(); } else { doSomethingElse(); }</td></tr>
    <tr><td><strong>if/elseif statements</strong></td><td>if ((condition1) &amp;&amp; (condition2)) { doSomething(); } elseif (condition3) { doSomethingElse(); }</td></tr>
    <tr><td><strong>switch statements</strong></td><td>switch ($flag) { case 1: doWork(); break; case 2: doOtherWork(); break; default: doNothing(); break; }</td></tr>
  </tbody>
</table>

The else appears on the same line as the closing bracket that terminates the if block.

### 3.3. Formatting functions and function calls

Much of PHP is concerned with defining functions, then making calls to them; and obviously code libraries like PEAR will be almost all functions. Properly formatting your functions can make it more obvious what's going on and can therefore make debugging and maintenance easier. The PEAR style rules mandate that functions be defined with both their beginning and ending braces flush with the left margin, like this:

```php
function myFunction()
{
    // Function code goes here.
}
```

Personally, I prefer another variant that seems more complete to me and also saves space:

```php
/*
 * Comments go here
 */
function myFunction(){
    // Function code goes here.
}
```

### 3.4. PHPDoc

For very large and complex programs, code-embedded comments are not sufficient. You want separate documentation that someone can read without delving into the code itself. For example, if you have followed a given commenting convention, you can point the javadoc tool at your Java code and it will extract class and method comments into a set of HTML pages documenting the API. This is a solution for the problem of keeping docs in sync with code. (It will break down, for example, if people begin writing new methods by copying old methods, and leaving the original comments in place.)
But at least developers have to write only one description of a given method rather than two. There is an analogous phpdoc tool that uses PHP (naturally) to scan PHP code for special comments, producing HTML output. For more on phpdoc, see www.phpdoc.de/.
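To make the commenting convention concrete, here is a minimal sketch of a docblock in the phpdoc style; the exact tag set supported depends on the phpdoc version, so treat the tags as illustrative:

```php
/**
 * Counts the words in a string.
 *
 * @param  string $text  the text to examine
 * @return int           the number of words found
 */
function countWords($text) {
    return str_word_count($text);
}
```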
{"Source-Url": "https://www.informatik.tu-cottbus.de/~giurca/tutorials/PHP/PHP-Tutorial-Part2-OOP.pdf", "len_cl100k_base": 7330, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 32812, "total-output-tokens": 8064, "length": "2e12", "weborganizer": {"__label__adult": 0.00033283233642578125, "__label__art_design": 0.0002357959747314453, "__label__crime_law": 0.00018799304962158203, "__label__education_jobs": 0.00043845176696777344, "__label__entertainment": 3.540515899658203e-05, "__label__fashion_beauty": 8.356571197509766e-05, "__label__finance_business": 0.0001519918441772461, "__label__food_dining": 0.00025463104248046875, "__label__games": 0.0002560615539550781, "__label__hardware": 0.00030612945556640625, "__label__health": 0.0001766681671142578, "__label__history": 0.00010609626770019533, "__label__home_hobbies": 6.03795051574707e-05, "__label__industrial": 0.00015354156494140625, "__label__literature": 0.000118255615234375, "__label__politics": 0.00011813640594482422, "__label__religion": 0.0002923011779785156, "__label__science_tech": 0.000698089599609375, "__label__social_life": 6.03795051574707e-05, "__label__software": 0.00598907470703125, "__label__software_dev": 0.9892578125, "__label__sports_fitness": 0.00016355514526367188, "__label__transportation": 0.0002142190933227539, "__label__travel": 0.00018334388732910156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32680, 0.0071]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32680, 0.7192]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32680, 0.83105]], "google_gemma-3-12b-it_contains_pii": [[0, 1004, false], [1004, 3536, null], [3536, 6335, null], [6335, 8689, null], [8689, 10836, null], [10836, 13327, null], [13327, 16175, null], [16175, 19560, null], [19560, 21668, null], [21668, 23043, null], [23043, 26227, null], [26227, 28406, null], [28406, 30052, null], [30052, 31679, null], [31679, 32680, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1004, true], [1004, 3536, null], [3536, 6335, null], [6335, 8689, null], [8689, 10836, null], [10836, 13327, null], [13327, 16175, null], [16175, 19560, null], [19560, 21668, null], [21668, 23043, null], [23043, 26227, null], [26227, 28406, null], [28406, 30052, null], [30052, 31679, null], [31679, 32680, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32680, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32680, null]], "pdf_page_numbers": [[0, 1004, 1], [1004, 3536, 2], [3536, 6335, 3], [6335, 8689, 4], [8689, 10836, 5], [10836, 13327, 6], [13327, 16175, 7], [16175, 19560, 8], [19560, 21668, 9], [21668, 23043, 10], [23043, 
26227, 11], [26227, 28406, 12], [28406, 30052, 13], [30052, 31679, 14], [31679, 32680, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32680, 0.05814]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
39c1099819c30042d987dd6f54353c8e664ca81a
Introduction to the Altera SOPC Builder Using VHDL Design

This tutorial presents an introduction to Altera's SOPC Builder software, which is used to implement a system that uses the Nios II processor on an Altera FPGA device. The system development flow is illustrated by giving step-by-step instructions for using the SOPC Builder in conjunction with the Quartus® II software to implement a simple system. The last step in the development process involves configuring the designed circuit in an actual FPGA device, and running an application program. To show how this is done, it is assumed that the user has access to the Altera DE2 Development and Education board connected to a computer that has Quartus II and Nios® II software installed. The screen captures in the tutorial were obtained using the Quartus II version 5.1; if other versions of the software are used, some of the images may be slightly different.

Contents:

- Nios II System
- Altera's SOPC Builder
- Integration of the Nios II System into a Quartus II Project
- Running the Application Program

Altera's Nios II is a soft processor, defined in a hardware description language, which can be implemented in Altera's FPGA devices by using the Quartus II CAD system. To implement a useful system it is necessary to add other functional units such as memories, input/output interfaces, timers, and communications interfaces. To facilitate the implementation of such systems, it is useful to have computer-aided design (CAD) software for implementing a system-on-a-programmable-chip (SOPC). Altera's SOPC Builder is the software needed for this task. This tutorial provides a basic introduction to Altera's SOPC Builder, which will allow the reader to quickly implement a simple Nios II system on the Altera DE2 board. For a fuller treatment of the SOPC Builder, the reader can consult the Nios II Hardware Development Tutorial. A complete description of the SOPC Builder can be found in the Quartus II Handbook Volume 4: SOPC Builder. These documents are available on the Altera web site.

1 Nios II System

A Nios II system can be implemented on the DE2 board as shown in Figure 1. The Nios II processor and the interfaces needed to connect to other chips on the DE2 board are implemented in the Cyclone II FPGA chip. These components are interconnected by means of the interconnection network called the Avalon Switch Fabric. The memory blocks in the Cyclone II device can be used to provide an on-chip memory for the Nios II processor. The SRAM, SDRAM and Flash memory chips on the DE2 board are accessed through the appropriate interfaces. Parallel and serial input/output interfaces provide typical I/O ports used in computer systems. A special JTAG UART interface is used to connect to the circuitry that provides a Universal Serial Bus (USB) link to the host computer to which the DE2 board is connected. This circuitry and the associated software is called the *USB-Blaster*.

Another module, called the JTAG Debug module, is provided to allow the host computer to control the Nios II system. It makes it possible to perform operations such as downloading programs into memory, starting and stopping execution, setting breakpoints, and collecting real-time execution trace data.

Since all parts of the Nios II system implemented on the FPGA chip are defined by using a hardware description language, a knowledgeable user could write such code to implement any part of the system. This would be an onerous and time-consuming task.
Instead, one can use the SOPC Builder to implement a desired system simply by choosing the required components and specifying the parameters needed to make each component fit the overall requirements of the system. In this tutorial, we will illustrate the capability of the SOPC Builder by designing a very simple system. The same approach is used to design large systems.

Figure 2. A simple example of a Nios II system.

Our example system is given in Figure 2. The system realizes a trivial task. Eight toggle switches on the DE2 board, SW7−0, are used to turn on or off the eight green LEDs, LEDG7−0. The switches are connected to the Nios II system by means of a parallel I/O interface configured to act as an input port. The LEDs are driven by the signals from another parallel I/O interface configured to act as an output port. To achieve the desired operation, the eight-bit pattern corresponding to the state of the switches has to be sent to the output port to activate the LEDs. This will be done by having the Nios II processor execute a program stored in the on-chip memory. Continuous operation is required, such that as the switches are toggled the lights change accordingly.

We will use the SOPC Builder to design the hardware depicted in Figure 2. Next, we will assign the Cyclone II pins to realize the connections between the parallel interfaces and the switches and LEDs which act as I/O devices. Then, we will configure the FPGA to implement the designed system. Finally, we will use the software tool called the Nios II Debug Client to assemble, download and execute a Nios II program that performs the desired task.

In this tutorial, the reader will learn about:

- Using the SOPC Builder to design a Nios II-based system
- Integrating the designed Nios II system into a Quartus II project
- Implementing the designed system on the DE2 board
- Running an application program on the Nios II processor

2 Altera's SOPC Builder

The SOPC Builder is a tool used in conjunction with the Quartus II CAD software. It allows the user to easily create a system based on the Nios II processor, by simply selecting the desired functional units and specifying their parameters. To implement the system in Figure 2, we have to instantiate the following functional units:

- Nios II processor, which is referred to as a Central Processing Unit (CPU)
- On-chip memory, which consists of the memory blocks in the Cyclone II chip; we will specify a 4-Kbyte memory arranged in 32-bit words
- Two parallel I/O interfaces
- JTAG UART interface for communication with the host computer

To define the desired system, start the Quartus II software and perform the following steps:

1. Create a new Quartus II project for your system. As shown in Figure 3, we stored our project in a directory called sopc_builder_tutorial, and we assigned the name lights to both the project and its top-level design entity. You can choose a different directory or project name, but be aware that the SOPC Builder software does not permit the use of spaces in file names. For example, an attempt to use a directory name sopc builder tutorial would lead to an error. In your project, choose the EP2C35F672C6 chip as the target device, because this is the FPGA on the DE2 board.

2. Select Tools > SOPC Builder, which leads to the pop-up box in Figure 4. Enter nios_system as the system name; this will be the name of the system that the SOPC Builder will generate.
Choose VHDL as the target HDL, in which the system module will be specified. Click OK to reach the window in Figure 5.

3. Figure 5 displays the System Contents tab of the SOPC Builder, which is used to add components to the system and configure the selected components to meet the design requirements. The available components are listed on the left side of the window. Before choosing our components, examine the area in the figure labeled Target. A drop-down list is provided that allows some available Altera boards to be selected. It is not necessary to select a board, and since the DE2 board is not included in the list, leave the selection as Unspecified board. Next, check the setting for the Device Family and ensure that Cyclone II is selected.

4. The Nios II processor runs under the control of a clock. For this tutorial we will make use of the 50-MHz clock that is provided on the DE2 board. As shown in Figure 5, it is possible to specify the names and frequency of clock signals in the SOPC Builder display. If not already included in this list, specify a clock named clk with the source designated as External and the frequency set to 50.0 MHz.

5. Next, specify the processor as follows:
   - On the left side of the window in Figure 5 select Avalon Components > Nios II Processor - Altera Corporation and click Add, which leads to the window in Figure 6. Choose Nios II/e, which is the simplest version of the processor. Click Finish to return to the window in Figure 5, which now shows the Nios II processor specified as indicated in Figure 7. There may be some warnings or error messages displayed in the SOPC Builder Messages window (at the bottom of the screen), because some parameters have not yet been specified. Ignore these messages as we will provide the necessary data later. Observe also that a new tab called Nios II More "cpu_0" Settings appears, which allows further configuration of the processor; we will not use it.

   Figure 7. The defined processor.

6. To specify the on-chip memory, perform the following:
   - Select Avalon Components > Memory > On-Chip Memory (RAM or ROM) and click Add
   - In the On-Chip Memory Configuration Wizard window, shown in Figure 8, set the memory width to 32 bits and the total memory size to 4 Kbytes
   - Do not change the other default settings
   - Click Finish, which returns to the System Contents tab as indicated in Figure 9

7. Specify the input parallel I/O interface as follows:
   - Select Avalon Components > Other > PIO (Parallel I/O) and click Add to reach the PIO Configuration Wizard in Figure 10
   - Specify the width of the port to be 8 bits and choose the direction of the port to be Input, as shown in the figure
   - Click Finish to return to the System Contents tab as given in Figure 11

   Figure 10. Define a parallel input interface.
   Figure 11. The parallel input interface is included.

8. In the same way, specify the output parallel I/O interface:
   - Select Avalon Components > Other > PIO (Parallel I/O) and click Add to reach the PIO Configuration Wizard again
   - Specify the width of the port to be 8 bits and choose the direction of the port to be Output
   - Click Finish to return to the System Contents tab

9. We wish to connect to a host computer and provide a means for communication between the Nios II system and the host computer.
This can be accomplished by instantiating the JTAG UART interface as follows:
   - Select Avalon Components > Communication > JTAG UART and click Add to reach the JTAG UART Configuration Wizard in Figure 12
   - Do not change the default settings
   - Click Finish to return to the System Contents tab

   Figure 12. Define the JTAG UART interface.

10. The complete system is depicted in Figure 13. Note that the SOPC Builder automatically chooses names for the various components. The names are not necessarily descriptive enough to be easily associated with the target design, but they can be changed. In Figure 2, we use the names Switches and LEDs for the parallel input and output interfaces, respectively. These names can be used in the implemented system. Right-click on the pio_0 name and then select Rename. Change the name to Switches. Similarly, change pio_1 to LEDs.

11. The base and end addresses of the various components in the designed system can be assigned by the user, but they can also be assigned automatically by the SOPC Builder. We will choose the latter possibility. So, select the command (using the menus at the top of the SOPC Builder window) System > Auto-Assign Base Addresses, which produces the assignment shown in Figure 14.

12. Having specified all components needed to implement the desired system, it can now be generated. Select the System Generation tab, which leads to the window in Figure 15. Turn off Simulation - Create simulator project files, because in this tutorial we will not deal with the simulation of hardware. Click Generate at the bottom of the SOPC Builder window. The generation process produces the messages displayed in the figure. When the message "SUCCESS: SYSTEM GENERATION COMPLETED" appears, click Exit. This returns to the main Quartus II window.

Changes to the designed system are easily made at any time by reopening the SOPC Builder tool. Any component in the System Contents tab of the SOPC Builder can be selected and deleted, or a new component can be added and the system regenerated.

3 Integration of the Nios II System into a Quartus II Project

To complete the hardware design, we have to perform the following:

- Instantiate the module generated by the SOPC Builder into the Quartus II project
- Assign the FPGA pins
- Compile the designed circuit
- Program and configure the Cyclone II device on the DE2 board

3.1 Instantiation of the Module Generated by the SOPC Builder

The instantiation of the generated module depends on the design entry method chosen for the overall Quartus II project. We have chosen to use VHDL, but the approach is similar for both Verilog and schematic entry methods.

Normally, the Nios II module is likely to be a part of a larger design. However, in the case of our simple example there is no other circuitry needed. All we need to do is instantiate the Nios II system in our top-level VHDL file, and connect inputs and outputs of the parallel I/O ports, as well as the clock and reset inputs, to the appropriate pins on the Cyclone II device.

The VHDL entity generated by the SOPC Builder is in the file nios_system.vhd in the directory of the project. Note that the name of the VHDL entity is the same as the system name specified when first using the SOPC Builder. The VHDL code is quite large. Figure 16 depicts the portion of the code that defines the port signals for the entity nios_system. The 8-bit vector that is the input to the parallel port Switches is called in_port_to_the_Switches. The 8-bit output vector is called out_port_from_the_LEDs.
The clock and reset signals are called clk and reset_n, respectively. Note that the reset signal is added automatically by the SOPC Builder; it is called reset_n because it is active low.

```vhdl
entity nios_system is
    port (
        -- 1) global signals:
        signal clk     : IN STD_LOGIC;
        signal reset_n : IN STD_LOGIC;

        -- the LEDs
        signal out_port_from_the_LEDs : OUT STD_LOGIC_VECTOR (7 DOWNTO 0);

        -- the Switches
        signal in_port_to_the_Switches : IN STD_LOGIC_VECTOR (7 DOWNTO 0)
    );
end entity nios_system;
```

Figure 16. A part of the generated VHDL entity.

Figure 17 shows a top-level VHDL entity that instantiates the Nios II system. This entity is named lights, because this is the name we specified in Figure 3 for the top-level design entity in our Quartus II project. Note that the input and output ports of the entity use the pin names for the 50-MHz clock, CLOCK_50, pushbutton switches, KEY, toggle switches, SW, and green LEDs, LEDG, that are specified in the DE2 User Manual. Type this code into a file called lights.vhd.

```vhdl
-- Implements a simple Nios II system for the DE2 board.
-- Inputs:  SW7-0 are parallel port inputs to the Nios II system
--          CLOCK_50 is the system clock
--          KEY0 is the active-low system reset
-- Outputs: LEDG7-0 are parallel port outputs from the Nios II system
LIBRARY ieee;
USE ieee.std_logic_1164.all;
USE ieee.std_logic_arith.all;
USE ieee.std_logic_unsigned.all;

ENTITY lights IS
    PORT (
        SW       : IN  STD_LOGIC_VECTOR(7 DOWNTO 0);
        KEY      : IN  STD_LOGIC_VECTOR(0 DOWNTO 0);
        CLOCK_50 : IN  STD_LOGIC;
        LEDG     : OUT STD_LOGIC_VECTOR(7 DOWNTO 0)
    );
END lights;

ARCHITECTURE Structure OF lights IS
    COMPONENT nios_system
        PORT (
            clk     : IN STD_LOGIC;
            reset_n : IN STD_LOGIC;
            out_port_from_the_LEDs  : OUT STD_LOGIC_VECTOR (7 DOWNTO 0);
            in_port_to_the_Switches : IN  STD_LOGIC_VECTOR (7 DOWNTO 0)
        );
    END COMPONENT;
BEGIN
    -- Instantiate the Nios II system entity generated by the SOPC Builder
    NiosII: nios_system PORT MAP (CLOCK_50, KEY(0), LEDG, SW);
END Structure;
```

Figure 17. Instantiating the Nios II system.

Add this file and all the *.vhd files produced by the SOPC Builder to your Quartus II project. Also, add the necessary pin assignments on the DE2 board to your project. The procedure for making pin assignments is described in the tutorial Quartus II Introduction Using VHDL Design. Note that an easy way of making the pin assignments when we use the same pin names as in the DE2 User Manual is to import the assignments given in the file called DE2_pin_assignments.csv in the directory DE2_tutorials\design_files, which is included on the CD-ROM that accompanies the DE2 board and can also be found on Altera's DE2 web pages.

Since the system we are designing needs to operate at a 50-MHz clock frequency, add the needed timing assignment in your Quartus II project. The tutorial Timing Considerations with VHDL-Based Designs shows how this is done.

Having made the necessary settings, compile the code. You may see some warning messages associated with the Nios II system, such as some signals being unused or having wrong bit-lengths of vectors; these warnings can be ignored.

3.2 Programming and Configuration

Program and configure the Cyclone II FPGA in the JTAG programming mode as follows:

1. Connect the DE2 board to the host computer by means of a USB cable plugged into the USB-Blaster port. Turn on the power to the DE2 board. Ensure that the RUN/PROG switch is in the RUN position.

2. Select Tools > Programmer to reach the window in Figure 18.

3. If not already chosen by default, select JTAG in the Mode box.
Also, if the USB-Blaster is not chosen by default, press the Hardware Setup... button and select the USB-Blaster in the window that pops up.

4. The configuration file lights.sof should be listed in the window. If the file is not already listed, then click Add File and select it.

5. Click the box under **Program/Configure** to select this action.

6. At this point the window settings should appear as indicated in Figure 18. Press **Start** to configure the FPGA.

Figure 18. The Programmer window.

4 Running the Application Program

Having configured the required hardware in the FPGA device, it is now necessary to create and execute an application program that performs the desired operation. This can be done by writing the required program either in the Nios II assembly language or in a high-level language such as C. We will illustrate both approaches.

A parallel I/O interface generated by the SOPC Builder is accessible by means of registers in the interface. Depending on how the PIO is configured, there may be as many as four registers. One of these registers is called the Data register. In a PIO configured as an input interface, the data read from the Data register is the data currently present on the PIO input lines. In a PIO configured as an output interface, the data written (by the Nios II processor) into the Data register drives the PIO output lines. If a PIO is configured as a bidirectional interface, then the PIO inputs and outputs use the same physical lines. In this case there is a Data Direction register included, which determines the direction of the input/output transfer. In our unidirectional PIOs, it is only necessary to have the Data register. The addresses assigned by the SOPC Builder are 0x00001800 for the Data register in the PIO called Switches and 0x00001810 for the Data register in the PIO called LEDs, as indicated in Figure 14.

You can find a full description of the PIO interface by opening the SOPC Builder window in Figure 14 and right-clicking on the module name of a PIO (either Switches or LEDs). Then, in the pop-up box select **Data Sheet** to open the document **PIO Core with Avalon Interface**, which gives a full description of the interface. To use this facility you need to be connected to the Internet.
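To illustrate the bidirectional case in C (a sketch only, since our PIOs here are unidirectional): the PIO core's data sheet places the Direction register one word above the Data register, with a 1 in a bit position making the corresponding pin an output. Assuming a hypothetical bidirectional PIO whose Data register were assigned address 0x00001820, the access pattern might look like this:

```c
#define PIO_DATA      (volatile char *) 0x00001820  /* hypothetical base address */
#define PIO_DIRECTION (volatile char *) 0x00001824  /* Data + 4: Direction reg   */

void configure_and_use(void)
{
    char inputs;

    *PIO_DIRECTION = 0x0F;      /* bits 3-0 become outputs, bits 7-4 inputs */
    *PIO_DATA = 0x05;           /* drive the four output pins               */
    inputs = *PIO_DATA & 0xF0;  /* sample the four input pins               */
    (void) inputs;              /* placeholder: the value would be used here */
}
```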
4.1 Using a Nios II Assembly Language Program

Figure 19 gives a Nios II assembly-language program that implements our trivial task. The program loads the addresses of the Data registers in the two PIOs into processor registers r2 and r3. It then has an infinite loop that merely transfers the data from the input PIO, Switches, to the output PIO, LEDs.

```asm
.include "nios_macros.s"

.equ Switches, 0x00001800
.equ LEDs,     0x00001810

.global _start
_start: movia r2, Switches
        movia r3, LEDs
loop:   ldbio r4, 0(r2)
        stbio r4, 0(r3)
        br    loop
```

Figure 19. Assembly language code to control the lights.

The program includes the assembler directive .include "nios_macros.s", which informs the Assembler to use the Nios II macros that specify how the movia pseudoinstructions can be assembled. The directive .global _start indicates to the Assembler that the label _start is accessible outside the assembled object file. This label is the default label we use to indicate to the Linker program the beginning of the application program. For a detailed explanation of the Nios II assembly language instructions see the tutorial Introduction to the Altera Nios II Soft Processor.

Enter this code into a file lights.s and place the file into a working directory. We placed the file into the directory sopc_builder_tutorial\app_software. The program has to be assembled and converted into an S-Record file, lights.srec, suitable for downloading into the implemented Nios II system.

Altera provides the monitor software, called Altera Debug Client, for use with the DE2 board. This software provides a simple means for compiling, assembling and downloading of programs into a Nios II system implemented on a DE2 board. It also makes it possible for the user to perform debugging tasks. A description of this software is available in the Altera Debug Client tutorial.

Open the Altera Debug Client, which leads to the window in Figure 20. This software needs to know the characteristics of the designed Nios II system, which are given in the ptf file nios_system.ptf. Click the Nios II > Configure system... menu item to display the Nios II System Configuration window, shown in Figure 21, and perform the following steps:

1. Select the USB-Blaster cable from the Cable drop-down list, which is used with the DE2 board.
2. Click Browse... to display a file selection window and choose the nios_system.ptf file. Note that this file is in the design directory sopc_builder_tutorial.
3. Click Load.
4. The Altera Debug Client also needs to know where to load the application program. In our case, this is the memory block in the FPGA device. The SOPC Builder assigned the name onchip_memory_0 to this block. As shown in Figure 21, the Debug Client has already selected the correct memory device.
5. Having provided the necessary information, click Ok to confirm the system configuration.

Next, the source file lights.s needs to be specified. Click the Nios II > Configure program... menu item to display the Nios II Program Configuration window in Figure 22 and perform the following steps:

1. Click Add... to display a file selection window and choose the lights.s file. Note that this file is in the directory sopc_builder_tutorial\app_software.
2. Click Ok to confirm the program configuration.

Next, to assemble and download the lights.s program, click the Actions > Compile & Load menu item. The Altera Debug Client will invoke an assembler program, followed by a linker program. The commands used to invoke these programs, and the output they produce, can be viewed in the Info & Errors window of the Debug Client window. After the program has been downloaded onto the board, the program is displayed in the Disassembly window of the Debug Client as illustrated in Figure 23. Observe that movia is a pseudoinstruction which is implemented as two separate instructions.

Click the Actions > Continue menu item to execute the program. With the program running, you can now test the design by turning the switches SW7 to SW0 on and off; the LEDs should respond accordingly.

The Debug Client allows a number of useful functions to be performed in a simple manner. They include:

- single stepping through the program
- examining the contents of processor registers
- examining the contents of the memory
- setting breakpoints for debugging purposes
- disassembling the downloaded program

A description of this software and all of its features is available in the Altera Debug Client tutorial.

Figure 20. The Altera Debug Client window on startup.
Figure 21. The Nios II System Configuration window.
4.2 Using a C-Language Program

An application program written in the C language can be handled in the same way as the assembly-language program. A C program that implements our simple task is given in Figure 24. Enter this code into a file called lights.c.

```c
/* Declared volatile so that the compiler does not optimize away the
   repeated accesses to the memory-mapped PIO Data registers. */
#define Switches (volatile char *) 0x00001800
#define LEDs     (volatile char *) 0x00001810

void main()
{
    while (1) {
        *LEDs = *Switches;
    }
}
```

Figure 24. C language code to control the lights.

Perform the following steps to use this program:

1. Disconnect from the current debugging session by clicking the **Actions > Disconnect** menu item.
2. Click the **Nios II > Configure program...** menu item to launch the Nios II Program Configuration window.
3. Select **C** as the **Program Type** in the drop-down list.
4. Select the **lights.s** file and click **Remove** to remove it from the list of source files.
5. Click **Add...** and choose the **lights.c** file.
6. Click **Ok** to confirm the new program configuration.

The steps to compile, load, and run the program are the same as for an assembly language program.

Copyright ©2006 Altera Corporation. All rights reserved. Altera, The Programmable Solutions Company, the stylized Altera logo, specific device designations, and all other words and logos that are identified as trademarks and/or service marks are, unless noted otherwise, the trademarks and service marks of Altera Corporation in the U.S. and other countries. All other product or service names are the property of their respective holders. Altera products are protected under numerous U.S. and foreign patents and pending applications, mask work rights, and copyrights. Altera warrants performance of its semiconductor products to current specifications in accordance with Altera's standard warranty, but reserves the right to make changes to any products and services at any time without notice. Altera assumes no responsibility or liability arising out of the application or use of any information, product, or service described herein except as expressly agreed to in writing by Altera Corporation. Altera customers are advised to obtain the latest version of device specifications before relying on any published information and before placing orders for products or services. This document is being provided on an "as-is" basis and as an accommodation and therefore all warranties, representations or guarantees of any kind (whether express, implied or statutory) including, without limitation, warranties of merchantability, non-infringement, or fitness for a particular purpose, are specifically disclaimed.
{"Source-Url": "http://hamblen.ece.gatech.edu/DE1/DE1_CDROM/DE1_tutorials/tut_sopc_introduction_vhdl.pdf", "len_cl100k_base": 6031, "olmocr-version": "0.1.53", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 36317, "total-output-tokens": 6914, "length": "2e12", "weborganizer": {"__label__adult": 0.0008702278137207031, "__label__art_design": 0.0011053085327148438, "__label__crime_law": 0.0004820823669433594, "__label__education_jobs": 0.0007576942443847656, "__label__entertainment": 0.0001424551010131836, "__label__fashion_beauty": 0.0004549026489257813, "__label__finance_business": 0.000331878662109375, "__label__food_dining": 0.0006213188171386719, "__label__games": 0.0013093948364257812, "__label__hardware": 0.1378173828125, "__label__health": 0.0007467269897460938, "__label__history": 0.0004265308380126953, "__label__home_hobbies": 0.0004305839538574219, "__label__industrial": 0.00438690185546875, "__label__literature": 0.00020444393157958984, "__label__politics": 0.0003197193145751953, "__label__religion": 0.0013589859008789062, "__label__science_tech": 0.116943359375, "__label__social_life": 7.87973403930664e-05, "__label__software": 0.0181884765625, "__label__software_dev": 0.7109375, "__label__sports_fitness": 0.0006642341613769531, "__label__transportation": 0.0011043548583984375, "__label__travel": 0.0002593994140625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27150, 0.02017]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27150, 0.68004]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27150, 0.87869]], "google_gemma-3-12b-it_contains_pii": [[0, 1061, false], [1061, 2146, null], [2146, 3962, null], [3962, 7115, null], [7115, 8166, null], [8166, 8375, null], [8375, 9404, null], [9404, 9584, null], [9584, 9879, null], [9879, 11593, null], [11593, 11903, null], [11903, 12723, null], [12723, 15819, null], [15819, 17862, null], [17862, 20265, null], [20265, 22806, null], [22806, 24430, null], [24430, 24537, null], [24537, 24795, null], [24795, 27150, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1061, true], [1061, 2146, null], [2146, 3962, null], [3962, 7115, null], [7115, 8166, null], [8166, 8375, null], [8375, 9404, null], [9404, 9584, null], [9584, 9879, null], [9879, 11593, null], [11593, 11903, null], [11903, 12723, null], [12723, 15819, null], [15819, 17862, null], [17862, 20265, null], [20265, 22806, null], [22806, 24430, null], [24430, 24537, null], [24537, 24795, null], [24795, 27150, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27150, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27150, null]], "pdf_page_numbers": [[0, 1061, 1], 
[1061, 2146, 2], [2146, 3962, 3], [3962, 7115, 4], [7115, 8166, 5], [8166, 8375, 6], [8375, 9404, 7], [9404, 9584, 8], [9584, 9879, 9], [9879, 11593, 10], [11593, 11903, 11], [11903, 12723, 12], [12723, 15819, 13], [15819, 17862, 14], [17862, 20265, 15], [20265, 22806, 16], [22806, 24430, 17], [24430, 24537, 18], [24537, 24795, 19], [24795, 27150, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27150, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
ce9cded8be5cc842bf1b4e7a111cb87712a4d373
A Framework for Building Real-Time Expert Systems

S. Daniel Lee
Inference Corporation
550 N. Continental Blvd.
El Segundo, CA 90245

Abstract

NASA's Space Station Freedom is an example of complex systems that require both traditional and AI real-time methodologies. It has been mandated that Ada should be used for all new software development projects. The Station also requires distributed processing. Catastrophic failures on the Station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the Station. This is even more critical for other NASA projects that would have longer transmission delays (e.g. the Lunar base, Mars missions, etc.). To address these issues, we propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence. The proposed testbed for DAA is APEX (Autonomous Power EXpert), which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of NASA's Space Station Freedom.

1. Introduction

The current, ongoing work of Inference, the "Real-Time Expert Systems" project for NASA Johnson Space Center, under a subcontract to the University of Houston - Clear Lake, has provided valuable insights into requirements for real-time knowledge-based systems being developed for NASA's Space Station Freedom.

NASA's Space Station Freedom is an example of complex systems that require both traditional and AI real-time methodologies. The standard on-board processor on the Station is an 80386-based workstation with limited memory. In the ground-based control center, on the other hand, conventional engineering workstations can be used for AI applications. It has also been mandated that Ada should be used for all new software development projects.

The Station also requires distributed processing. For example, if expert systems for fault detection, isolation and recovery (FDIR) for the Station were fielded only in the ground-based control center, communication delays could cause serious problems. Catastrophic failures on the Station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the Station. This is even more critical for other NASA projects that would have longer transmission delays (e.g. the Lunar base, Mars missions, etc.).

However, current real-time knowledge-based system architectures suffer from a variety of shortcomings:

- A heavy dependence on inefficient implementation platforms, usually Common Lisp, which makes it difficult if not impossible for them to be deployed in real-time embedded systems.
- A weak integration with traditional real-time computing methodologies.
- An inability for the architectures to be distributed among multiple heterogeneous platforms that communicate asynchronously.

We have previously implemented an Ada-based expert system tool, ART-Ada, to facilitate the deployment of expert systems in Ada, which addresses the first point above [13], [14], [11], [15]. We propose a distributed agent architecture (DAA) that can support a variety of paradigms based on both traditional real-time computing and artificial intelligence.

2. Distributed Agent Architecture

The distributed agent architecture (DAA) for real-time knowledge-based systems is depicted in Figure 2-1.

Figure 2-1: Distributed Agent Architecture
DAA has the following technical objectives:

- The overall system performance should satisfy real-time requirements. Onboard systems should prevent catastrophic failures during the absence of assistance from ground-based systems due to the malfunction of communication systems.
- Onboard systems should adapt gracefully to dynamic environments by trading quality for speed of response.
- The architecture should be based on distributed and cooperative processing, which will enable migration of knowledge-based system modules from ground-based systems to onboard systems.
- Its baseline implementation language should be Ada. Ada will make it possible to employ traditional real-time computing methodologies and to deploy knowledge-based systems in embedded systems. If both ground systems and onboard systems are implemented in Ada, it would be easier to migrate modules from the ground to the Station.

DAA consists of distributed agents that are classified into two categories: reactive and cognitive. Reactive agents can be implemented directly in Ada to meet hard real-time requirements and to be deployed on on-board embedded processors. A traditional real-time computing methodology under consideration is the rate monotonic theory, which can guarantee schedulability based on analytical methods [20], [21]. AI techniques under consideration for reactive agents are approximate or "anytime" reasoning that can be implemented using Bayesian belief networks as in Guardian [8], [7]. Fuzzy logic [16], [26], [22] and reactive planning [1], [5], [10], [17], [18] are also being considered for reactive agents.

Cognitive agents are traditional expert systems that can be implemented in ART-Ada to meet soft real-time requirements. During the initial design of cognitive agents, it is critical to consider the migration path that would allow initial deployment on ground-based workstations with eventual deployment on on-board processors. ART-Ada technology enables this migration, while Lisp-based technologies make it difficult if not impossible.

In addition to reactive and cognitive agents, a meta-level agent would be needed to coordinate multiple agents and to provide meta-level control. An important area of coordination is timeline management. Following [2], we intend to implement three timelines (occurred, expected, and intended), where each timeline records one type of information. Any agent can process or post events in any timeline through the meta-level agent.

3. Reactive Agents

Reactive agents are designed to meet hard real-time requirements. Hard real-time requirements are different from soft real-time requirements in that if hard deadlines are not met, catastrophic failures are likely to occur. Catastrophic failures include the loss of human lives, the loss of major hardware components, etc. On the other hand, even if soft deadlines are violated, no major catastrophic failures are likely to occur. It is also critical that reactive agents fit into the embedded processors of the Space Station Freedom.

Some AI tasks can be directly implemented in a procedural language such as Ada. The use of Ada will enable us to take advantage of recent progress that has been made in the area of real-time computing in Ada. A noteworthy example is the rate monotonic theory, which can guarantee schedulability based on analytical methods [20], [21]. The rate monotonic theory guarantees schedulability of multiple tasks if certain conditions are satisfied.
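Stated concretely in its classic Liu and Layland form (which the paper does not spell out): given $n$ periodic tasks, where task $i$ has worst-case computation time $C_i$ and period $T_i$, rate monotonic priority assignment meets all deadlines whenever

$$U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le n\left(2^{1/n} - 1\right).$$

For example, three tasks with utilizations 0.20, 0.25, and 0.25 give $U = 0.70$, below the $n = 3$ bound of $3(2^{1/3} - 1) \approx 0.78$, so all three are guaranteed to meet their deadlines. The test is sufficient but not necessary, and the bound falls toward $\ln 2 \approx 0.693$ as $n$ grows.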
There are some restrictions, however:

- The execution time of a task must be known because it is a parameter in the conditions that must be satisfied.
- It assigns the highest priority to the periodic task with the shortest period. Therefore, it prevents tasks from having priorities based on other criteria.
- The theory applies only to multiple tasks, periodic and aperiodic, that reside on a single processor.

It is not clear whether the theory can be used for dynamic scheduling. It is usually used before program execution to determine whether deadlines could be met. If deadlines are not met, the periods of periodic tasks must be adjusted properly. We believe that the theory can be used to adjust periods dynamically if they are allowed to change dynamically. The theory does not prescribe how to find periods that would meet the deadlines, however. With the right Ada runtime executive that supports rate monotonic scheduling, schedulability can be guaranteed in advance by applying the theory analytically. It is expected that the Ada 9X Project will incorporate the rate monotonic algorithm in the next revision of the Ada language, which is due for release in 1993.

An AI technique that is useful for reactive agents is approximate or "anytime" reasoning. For example, Guardian uses a Bayesian belief network to provide reactive diagnosis. Each node of a Bayesian belief network is associated with an action. When a deadline is reached, Guardian simply recommends the action associated with the current node. If more time is given, it will continue to refine its belief and may recommend a conflicting action later on. We plan to implement an approximate reasoning module based on Bayesian belief networks in Ada.

Fuzzy logic-based systems [16], [26], [22] can also be used as reactive agents, using either modeling software or fuzzy hardware. In fact, fuzzy logic may subsume probabilistic reasoning using Bayesian belief networks. Fuzzy systems are becoming popular in Japan [19]. Togai InfraLogic, Inc. in Irvine, California manufactures fuzzy-system chips and modeling software written in C. Fuzzy systems are suitable for reactive agents because:

- Real-time response can be achieved by implementing the logic on a chip.
- Fuzzy logic allows approximate reasoning.

Various reactive planning methods have been proposed [1], [6], [10], [17], [18]. These planning methods (a.k.a. universal planning) have been sharply criticized, mainly for the exponential growth of their size with the complexity of the domain [6]. We plan to study both sides of the argument and investigate the possibilities of implementing reactive planning agents using some of these methods in DAA.

4. Cognitive Agents

Cognitive agents are traditional knowledge-based systems that are designed to meet soft real-time requirements. AI problems such as diagnosis demand solution accuracy within a soft deadline rather than the sacrifice of solution quality to meet a hard deadline. While reactive agents address the latter through approximate reasoning, cognitive agents should be based on AI techniques that facilitate deeper reasoning. For example, in Guardian, model-based reasoning is used for cognitive diagnosis while a Bayesian belief network is used for reactive diagnosis.
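To make the "anytime" behavior described in Section 3 concrete, the following is a minimal Python sketch in which reasoning refines a diagnosis until a deadline and an action is always available; the refinement steps are illustrative stand-ins for belief-network updating, not Guardian's actual mechanism.

```python
import time

# Minimal sketch of "anytime" reasoning: refine a hypothesis until the
# deadline, always keeping the action for the current best hypothesis ready.

def anytime_diagnose(refine, deadline_s):
    """refine: generator yielding (hypothesis, recommended_action) pairs,
    each refinement better informed than the last.
    Returns the action associated with the best hypothesis found in time."""
    action = None
    start = time.monotonic()
    for hypothesis, candidate_action in refine:
        action = candidate_action
        if time.monotonic() - start >= deadline_s:
            break  # deadline reached: recommend the current action
    return action

def toy_refinement():
    # Illustrative refinements only; a real agent would propagate evidence
    # through a Bayesian belief network here.
    yield "breaker fault?", "isolate_branch"
    yield "sensor fault likely", "cross_check_sensor"
    yield "sensor fault confirmed", "ignore_reading"

print(anytime_diagnose(toy_refinement(), deadline_s=0.01))
```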
Although AI systems usually run on a ground-based engineering workstation today, it is becoming increasingly important that these systems be readily available in real-time embedded environments. Inference has already developed ART-Ada, an Ada-based expert system tool, for this specific purpose. ART-Ada supports rule-based reasoning as well as frame-based reasoning that can be used to implement model-based reasoning. When the current version of ART-Ada is used, the total memory requirement for an ART-Ada application with hundreds of rules is 2 to 3 megabytes. This may be reasonable for embedded systems based on newer processors such as the Intel 80386 and 80960, the Motorola 68000 and 88000, and the MIPS RISC chip. It is important, however, to note that the current version of ART-Ada is not optimized. The primary focus of the current release was to provide functionality. Inference plans to release an optimized version of ART-Ada in the near future. Because of numerous bugs found in the Ada compilers used for this project, we could not make some of the obvious performance optimizations that could have made ART-Ada faster and smaller [11]. In addition to compiler problems, we also discovered some fundamental issues with the Ada language itself that affected the performance of ART-Ada [11]. In particular, the problem with dynamic memory management has the most significant impact on the execution size and performance of ART-Ada. Our current research effort is focused on implementing ART-Ada's own memory manager using an existing technology. If it is not possible to implement it in Ada, we will implement it in assembly language. Another area of research is to improve real-time support in ART-Ada. Several extensions to ART-Ada are proposed to address real-time issues; they are included in Appendix I.

5. A Meta-Level Agent

In a distributed architecture like DAA, the problem is how to provide meta-level control and coordination between distributed agents. A meta-level agent is a common blackboard for meta-level control and coordination. Some examples of meta-level control are:

- to control the data input rate of the preprocessor: when a serious problem arises, the input data rate can be reduced so that agents spend more resources in dealing with the current situation;
- to assign tasks to agents: crisis situations may have to be handled by reactive agents to provide quick fixes while cognitive agents may follow up on them later;
- to reconcile conflicting recommendations: when reactive agents and cognitive agents make conflicting recommendations, it is necessary to reconcile the differences; and
- to schedule operations for effectors: when multiple agents try to control effectors, it is necessary to schedule effector assignments.

Another important area of coordination is timeline management. Following [2], we intend to implement three timelines where each timeline records one type of information. The occurred timeline is used for representing facts acquired from monitoring sensors. The expected timeline represents what we expect in the future. The intended timeline represents goals. The intended timeline is different from the expected timeline in that actions can be taken to ensure that goals are met, whereas no actions need to be taken to produce expected results. Any agent can process or post events in any timeline through the meta-level agent. We intend to use ART-Ada to implement the meta-level agent.
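A minimal Python sketch of the three-timeline bookkeeping just described is given below; the event representation and the posting API are illustrative assumptions, not the planned ART-Ada implementation.

```python
import bisect

# Minimal sketch of the occurred/expected/intended timelines kept by a
# meta-level agent. Timeline names follow the paper; everything else
# (event encoding, time units, method names) is an illustrative assumption.

class MetaLevelAgent:
    TIMELINES = ("occurred", "expected", "intended")

    def __init__(self):
        # each timeline is a list of (time, event) kept sorted by time
        self._timelines = {name: [] for name in self.TIMELINES}

    def post(self, timeline, t, event):
        """Any agent can post an event on any timeline through this agent."""
        assert timeline in self.TIMELINES
        bisect.insort(self._timelines[timeline], (t, event))

    def events(self, timeline, t_from, t_to):
        """Return events on a timeline within a time window."""
        return [(t, e) for t, e in self._timelines[timeline]
                if t_from <= t <= t_to]

agent = MetaLevelAgent()
agent.post("occurred", 10.0, "bus_voltage_low")    # fact from monitoring
agent.post("expected", 12.0, "voltage_recovers")   # predicted future state
agent.post("intended", 15.0, "load_shed_complete") # goal to be ensured
print(agent.events("occurred", 0.0, 20.0))
```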
6. Interagent Communication

There are several possible layers in the interagent communication protocol:

- protocol for interprocess communication,
- protocol for telemetry,
- protocol for distributed objects,
- protocol for distributed knowledge bases, and
- protocol for distributed autonomous agents.

Unix interprocess communication protocol (e.g. sockets and TCP/IP) would be a reasonable low-level protocol for prototypes. We intend to develop a protocol for distributed objects because we believe that it is an optimal layer for interagent communication. Other higher-level protocols are interesting research topics, but they may not be as practical as the distributed object protocol. Eventually, protocols used in prototypical systems should be replaced with actual protocols supported by the Space Station Freedom.

7. APEX Testbed

The proposed testbed for DAA is a real-time monitoring and diagnosis expert system called APEX (Autonomous Power EXpert) for the electrical power distribution system of the Space Station Freedom [23], [24]. We will use APEX to illustrate how DAA can be applied to real-time knowledge-based systems for Space Station Freedom. It was previously implemented in KEE and Common Lisp and is being ported to ART-Ada and Ada at NASA Lewis Research Center. The APEX testbed will be used to demonstrate the advantages of this approach.

Figure 7-1: Current APEX

Figure 7-1 is a simplified block diagram of the current APEX implementation while Figure 7-2 is that of the new implementation based on DAA. In the current implementation of APEX, there are three modules:

- an expert system module written in KEE and Common Lisp that detects multiple faults, predicts possible future faults, and recommends fixes;
- a scheduler module written in C based on linear programming that schedules electrical power distribution for maximum utilization of generated electrical power; and
- several software controller modules written in Ada that detect single faults and fix them immediately [25].

The software controller modules are written in Ada and deployed on the hardware controllers of the electrical power distribution system. These modules are designed to meet timing requirements of less than a second. They are examples of reactive agents. The scheduler module is implemented separately from the expert system module, and runs on a PC communicating through a network. It is expected to be deployed on the Station as a reactive agent because its absence is unacceptable when the transmission between the Station and the control center is down. This module seems to lack dynamic scheduling capability. We intend to investigate the possibilities of applying AI techniques for dynamic scheduling. NASA Lewis Research Center is also considering COMPASS (COMputer Aided Scheduling System). COMPASS is an interactive planning and scheduling system developed by McDonnell Douglas, and is available through NASA Johnson Space Center [3]. It is written in Ada and uses X windows interfaces.

Figure 7-2: APEX based on DAA

The expert system module should be distributed; more critical functionality that requires reactive responses should be separated as a reactive diagnostician and deployed on the Station while less critical functionalities such as trend analysis and long-term prediction can remain as a cognitive diagnostician in the ground-based control center.
Following [8], [7], the reactive diagnostician based on associative reasoning methods will be implemented as a Bayesian belief network while the cognitive diagnostician based on rule- and model-based reasoning methods will be implemented in ART-Ada. By the same token, a recovery planner may have to be separated into a reactive planner and a cognitive planner.

8. Conclusion

DAA focuses on the cooperation between onboard systems and ground-based ones, which is not currently well addressed by the Space Station Freedom Program. It is not easy to achieve cooperative processing between onboard systems and ground systems. We believe that it is technically feasible, but it is difficult because it involves multiple organizations. Currently, onboard systems and ground-based systems are handled by different contractors. If an architecture like DAA is adopted as a general framework for the Space Station, it could be used as a "glue" between different contractors. Many flight-related software components will reside in the SSCC (Space Station Control Center) because onboard computing resources are very limited. We believe that ground-based flight-related software systems should operate in the same environment as onboard flight software for two reasons:

- If ground-based software components are crucial for flight, they should be considered part of the flight software. The same verification and validation standard that is normally applied to onboard flight software should also be applied to these software components.
- If ground-based software components are destined to migrate to the Station, it would be essential for the SSCC to have the same operating environment as the onboard environment.

For these reasons, the Ada mandate should be imposed on the development of any new ground-based flight-related software components as well as onboard software. Another important issue raised by DAA is the assessment of risks caused by communication delays. The average communication delay may be less than a minute in normal operating conditions, which is not significant. On the other hand, there might be longer delays caused by "blind spots" in the communication networks or by hardware failures in the transmission systems. NASA should assess any risks of having catastrophic failures on the Station due to the absence of support from ground-based systems during these communication delays.

9. Acknowledgments

The author wishes to acknowledge the guidance and support of Chris Culbert and Bob Savely of NASA Johnson Space Center, Greg Swietek of NASA Headquarters, and Captain Mark Gersh of the U.S. Air Force. Brad Allen, Mark Auburn and Sherry Walden of Inference Corporation contributed to the project. Barbara Hayes-Roth of Stanford University, Rajendra Dodhiawala and Cindy Pickering of FMC, Francois Felix Ingrand of SRI International, Tom Broten of TRW, Rich Knackstedt and Steve Bate of McDonnell Douglas, Jerry Walters of NASA LeRC and many other NASA scientists and contractors provided useful discussions and feedback.

References

Appendix I. Proposed Real-Time Extensions to ART-Ada

I.1. Performance Monitoring and Tuning

The performance of an expert system varies widely depending on how it is implemented. It is often necessary to monitor activities in the pattern matcher (e.g., the number of pattern instantiations, partial matches, activations, etc.) or the execution time of a rule RHS (right-hand side) action in order to determine areas for optimization.
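The following minimal Python sketch shows the kind of measurement involved: timing each rule's RHS action to locate optimization targets. The rule engine and rule names are hypothetical; only the instrumentation idea is illustrated.

```python
import time
from collections import defaultdict

# Minimal sketch of RHS instrumentation: record call counts and cumulative
# execution time per rule. The rule name and action are hypothetical.

rhs_stats = defaultdict(lambda: {"calls": 0, "total_s": 0.0})

def timed_rhs(rule_name):
    """Decorator that records call counts and cumulative RHS execution time."""
    def wrap(action):
        def run(*args, **kwargs):
            start = time.perf_counter()
            try:
                return action(*args, **kwargs)
            finally:
                stats = rhs_stats[rule_name]
                stats["calls"] += 1
                stats["total_s"] += time.perf_counter() - start
        return run
    return wrap

@timed_rhs("reconfigure-bus")
def reconfigure_bus(bus):
    pass  # the RHS action body would go here

reconfigure_bus("bus-A")
print(dict(rhs_stats))
```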
Performance analysis can be aided by a set of tools that graphically display the information. Unlike conventional software, rule-based systems are sensitive to the ordering of patterns in rules. Currently, the only way to optimize pattern ordering is to monitor activities in the pattern and join networks and optimize them manually. It may be possible, however, to automate this manual optimization process. It has been reported that an automated tool was successfully used to optimize join ordering [9]. An optimization algorithm can be automatically applied to a rule-based program to find near-optimal pattern ordering for the entire program.

I.2. Temporal Reasoning and Trend Analysis

In a real-time expert system, it is often necessary to reason about and perform statistical analysis on temporal data, i.e., data that change over time. In order to avoid information overloading, several levels of abstraction should be used. Raw data should be preprocessed to suppress noise and redundant data. Historical data should not participate in the pattern-matching process directly. Rather, high-level abstractions, acquired by applying temporal reasoning and trend analysis to the historical data, should be used in the knowledge base. We propose to implement a set of functions that can be layered on top of ART-Ada as a separate library for temporal reasoning and trend analysis. This library is based on three concepts: monitors, events and timers. A monitor is used to store historical data in a ring buffer outside of the knowledge base. A monitor is referred to only by its name, which is stored in a hash table. Events are used to extract temporal relations between parameters. An event is a collection of time points that satisfy certain conditions. Rule-based systems are usually data-driven. In a real-time system, however, processing must be driven by time as well as data. A timer can be used to implement time-driven processing. For more details on monitors, events and timers, see [12].

I.3. Dynamic Rule Priority

In real-time AI architectures, the priority of a task should be dynamically determined based on the timing constraints and the resource requirements of the task [8], [4]. In the current version of ART-Ada, the priority of a rule cannot be changed dynamically. If the priority of a rule is allowed to be changed at runtime, the rule scheduling strategy can also be modified dynamically. In the example below, the closer the distance, the higher the priority assigned to the rule activation. In fact, the same rule can be activated with different priorities if its priority can be modified dynamically. In order for dynamic rule priority to function properly, the priorities of all activated rules in the agenda must be refreshed before a rule is selected for execution. If the execution time of a rule is known, it can also be used to adjust its priority. It is often desirable to assign a higher priority to a rule with a shorter execution time. In fact, this is the strategy used by the rate monotonic theory [20], [21]. In the example, duration is the execution time of a rule RHS action. The execution time can be either measured or estimated.

    (defrule foo
      (declare (salience ?s = 1/?d))
      (declare (duration 1 sec))
      (schema ?enemy-plane (distance ?d))
      =>
      (...))

I.4. Message Passing between Distributed Expert Systems

Multiple cooperating ART-Ada applications can run on multiple loosely coupled processors. ART-Ada supports object-oriented programming.
A method is a function associated with an object or a class that can be inherited. When a message is sent via the ART-Ada function send, an appropriate method will be invoked. If objects are distributed over multiple processors, and a data dictionary is used to define the mapping between a processor and an object, the message passing mechanism through send can be used without modification to implement distributed message passing. When a message is sent, the system can simply check the data dictionary and send the message to the appropriate processor. Each ART-Ada application can use an asynchronous function to check its message queue between rule firings.
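The following minimal Python sketch illustrates the routing idea behind this scheme: a data dictionary maps objects to processors, and send consults it before dispatching. The host addresses, wire format, and the use of sockets are illustrative assumptions; ART-Ada's actual send and the Station's protocols would differ.

```python
import json
import socket

# Minimal sketch of distributed message passing through a data dictionary.
# Object names, addresses, and the JSON wire format are illustrative.

DATA_DICTIONARY = {
    "power-scheduler": ("10.0.0.2", 5001),        # object -> (host, port)
    "reactive-diagnostician": ("10.0.0.3", 5002),
}

def send(obj_name, method, args):
    """Route a message to the processor hosting obj_name; local dispatch
    would be used instead when the object lives on this processor."""
    host, port = DATA_DICTIONARY[obj_name]
    message = json.dumps({"object": obj_name, "method": method, "args": args})
    with socket.create_connection((host, port), timeout=1.0) as conn:
        conn.sendall(message.encode("utf-8"))

# send("power-scheduler", "reschedule", {"reason": "load change"})
```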
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19910011375.pdf", "len_cl100k_base": 4726, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23123, "total-output-tokens": 5848, "length": "2e12", "weborganizer": {"__label__adult": 0.0003516674041748047, "__label__art_design": 0.0004303455352783203, "__label__crime_law": 0.0005130767822265625, "__label__education_jobs": 0.0006527900695800781, "__label__entertainment": 0.00010514259338378906, "__label__fashion_beauty": 0.0001995563507080078, "__label__finance_business": 0.00044083595275878906, "__label__food_dining": 0.00037932395935058594, "__label__games": 0.0006418228149414062, "__label__hardware": 0.0027408599853515625, "__label__health": 0.0006952285766601562, "__label__history": 0.00041365623474121094, "__label__home_hobbies": 0.00013136863708496094, "__label__industrial": 0.0011663436889648438, "__label__literature": 0.0002663135528564453, "__label__politics": 0.0004122257232666016, "__label__religion": 0.0004954338073730469, "__label__science_tech": 0.30029296875, "__label__social_life": 0.0001003742218017578, "__label__software": 0.019195556640625, "__label__software_dev": 0.6689453125, "__label__sports_fitness": 0.0002949237823486328, "__label__transportation": 0.0009288787841796876, "__label__travel": 0.00022459030151367188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27342, 0.01955]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27342, 0.75659]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27342, 0.92397]], "google_gemma-3-12b-it_contains_pii": [[0, 3379, false], [3379, 7334, null], [7334, 12127, null], [12127, 15401, null], [15401, 18907, null], [18907, 22748, null], [22748, 25166, null], [25166, 27342, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3379, true], [3379, 7334, null], [7334, 12127, null], [12127, 15401, null], [15401, 18907, null], [18907, 22748, null], [22748, 25166, null], [25166, 27342, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27342, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27342, null]], "pdf_page_numbers": [[0, 3379, 1], [3379, 7334, 2], [7334, 12127, 3], [12127, 15401, 4], [15401, 18907, 5], [18907, 22748, 6], [22748, 25166, 7], [25166, 27342, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27342, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
3ecb37d0fbcb58dafc9ddcda10e9472a6386ce6e
SDN Management Layer: Design Requirements and Future Direction

Yuefeng Wang
Ibrahim Matta
Computer Science Department, Boston University
Boston, MA 02215
{wyf, matta}@bu.edu

Abstract—Computer networks are becoming more and more complex and difficult to manage. The research community has been expending a lot of effort to come up with a general management paradigm that is able to hide the details of the physical infrastructure and enable flexible network management. Software Defined Networking (SDN) is such a paradigm that simplifies network management and enables network innovations. In this survey paper, by reviewing existing SDN management layers (platforms), we identify the general common management architecture for SDN networks, and further identify the design requirements of the management layer that is at the core of the architecture. We also point out open issues and weaknesses of existing SDN management layers. We conclude with a promising future direction for improving the SDN management layer.

I. INTRODUCTION

Traditional networks are managed through low-level and vendor-specific configurations of individual network components, which is a very complicated and error-prone process. Nowadays, computer networks are becoming increasingly complex and difficult to manage. This increases the need for a general management paradigm that provides common management abstractions, hides the details of the physical infrastructure, and enables flexible network management. Making the network programmable (pioneered by earlier research in Active Networking [25]) leads to such a general paradigm, as programmability simplifies network management and enables network innovations. Software Defined Networking (SDN) has been proposed to enable programmable networks. In SDN, the network is considered to have two components: (1) the control plane, which determines how to handle and forward data traffic, and (2) the data plane, which handles and forwards data traffic toward its destination. SDN separates the control plane and data plane, and focuses on programming the control plane through a network management layer\(^1\). Through a high-level interface provided by the network management layer, network managers can easily manage the network without dealing with the complexity of low-level network details. In general, the data plane might not only be a forwarding plane that just stores and forwards packets (or discards them) through packet flow (forwarding) table manipulations, but it might also include more application-specific data processing capabilities [1], [8]. This is similar to the focus of earlier research in Active Networking, where network devices (switches or routers) are expected to perform computation on and modification of packet contents [25]. In this paper we focus on the control plane only for the purpose of programming the forwarding of packet flows, i.e., the network management layer for SDN networks. The main contribution of this paper is to identify the general common management architecture for SDN networks, and further identify the design requirements of the network management layer that is at the core of the architecture. The rest of the paper is organized as follows. We present the common management architecture for SDN networks in Section II.
Design requirements of the SDN management layer, along with open issues and weaknesses of existing management layers, are described in Section III. Section IV concludes the paper with a promising future direction for improving the SDN management layer.

II. MANAGEMENT ARCHITECTURE FOR SDN NETWORKS

The core of a management architecture for SDN networks is the management layer, as shown in Figure 1. A management layer should enable the monitoring and control of the network. The management layer itself does not manage the network but provides a programmable interface to management (or user) applications, which in turn manage the network. Examples of management applications include access control, virtual-machine (VM) migration, traffic-aware path selection and path adaptation, and redirecting or dropping suspected attack traffic.

A. Management Architecture Overview

Fig. 1. A general network management architecture for SDN networks.

\(^1\)We use the terms "management platform", "management layer" and "control platform" interchangeably.

The managed network consists of network devices, including switches or routers\(^2\). There is a process (switch process) running on each network device, and this process hides the internal details of the physical device but exposes a Network Device Interface (the so-called "Southbound API" [21]). The Network Device Interface provides a standardized way to access the switch processes which operate on the switches. The switch process is responsible for low-level operations on switches such as adding/removing packet flow entries, and configuration of ports and queues. The Management Layer consists of one or more controller processes, which may run on one or more physical servers. Controller processes collaborate to provide the network monitoring and control functionalities. The Management Layer exposes a Network Management Interface (the so-called "Northbound API" [21]) for management (or user) application processes to manage the network.

B. OpenFlow-based SDN Networks

In an SDN network, the Network Device Interface can be supported by any mechanism (protocol) that provides communication between the control plane (management layer) and data plane (switch processes). OpenFlow [19] is such a mechanism (protocol) that gives the management layer access to switches and routers. OpenFlow is the first standardized open protocol that allows network administrators or experimenters to adapt the configuration of switches and routers from different vendors in a uniform way so as to add and remove packet flow state (forwarding) entries. As OpenFlow can be easily deployed on existing hardware, it soon became popular in the research community and industry. OpenFlow enables programming of the hardware without needing vendors to expose the internal details of their devices. OpenFlow is now supported by major vendors, and OpenFlow-enabled switches are commercially available. OpenFlow is now the most commonly deployed SDN technology and is seen as an enabler of SDN. However, OpenFlow is not the only mechanism to enable SDN and support the Network Device Interface, and any mechanism that could provide communication between the control plane and data plane can be used. The Forwarding and Control Element Separation (ForCES) protocol [30] is an example; however, it has not been adopted by major switch/router vendors. In this paper, we focus on OpenFlow-based SDN networks, which have recently attracted a lot of attention in the network management area due to the growing popularity of OpenFlow.
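The flow-table manipulation at the heart of this interface can be illustrated with a minimal Python sketch; the match/action representation below is a simplified stand-in for OpenFlow messages, not a real controller API.

```python
# Minimal sketch of flow-table manipulation in the OpenFlow style: the
# management layer installs (match, actions) entries, and a table miss is
# forwarded to the controller. The data structures are illustrative.

flow_table = []  # the switch's flow table, held here for illustration

def install_flow_entry(match, actions, priority=0):
    """In a real deployment this would be an OpenFlow flow-mod message."""
    flow_table.append({"match": match, "actions": actions,
                       "priority": priority, "packet_count": 0})

def handle_packet(packet):
    """Apply the highest-priority matching entry, as an OpenFlow switch would."""
    candidates = [e for e in flow_table
                  if all(packet.get(k) == v for k, v in e["match"].items())]
    if not candidates:
        return "send_to_controller"  # table miss: forward to the management layer
    entry = max(candidates, key=lambda e: e["priority"])
    entry["packet_count"] += 1
    return entry["actions"]

install_flow_entry({"ip_dst": "10.0.0.5", "tcp_dst": 80}, ["output:3"], priority=10)
print(handle_packet({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # ['output:3']
print(handle_packet({"ip_dst": "10.0.0.9"}))                 # 'send_to_controller'
```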
C. Administrator-level Interface and User-level Interface

There are two types of interface that can be provided by the network management layer: an administrator-level interface and a user-level interface. An administrator-level interface is provided to the network administrator, who uses this interface to write management applications to monitor and control the network as a whole. This interface is provided by default by all management layers. On the other hand, a user-level interface is provided to network end-users. End-users write general applications (such as a video conferencing application or a Hadoop-based application) using this interface to affect the management of their traffic and as a result to achieve better performance, security or predictable behavior for their applications [13]. To achieve the same goal in an SDN network without the user-level interface, end-users may either (1) have to request service out-of-band from the network administrator, which is inconvenient and increases the workload on the network administrator, or (2) use a dedicated per-application management controller that runs as the administrator, which makes it hard to combine different application management controllers on the same physical network since decisions from different management controllers may conflict with each other.

D. Policy-Based Network Management and Scope

By policy-based network management we mean that network management can be expressed in terms of high-level policies instead of network device configurations, which are low-level and vendor-specific. The network management layer is responsible for translating these high-level policies into low-level and vendor-specific configurations of network devices (switches or routers). Policies are in the form of a set of rules which define a set of network conditions, responses to network conditions, and network components that perform these responses [17]. Advantages of policy-based network management include: simplifying device, network and service management, enabling the provision of different services to different users, managing the increasing complexity of programming devices, and supporting business-driven network configurations [24].

Contribution: One of our contributions in this paper is introducing and defining the concept of scope and scoping in network management as follows. A network management layer manages a network over a certain scope that includes the network's physical components, i.e. devices, and logical components, i.e. processes. For a distributed management layer that consists of multiple management controllers, each management controller is a process that has its management subscope, which consists of a subset of network components (devices and processes). Also each policy has its own subscope where the policy may only affect a subset of network components. A policy is enforced on the network through one or multiple management controllers. Scoping (or support for scope) means that a management layer explicitly defines the subscope induced by a given policy, and dynamically creates new management subscopes and associated controller processes to activate such a policy. Scoping enables fine-grained control over the network.

III. DESIGN REQUIREMENTS OF MANAGEMENT LAYER

In this section we describe the design requirements of the management layer for OpenFlow-based SDN networks.

\(^2\)In this paper, switches and routers are considered to be the same, and both provide Layer 2 and Layer 3 operations.
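Before turning to the individual requirements, the scope and subscope notions of Section II-D can be made concrete with a small Python sketch; all classes and names below are illustrative assumptions, not an existing API.

```python
# Minimal sketch of scope/subscope: a scope is a set of network components,
# a policy induces a subscope, and activating a policy would spawn a
# controller over that subscope. Everything here is illustrative.

class Scope:
    def __init__(self, components):
        self.components = frozenset(components)  # devices and processes

    def subscope(self, components):
        assert set(components) <= self.components
        return Scope(components)

class Policy:
    def __init__(self, name, applies_to):
        self.name = name
        self.applies_to = applies_to  # predicate over components

    def induced_subscope(self, scope):
        return scope.subscope({c for c in scope.components if self.applies_to(c)})

network = Scope({"sw1", "sw2", "sw3", "proc-a"})
edge_policy = Policy("rate-limit-edge", lambda c: c in {"sw1", "sw2"})
controller_scope = edge_policy.induced_subscope(network)
print(sorted(controller_scope.components))  # ['sw1', 'sw2']
```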
A. A Global Network View and General API

A basic requirement of the management layer is to provide a global network view and offer a general API, which simplify the programming of management applications. NOX [15], as shown in Figure 2, is the first OpenFlow management platform that met this requirement. It is a follow-up to previous control platforms (SANE [6] / Ethane [5]) that only focused on security features (access control). The NOX management layer contains only one controller. The global view of NOX includes the switch-level topology, and the location of users, hosts and services. NOX constructs the network view and the bindings between (user, host and service) names and addresses through packet-flow initiations and built-in base applications that use DNS, DHCP and LLDP. The view does not include the current state of network traffic, but applications can query the status of switches through OpenFlow messages. NOX applications register handlers for particular events. These events include connection creation and deletion, user registration and unregistration, link going up and down, switch join and leave, packet received, switch statistics received, and other application-specified events. NOX controls network traffic by sending instructions to switches through OpenFlow messages which install flow state (forwarding) entries in switches. A flow entry in OpenFlow switches contains a set of header matching fields, packet counters and corresponding actions. When a packet arrives at a switch, if the packet matches a flow entry at the switch, the switch updates the counter and applies the corresponding actions. If the packet does not match any flow entry, the packet is forwarded to the management layer (NOX controller), and the controller determines what to do by checking registered event handlers.

As illustrative examples, we describe next how NOX performs network discovery, and access control and routing: (1) For network discovery, each switch sends out LLDP messages through its ports to its neighbors. When LLDP messages are received by neighbor switches, as these messages do not match any flow entry, they are forwarded to the NOX controller. Through monitoring the sending and receiving of these LLDP messages by switches, NOX figures out the network topology. (2) For access control, the first packet from the sender to the destination is forwarded to the NOX controller by the first-hop switch as it does not have a corresponding flow entry. When the NOX controller receives this packet, the built-in access control application (handler) decides if the flow is allowed or not. If so, the built-in routing application computes the Layer-2 route in a centralized way (similar to the Routing Control Platform in [3]) based on the network topology, and translates the route to a set of flow entries installed in switches along the path to the destination; otherwise the packet is simply discarded.

Writing complicated programs with NOX is difficult since (1) management applications have to configure each switch separately, as well as the behavior of the NOX controller itself for packets that match no rule when they reach a switch, and (2) different flow rules are not easy to compose as NOX does not support rule operations such as negation and union.
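The event-handler programming model just described can be illustrated with a minimal Python sketch combining the access-control and routing examples above; the Switch class, topology, and handler signature are hypothetical stand-ins for the NOX API.

```python
# Minimal sketch of a NOX-style packet-in handler: an access-control check
# on the first packet of a flow, then flow entries installed along a
# centrally computed route. All names and the topology are illustrative.

class Switch:
    def __init__(self, name):
        self.name, self.flow_table = name, []

    def install(self, match, actions):
        self.flow_table.append((match, actions))

TOPOLOGY = {("s1", "s2"): 3, ("s2", "s3"): 1}  # (switch, next hop) -> out port
SWITCHES = {n: Switch(n) for n in ("s1", "s2", "s3")}
ALLOWED = {("10.0.0.1", "10.0.0.7")}           # access-control policy

def on_packet_in(ingress, packet):
    """Handler for packets that miss every flow entry and reach the controller."""
    pair = (packet["ip_src"], packet["ip_dst"])
    if pair not in ALLOWED:
        return "drop"
    # Centralized routing: here simply the fixed path s1 -> s2 -> s3.
    for (sw, _next_hop), port in TOPOLOGY.items():
        SWITCHES[sw].install({"ip_src": pair[0], "ip_dst": pair[1]},
                             [f"output:{port}"])
    return "installed"

print(on_packet_in("s1", {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.7"}))
```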
Many management platforms with high-level language support have been proposed to simplify management programming; they translate programs written in a high-level language into low-level switch configurations. These platforms include the Flow-based Management Language (FML) [16], Procera [28], Frenetic [14], Pyretic [20], and Maple [29]. Also, the NOX controller is a single-threaded process and is not optimized for performance, and many multi-threaded management controllers have been proposed, including NOX-MT [27] and Beacon [10].

Open Issues: Even though many high-level languages have been developed, programming management applications still has to deal with a lot of low-level details of the network, such as per-link or per-switch configurations. Also there is no standard SDN management API: many different management APIs have been proposed, but they do not extend existing ones, and there is not much evolution of these APIs.

B. Distributed Controllers

NOX is a centralized management layer that has a single controller. However, for a large-scale network, a centralized management layer is not enough with respect to scalability and reliability, so it is necessary to distribute the management layer to run on distributed controllers. Onix [18], shown in Figure 3, is a distributed (proprietary) management layer that consists of multiple Onix instances (controllers). Each Onix instance connects to and manages a subset of the network devices. Onix enables flexible network state distribution primitives between Onix instances and between Onix instances and switches, so management applications do not have to re-implement the distribution mechanism. Onix helps address the scalability issue through multiple Onix instances and also by enabling the partitioning and aggregation of the management subscope of each Onix instance. Onix maintains a Network Information Base (NIB), which contains all network entities, including nodes, links, ports, forwarding tables, and so on. The NIB is replicated and distributed over Onix instances, and Onix makes sure that states are consistent among them. Each network entity is stored as an object in the NIB. Onix provides a more general API than NOX: it enables management applications to access (create, destroy, inspect, and modify) network entities through operations on the NIB, and it supports notification callbacks on some network state changes. The operations on the NIB are automatically translated to flow operations on switches. This is different from NOX, as NOX applications have to specify operations on each switch.

Open Issues: Scoping (Section II-D) is not well supported in Onix and other SDN distributed management layers such as HyperFlow [26]. Onix allows creating new Onix instances with new scopes through aggregation or partitioning, but the new scope is restricted to devices that are physically close to each other. Scope in Onix is thus flat, i.e. it spans only one level of processes, and a higher-level scope that spans distant processes is not supported. Furthermore, it is not easy to define the subscope induced by a given policy.

C. Network Virtualization

Network virtualization provides support for multiple isolated virtual networks to be built on top of the same physical network, and it is an important aspect of the management layer since (1) it can improve resource utilization of the physical network by enabling network consolidation, and (2) it can be used to build (virtual) testbeds that provide a safe and realistic environment for developing and testing new network features (protocols and applications) in isolation before running them on the real network.
FlowVisor [23] is a centralized management layer which provides network virtualization that enables building and controlling multiple user-defined virtual networks on the same physical network. FlowVisor can be seen as a network hypervisor, as shown in Figure 4. FlowVisor acts as a transparent proxy between user-defined guest controllers and switches. It enables multiple NOX controllers (or other controllers such as Beacon [10]) to share the same switches. Each guest controller has full control over its subscope, or so-called network slice (an instance of a virtual network), where a slice is a subscope (subset) of the scope managed by FlowVisor. FlowVisor provides transparency and isolation between slices by inspecting, rewriting and policing the OpenFlow messages that it receives from guest controllers. In FlowVisor, the flowspace assigned to a slice is defined by a collection of packet header fields including: src/dst MAC address, VLAN id, Ethernet protocol type, IP protocol, src/dst IP address, ToS/DSCP and src/dst port number. FlowVisor isolates slices from each other by making sure slices' flowspaces do not overlap.

FlowVisor has several drawbacks, including: (1) the virtual topologies are restricted by the physical topology. If two physical switches, to which two virtual switches map, are not directly connected in the physical network, then these two virtual switches cannot be directly connected in the virtual network; and (2) virtual networks do not have a separate virtual flowspace. Flowspaces of the physical network are assigned to different virtual networks, and the same flowspace cannot be controlled by different slices. To overcome the above drawbacks, several management layers have been proposed such as ADVisor (Advanced FlowVisor) [22] and FlowN [9]. Both ADVisor and FlowN enable the creation of virtual topologies that are completely decoupled from the underlying physical network, and guest controllers have completely separate virtual flowspaces.

Open Issues: The management layers mentioned above can only provide network virtualization over networks that are under a single administrative domain. Network virtualization across multiple administrative domains is not supported. However this is important in environments such as a cloud computing marketplace where multiple cloud providers are present.

D. User-level Interface Support

As we have mentioned in Section II-C, it is important for the management layer to support a user-level interface that enables better user application performance. FlowVisor enables users to place control over the network through network virtualization, but each user has to program a separate controller, which introduces more overhead. PANE [13], as shown in Figure 5, is a centralized management layer which directly delegates read and write authority from the network administrator to end-users by providing a user-level interface. PANE is developed based on the concept of participatory networks. PANE enables multiple user applications to place controls over the network (including reserving resources, providing hints about future traffic, and querying network state). PANE uses a Network Information Base (NIB) to store network elements and their states (including hosts, switches, ports, queues, links, and their capacity such as rate-limiters or per-port output queues in a switch).
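A minimal Python sketch of a NIB in the spirit of Onix and PANE is shown below: entities stored as objects, with notification callbacks on state changes. The entity model and method names are illustrative assumptions, not the actual Onix or PANE interfaces.

```python
from collections import defaultdict

# Minimal sketch of a Network Information Base: entities are attribute
# dicts keyed by id, and applications can watch entities for changes.

class NIB:
    def __init__(self):
        self._entities = {}                 # entity id -> attribute dict
        self._watchers = defaultdict(list)  # entity id -> callbacks

    def create(self, entity_id, **attrs):
        self._entities[entity_id] = dict(attrs)

    def update(self, entity_id, **attrs):
        self._entities[entity_id].update(attrs)
        for callback in self._watchers[entity_id]:
            callback(entity_id, self._entities[entity_id])

    def inspect(self, entity_id):
        return dict(self._entities[entity_id])

    def watch(self, entity_id, callback):
        self._watchers[entity_id].append(callback)

nib = NIB()
nib.create("link:s1-s2", status="up", capacity_mbps=1000)
nib.watch("link:s1-s2", lambda eid, state: print("changed:", eid, state))
nib.update("link:s1-s2", status="down")  # triggers the callback
```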
In PANE, a share determines the privileges that principals have in order to read or write the state of a set of flows. A principal is a triple consisting of an application, the host it runs on, and the user running it. PANE maintains a share tree that stores the authority (permissions) of principals, and shares in the share tree are added (or removed) by the network administrator.

Fig. 5. PANE provides a user-level API to end-users.

Open Issues: PANE allows users to reserve bandwidth; however, other aspects of QoS support (including loss rate and delay guarantees) are not supported. A management layer should provide users with an API that offers predictable network connections, as this is crucial for user application performance. However, since existing SDN systems are tied to the TCP/IP architecture, the rudimentary "best-effort" delivery service of TCP/IP makes it hard for the SDN management layer to support QoS requirements.

E. Network Orchestration

The management layer may receive requests from different management (or user) applications. These requests may conflict with each other (for example, one request may deny all traffic to port 80, and another one may allow such traffic), which affects the normal operation of the network. Many management layers (such as NOX and Onix) expect applications themselves to avoid or resolve conflicts, but this is difficult to achieve, especially when applications belong to different users. So it is important for the management layer to provide a network orchestration mechanism: the capability of resolving conflicts between different applications. PANE [13] resolves conflicts between different user-level applications through Hierarchical Flow Tables (HFTs) [12]. An HFT is a policy tree where each node in the tree stores one or more policy atoms (requests that are installed on the network). A policy atom is a pair of a flow matching rule and a corresponding action. In PANE, a conflict happens when policy atoms overlap with each other, i.e., there is a flow that matches more than one policy atom with contradictory actions. To resolve conflicts, when a packet arrives at PANE, PANE first finds all matching policy atoms in the policy tree, applies the conflict-resolution operator based on the positions of the policy atoms in the policy tree, and eventually returns a single resolved action. A conflict-resolution operator takes two policy atoms as input, and returns a resolved action based on their relation in the policy tree (in-node, parent-child, or sibling-sibling). Namely, PANE first resolves the conflict between policy atoms in the same node (in-node), then in siblings under the same parent node, and lastly resolves the conflict with the parent node. The semantics of the conflict-resolution operators need to be predefined by the PANE administrator and can be extended.

Open Issues: PANE and other work such as Maestro [4] focus on resolving conflicts between requests sent to the management layer. However, an important aspect that is not yet well studied is how to compose different policies (which may or may not conflict with each other) over different scopes (or the same scope) in order to achieve better performance in terms of resource utilization, routing convergence and overhead, etc.
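A minimal Python sketch of HFT-style conflict resolution is given below; the tree layout and the particular operators (deny wins in-node and between siblings, a child's decision overrides its parent) are simplified assumptions, since PANE's operators are administrator-defined.

```python
# Minimal sketch of conflict resolution over a policy tree: collect the
# matching policy atoms, then resolve pairwise by tree relation. The
# operator semantics here are simplified assumptions.

class Node:
    def __init__(self, atoms, children=()):
        self.atoms = atoms              # list of (match, action) pairs
        self.children = list(children)

def matching_action(node, packet):
    """Resolve to a single action: resolve in-node conflicts (deny wins),
    resolve sibling conflicts (deny wins), and let children override parents."""
    def matches(match):
        return all(packet.get(k) == v for k, v in match.items())

    local = [a for m, a in node.atoms if matches(m)]
    local_action = ("deny" if "deny" in local else local[0]) if local else None

    child_decisions = [d for d in (matching_action(c, packet) for c in node.children)
                       if d is not None]
    if child_decisions:
        return "deny" if "deny" in child_decisions else child_decisions[0]
    return local_action

tree = Node([({}, "allow")],                      # root: allow by default
            [Node([({"tcp_dst": 80}, "deny")])])  # child slice denies port 80
print(matching_action(tree, {"tcp_dst": 80}))     # deny (child overrides parent)
print(matching_action(tree, {"tcp_dst": 22}))     # allow
```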
IV. CONCLUSION AND FUTURE WORK

In this survey paper, through reviewing existing SDN management layers, we identify the common management architecture for SDN networks, as well as the design requirements of the management layer that is at the core of the architecture. We also point out open issues and weaknesses of existing management layers, including weak QoS support and manageability. Existing SDN management layers are tied to the Internet architecture, which is known to be flawed in many respects such as security, mobility and QoS support. Being tied to TCP/IP inevitably introduces these problems into the management layer, and it costs more just to work around them. The research community has been trying to improve the SDN management layer by resorting to ad-hoc patches that resolve issues with TCP/IP. Take QoS support as an example: earlier versions of the OpenFlow protocol only provided operations on forwarding entries, and did not allow operations on switch queues and scheduling policies, which are important aspects of QoS support. Many SDN management layers (such as PANE [13]), in their attempt to provide QoS support, have to rely on mechanisms such as reservations and prioritized queue management.

We believe that a better approach is to build a management layer on top of a new network architecture without the shortcomings of the TCP/IP architecture. Our solution is to adopt the Recursive InterNetwork Architecture (RINA) [7], which inherently solves such shortcomings by addressing the communication problem in a fundamental and structured way. RINA provides better manageability support with scoping: it enables recursive dynamic layer instantiation [11], where a layer (a virtual network of processes providing communication service) with a new management scope can be dynamically and recursively formed over existing management scopes. The new scope can be a subscope of an existing scope, and more importantly, it can be a larger scope that spans multiple existing scopes (over multiple management administrative domains), i.e., RINA supports nested scopes. Layers over different scopes can be easily configured with different policies, but they use the same recursive RINA mechanisms [7]. In contrast, most existing SDN management layers are limited to networks within a single administrative domain. It is not easy to define new scopes (or subscopes), and so far there are no common SDN mechanisms to facilitate collaboration across different administrative domains. Moreover, RINA inherently and explicitly supports QoS through the RINA API used to connect application processes. Due to the explicitness of the QoS request during the connection allocation phase, RINA can achieve better resource utilization and, more importantly, help end-users improve application performance. The provisioning of QoS can be easily supported by RINA's recursive mechanisms, such as flow allocation and error control, and the management policies can be recursively composed over different management scopes. Our preliminary work indicates that RINA's policy-based network management architecture offers a promising direction for SDN [2], which we continue to investigate.

ACKNOWLEDGMENT

This work is supported in part by the National Science Foundation (NSF grant CNS-0963974).

REFERENCES
{"Source-Url": "http://dcommon.bu.edu/bitstream/handle/2144/20814/2014-006-sdn-management-survey.pdf?isAllowed=y&sequence=1", "len_cl100k_base": 5249, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21761, "total-output-tokens": 7946, "length": "2e12", "weborganizer": {"__label__adult": 0.00034356117248535156, "__label__art_design": 0.0003817081451416016, "__label__crime_law": 0.00035572052001953125, "__label__education_jobs": 0.0012178421020507812, "__label__entertainment": 0.0001811981201171875, "__label__fashion_beauty": 0.0001773834228515625, "__label__finance_business": 0.0005998611450195312, "__label__food_dining": 0.000396728515625, "__label__games": 0.0006785392761230469, "__label__hardware": 0.0034160614013671875, "__label__health": 0.0008459091186523438, "__label__history": 0.00046706199645996094, "__label__home_hobbies": 0.00010609626770019533, "__label__industrial": 0.0006184577941894531, "__label__literature": 0.00031757354736328125, "__label__politics": 0.00035119056701660156, "__label__religion": 0.000499725341796875, "__label__science_tech": 0.47119140625, "__label__social_life": 0.00012314319610595703, "__label__software": 0.03411865234375, "__label__software_dev": 0.482177734375, "__label__sports_fitness": 0.00030350685119628906, "__label__transportation": 0.0008425712585449219, "__label__travel": 0.0002703666687011719}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34128, 0.03071]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34128, 0.47158]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34128, 0.89608]], "google_gemma-3-12b-it_contains_pii": [[0, 402, false], [402, 4809, null], [4809, 10814, null], [10814, 15410, null], [15410, 20603, null], [20603, 26166, null], [26166, 34128, null]], "google_gemma-3-12b-it_is_public_document": [[0, 402, true], [402, 4809, null], [4809, 10814, null], [10814, 15410, null], [15410, 20603, null], [20603, 26166, null], [26166, 34128, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34128, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34128, null]], "pdf_page_numbers": [[0, 402, 1], [402, 4809, 2], [4809, 10814, 3], [10814, 15410, 4], [15410, 20603, 5], [20603, 26166, 6], [26166, 34128, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34128, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
d6c7bdf480d6943c27b4f81ae75c9d8e908685a4
[REMOVED]
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-00498865/file/cmsga.pdf", "len_cl100k_base": 4882, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 26538, "total-output-tokens": 6105, "length": "2e12", "weborganizer": {"__label__adult": 0.0002815723419189453, "__label__art_design": 0.00031566619873046875, "__label__crime_law": 0.00028705596923828125, "__label__education_jobs": 0.0004718303680419922, "__label__entertainment": 6.604194641113281e-05, "__label__fashion_beauty": 0.0001310110092163086, "__label__finance_business": 0.00019109249114990232, "__label__food_dining": 0.0002753734588623047, "__label__games": 0.00044918060302734375, "__label__hardware": 0.0010728836059570312, "__label__health": 0.00045561790466308594, "__label__history": 0.0002493858337402344, "__label__home_hobbies": 7.677078247070312e-05, "__label__industrial": 0.0003752708435058594, "__label__literature": 0.0002157688140869141, "__label__politics": 0.0002363920211791992, "__label__religion": 0.0004563331604003906, "__label__science_tech": 0.03619384765625, "__label__social_life": 7.426738739013672e-05, "__label__software": 0.00861358642578125, "__label__software_dev": 0.9482421875, "__label__sports_fitness": 0.0002777576446533203, "__label__transportation": 0.0005230903625488281, "__label__travel": 0.00022172927856445312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28183, 0.03385]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28183, 0.51252]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28183, 0.9236]], "google_gemma-3-12b-it_contains_pii": [[0, 1069, false], [1069, 2483, null], [2483, 5482, null], [5482, 8222, null], [8222, 9777, null], [9777, 12568, null], [12568, 15042, null], [15042, 17869, null], [17869, 19209, null], [19209, 20650, null], [20650, 22561, null], [22561, 25528, null], [25528, 28183, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1069, true], [1069, 2483, null], [2483, 5482, null], [5482, 8222, null], [8222, 9777, null], [9777, 12568, null], [12568, 15042, null], [15042, 17869, null], [17869, 19209, null], [19209, 20650, null], [20650, 22561, null], [22561, 25528, null], [25528, 28183, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28183, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28183, null]], "pdf_page_numbers": [[0, 1069, 1], [1069, 2483, 2], [2483, 5482, 3], [5482, 8222, 4], [8222, 9777, 5], [9777, 12568, 6], [12568, 15042, 7], [15042, 17869, 8], [17869, 19209, 9], [19209, 20650, 10], [20650, 22561, 11], [22561, 25528, 12], [25528, 28183, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28183, 0.06481]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
6b08a79c6c7555dd6baabc1f33f0b686aca91f57
Pass-Efficient Unsupervised Feature Selection

Crystal Maung
Department of Computer Science
The University of Texas at Dallas
Crystal.Maung@gmail.com

Haim Schweitzer
Department of Computer Science
The University of Texas at Dallas
HSchweitzer@utdallas.edu

Abstract

The goal of unsupervised feature selection is to identify a small number of important features that can represent the data. We propose a new algorithm, a modification of the classical pivoted QR algorithm of Businger and Golub, that requires a small number of passes over the data. The improvements are based on two ideas: keeping track of multiple features in each pass, and skipping calculations that can be shown not to affect the final selection. Our algorithm selects the exact same features as the classical pivoted QR algorithm, and has the same favorable numerical stability. We describe experiments on real-world datasets which sometimes show improvements of several orders of magnitude over the classical algorithm. These results appear to be competitive with recently proposed randomized algorithms in terms of pass efficiency and run time. On the other hand, the randomized algorithms may produce more accurate features, at the cost of a small probability of failure.

1 Introduction

Work on unsupervised feature selection has received considerable attention. See, e.g., [1, 2, 3, 4, 5, 6, 7, 8]. In numerical linear algebra unsupervised feature selection is known as the column subset selection problem, where one attempts to identify a small subset of matrix columns that can approximate the entire column space of the matrix. See, e.g., [9, Chapter 12]. The distinction between supervised and unsupervised feature selection is as follows. In the supervised case one is given labeled objects as training data and features are selected to help predict that label; in the unsupervised case nothing is known about the labels.

We describe an improvement to the classical Businger and Golub pivoted QR algorithm [9, 10]. We refer to the original algorithm as the QRP, and to our improved algorithm as the IQRP. The QRP selects features one by one, using $k$ passes in order to select $k$ features. In each pass the selected feature is the one that is the hardest to approximate by the previously selected features. We achieve improvements to the algorithm run time and pass efficiency without affecting the selection and the excellent numerical stability of the original algorithm. Our algorithm is deterministic, and runs in a small number of passes over the data. It is based on the following two ideas:

1. In each pass we identify multiple features that are hard to approximate with the previously selected features. A second selection step among these features uses an upper bound on unselected features that enables identifying multiple features that are guaranteed to have been selected by the QRP. See Section 4 for details.

2. Since the error of approximating a feature can only decrease when additional features are added to the selection, there is no need to evaluate candidates with error that is already "too small". This allows a significant reduction in the number of candidate features that need to be considered in each pass. See Section 4 for details.

2 Algorithms for unsupervised feature selection

The algorithms that we consider take as input large matrices of numeric values. We denote by $m$ the number of rows, by $n$ the number of columns (features), and by $k$ the number of features to be selected.
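With this notation in place, the accuracy criterion used throughout can be stated concretely. The following minimal numpy sketch measures the Frobenius-norm error of representing all of a matrix $A$ by a candidate column subset; the data are random and purely illustrative.

```python
import numpy as np

# Minimal sketch of the column subset selection objective: approximate all
# columns of A as linear combinations of a selected subset S, and measure
# the Frobenius-norm error. The matrix here is random and illustrative.

def approximation_error(A, S):
    """Least-squares error of representing A by its columns indexed by S."""
    C = A[:, S]                                   # m x k selected columns
    coeffs, *_ = np.linalg.lstsq(C, A, rcond=None)
    return np.linalg.norm(A - C @ coeffs)         # Frobenius norm by default

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
print(approximation_error(A, [0, 3, 7]))
```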
Criteria for evaluating algorithms include their run time and memory requirements, the number of passes over the data, and the algorithm accuracy. The accuracy is a measure of the error of approximating the entire data matrix as a linear combination of the selection. We review some classical and recent algorithms for unsupervised feature selection.

2.1 Related work in numerical linear algebra

**Businger and Golub QRP.** The pivoted QR algorithm was introduced by Businger and Golub [9, 10]. We discuss it in detail in Section 3. It requires \( k \) passes for selecting \( k \) features, and its run time is \( 4kmn - 2k^2(m+n) + 4k^3/3 \). A recent study [11] experimentally compares the accuracy of the QRP as a feature selection algorithm to some recently proposed state-of-the-art algorithms. Even though the accuracy of the QRP is somewhat below that of the other algorithms, the results are quite similar. (The only exception was the performance on the Kahan matrix, where the QRP was much less accurate.)

**Gu and Eisenstat.** The algorithm of Gu and Eisenstat [1] was considered the most accurate prior to the work on randomized algorithms that started with [12]. It computes an initial selection (typically by using the QRP), and then repeatedly swaps selected columns with unselected columns. The swapping is done so that the product of singular values of the matrix formed by the selected columns increases with each swap. The algorithm requires random access memory, and it is not clear how to implement it as a series of passes over the data. Its run time is \( O(m^2n) \).

2.2 Randomized algorithms

Randomized algorithms come with a small probability of failure, but otherwise appear to be more accurate than the classical deterministic algorithms. Frieze et al. [12, 13] have proposed a randomized algorithm that requires only two passes over the data. This assumes that the norms of all matrix columns are known in advance, and guarantees only an additive approximation error. We discuss the run time and the accuracy of several generalizations that followed their studies.

**Volume sampling.** Deshpande et al. [14] have studied a randomized algorithm that samples \( k \)-tuples of columns with probability proportional to their "volume". The volume is the square of the product of the singular values of the submatrix formed by these columns. They show that this sampling scheme gives rise to a randomized algorithm that computes the best possible solution in the Frobenius norm. They describe an efficient \( O(kmn) \) randomized algorithm that can be implemented in \( k \) passes and approximates this sampling scheme. These results were improved (in terms of accuracy) in [15], by computing the exact volume sampling. The resulting algorithm is slower but much more accurate. Further improvements to the speed of volume sampling in [6] have reduced the run time complexity to \( O(kmn^2) \). As shown in [15, 6], this optimal (in terms of accuracy) algorithm can also be derandomized, with a deterministic run time of \( O(km^3n) \).

**Leverage sampling.** The idea behind leverage sampling is to randomly select features with probability proportional to their "leverage". Leverage values are norms of the rows of the \( n \times k \) right eigenvector matrix in the truncated SVD expansion of the data matrix. See [16, 2]. In particular, the "two stage" algorithm described in [2] requires only 2 passes if the leverage values are known. Its run time is dominated by the calculation of the leverage values.
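As a rough illustration of the leverage values just described, here is a short NumPy sketch (ours, not code from the cited works; it uses the common convention of squared row norms of the top-\(k\) right singular vectors, and a simplified without-replacement sampling step, whereas [2] specifies its own sampling scheme):

```python
import numpy as np

def leverage_scores(A, k):
    """Leverage of each column of A: squared row norms of the
    n x k matrix of top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k].T                       # n x k right singular vectors
    return (Vk ** 2).sum(axis=1)        # one leverage value per column

def sample_by_leverage(A, k, seed=0):
    """Pick k columns with probability proportional to leverage."""
    p = leverage_scores(A, k)
    rng = np.random.default_rng(seed)
    return rng.choice(A.shape[1], size=k, replace=False, p=p / p.sum())
```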
To the best of our knowledge the currently best algorithms for estimating leverage values are randomized [17, 18]. One run takes 2 passes and \( O(mn \log n + m^3) \) time. This is dominated by the \( mn \) term, and [18] show that it can be further reduced to the number of nonzero values. We note that these algorithms do not compute reliable leverage values in 2 passes, since they may fail with a relatively high (e.g., 1/3) probability. As stated in [18], "the success probability can be amplified by independent repetition and taking the coordinate-wise median". Therefore, accurate estimates of leverage can be computed in a constant number of passes, but the constant would be larger than 2.

Input: The features (matrix columns) \( x_1, \ldots, x_n \), and an integer \( k \leq n \).
Output: An ordered list \( S \) of \( k \) indices.
1. In the initial pass compute:
   1.1. For \( i = 1, \ldots, n \) set \( \tilde{x}_i = x_i \), \( v_i = \|\tilde{x}_i\|^2 \). (\( \tilde{x}_i \) is the error vector of approximating \( x_i \) by a linear combination of the columns in \( S \).) At the end of the pass set \( z_1 = \arg\max_i v_i \), and initialize \( S = (z_1) \).
2. For each pass \( j = 2, \ldots, k \):
   2.1. For \( i = 1, \ldots, n \) set \( v_i \) to the square error of approximating \( x_i \) by a linear combination of the columns in \( S \). At the end of pass \( j \) set \( z_j = \arg\max_i v_i \), and add \( z_j \) to \( S \).

Figure 1: The main steps of the QRP algorithm.

2.3 Randomized ID

In a recent survey [19] Halko et al. describe how to compute a QR factorization using their randomized Interpolative Decomposition. Their approach produces an accurate \( Q \) as a basis of the data matrix column space. They propose an efficient "row extraction" method for computing \( R \) that works when \( k \), the desired rank, is similar to the rank of the data matrix. Otherwise the row extraction introduces unacceptable inaccuracies, which led Halko et al. to recommend an alternative \( O(kmn) \) technique in such cases.

2.4 Our result, the complexity of the IQRP

The savings that the IQRP achieves depend on the data. The algorithm takes as input an integer value \( l \), the length of a temporary buffer. As explained in Section 4, our implementation requires temporary storage of \( l+1 \) columns, which takes \( (l+1)m \) floats. The following values depend on the data: the number of passes \( p \), the number of IO-passes \( q \) (explained below), and a unit cost of orthogonalization \( c \) (see Section 4.3). In terms of \( l \) and \( c \) the run time is \( 2mn + 4mnc + 4mlk \). Our experiments show that for typical datasets the value of \( c \) is below \( k \). For \( l \approx k \) our experiments show that the number of passes is typically much smaller than \( k \).

The number of passes is even smaller if one considers IO-passes. To explain what we mean by IO-passes consider as an example a situation where the algorithm runs three passes over the data. In the first pass all \( n \) features are accessed. In the second, only two features are accessed. In the third, only one feature is accessed. In this case we take the number of IO-passes to be \( q = 1 + \frac{3}{n} \). We believe that \( q \) is a relevant measure of the algorithm's pass complexity when skipping is cheap, so that the cost of a pass over the data is the amount of data that needs to be read.
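A throwaway Python helper (ours; the names are illustrative) makes the IO-pass count explicit: it is simply the total number of columns actually read across all passes, normalized by \( n \).

```python
def io_passes(columns_read_per_pass, n):
    """IO-passes q: total columns actually read, divided by n.
    For the example above, [n, 2, 1] gives q = 1 + 3/n."""
    return sum(columns_read_per_pass) / n

n = 1000
print(io_passes([n, 2, 1], n))   # 1.003
```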
3 The Businger Golub algorithm (QRP)

In this section we describe the QRP [9, 10], which forms the basis of the IQRP. The main steps are described in Figure 1. There are two standard implementations of Step 2.1 in Figure 1. The first is by means of "Modified Gram-Schmidt" (e.g., [9]), and the second is by Householder orthogonalization (e.g., [9]). Both methods require approximately the same number of flops, but error analysis (see [9]) shows that the Householder approach is significantly more stable.

3.1 Memory-efficient implementations

The implementations shown in Figure 2 update the memory where the matrix \( A \) is stored. Specifically, \( A \) is overwritten by the \( R \) component of the QR factorization. Since we are not interested in \( R \), overwriting \( A \) may not be acceptable. The procedure shown in Figure 3 does not overwrite \( A \), but it is more costly. The flops count is dominated by Steps 1 and 2, which cost at most \( 4(j-1)mn \) at pass \( j \). Summing up for \( j = 1, \ldots, k \) this gives a total of approximately \( 2k^2mn \) flops.

4 The IQRP algorithm

In this section we describe our main result: the improved QRP. The algorithm maintains three ordered lists of columns: the list \( F \) is the input list containing all columns; the list \( S \) contains columns that have already been selected; the list \( L \) is of size \( l \), where \( l \) is a user defined parameter. For each column \( x_i \) in \( F \) the algorithm maintains an integer value \( r_i \) and a real value \( v_i \). These values can be kept in core or in secondary memory. They are defined as follows:

$$r_i \leq |S|, \quad v_i = v_i(r_i) = \|x_i - Q_{r_i}Q_{r_i}^T x_i\|^2 \quad (1)$$

where \( Q_{r_i} = (q_1, \ldots, q_{r_i}) \) is an orthonormal basis of the first \( r_i \) columns in \( S \). Thus, \( v_i(r_i) \) is the (squared) error of approximating \( x_i \) with the first \( r_i \) columns in \( S \).

In each pass the algorithm identifies the \( l \) candidate columns \( x_i \) corresponding to the \( l \) largest values of \( v_i(|S|) \). That is, the \( v_i \) values are computed as the error of predicting each candidate by all columns currently in \( S \). The identified \( l \) columns with the largest \( v_i(|S|) \) are stored in the list \( L \). In addition, the value of the \( l+1 \)'th largest \( v_i(|S|) \) is kept as the constant \( B_F \). Thus, after a pass is terminated the following condition holds:

$$v_{\alpha}(r_{\alpha}) \leq B_F \quad \text{for all } x_{\alpha} \in F \setminus L. \quad (2)$$

The list \( L \) and the value \( B_F \) can be calculated in one pass using a binary heap data structure, at a cost of at most \( n \log(l+1) \) comparisons. See [20, Chapter 9]. The main steps of the algorithm are described in Figure 4.

Details of Steps 2.0, 2.1 of the IQRP. The threshold value \( T \) is defined by:

$$T = \begin{cases} -\infty & \text{if the heap is not full,} \\ \text{top of the heap} & \text{if the heap is full.} \end{cases} \quad (3)$$

Input: The matrix columns (features) \( x_1, \ldots, x_n \), and an integer \( k \leq n \).
Output: An ordered list \( S \) of \( k \) indices.
1. (The initial pass over \( F \).)
   1.0. Create a min-heap of size \( l+1 \). In one pass go over \( x_i, i = 1, \ldots, n \):
   1.1. Set \( v_i(0) = \|x_i\|^2, r_i = 0 \). Fill the heap with the candidates corresponding to the \( l+1 \) largest \( v_i(0) \).
   1.2. At the end of the pass: Set \( B_F \) to the value at the top of the heap. Set \( L \) to the heap content excluding the top element. Add to \( S \) as many candidates from \( L \) as possible using \( B_F \).
2. Repeat until \( S \) has \( k \) candidates:
   2.0. Create a min-heap of size \( l+1 \). Let \( T \) be defined by (3). In one pass go over \( x_i, i = 1, \ldots, n \):
   2.1. Skip \( x_i \) if \( v_i(r_i) \leq T \). Otherwise update \( v_i, r_i \), and the heap.
   2.2. At the end of the pass: Set \( B_F = T \). Set \( L \) to the heap content excluding the top element. Add to \( S \) as many candidates from \( L \) as possible using \( B_F \).

Figure 4: The main steps of the IQRP algorithm.

Thus, when the heap is full, \( T \) is the value of \( v \) associated with the \( l+1 \)'th largest candidate encountered so far.
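The interplay of the threshold \( T \), the min-heap, and the cached residuals \( v_i(r_i) \) is the core of the algorithm. The following Python sketch of a single pass (our illustration under simplifying assumptions, not the authors' implementation: it keeps an explicit in-core orthonormal basis \( Q \) of the selected columns and recomputes stale residuals by a full projection rather than incremental Gram-Schmidt or Householder updates) mirrors Steps 2.0-2.2 of Figure 4.

```python
import heapq
import numpy as np

def iqrp_pass(A, S, r, v, Q, l):
    """One IQRP pass (Steps 2.0-2.2), illustrative sketch.

    A: m x n data; S: list of selected indices; r[i], v[i]: cached
    state per equation (1); Q: m x |S| orthonormal basis of the
    selected columns; l: buffer size. Returns (L, B_F). Assumes at
    least l+1 unselected columns remain.
    """
    heap = []                               # min-heap of (v_i(|S|), i), size <= l+1
    for i in range(A.shape[1]):
        if i in S:
            continue
        T = heap[0][0] if len(heap) == l + 1 else -np.inf   # threshold (3)
        if v[i] <= T:                       # skipping rule, Step 2.1 / A.1
            continue
        if r[i] < Q.shape[1]:               # stale residual: bring it up to date
            res = A[:, i] - Q @ (Q.T @ A[:, i])
            v[i], r[i] = float(res @ res), Q.shape[1]
        heapq.heappush(heap, (v[i], i))     # conditional insert: push, then
        if len(heap) > l + 1:               # evict the smallest if oversize
            heapq.heappop(heap)
    B_F = heap[0][0]                        # (l+1)'th largest value seen
    L = sorted(heap[1:], reverse=True)      # the l best candidates, largest first
    return L, B_F
```

A driver loop would then move candidates from \( L \) into \( S \) (extending \( Q \) accordingly) as long as the pivot value stays at or above \( B_F \), which is the role of Figure 6 below.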
The details of Step 2.1 are shown in Figure 5. Step A.2.2.1 can be computed using either Gram-Schmidt or Householder orthogonalization, as shown in Figures 2 and 3.

A.1. If \( v_i(r_i) \leq T \) skip \( x_i \).
A.2. Otherwise check \( r_i \):
   A.2.1. If \( r_i = |S| \) conditionally insert \( x_i \) into the heap.
   A.2.2. If \( r_i < |S| \):
      A.2.2.1. Compute \( v_i(|S|) \). Set \( r_i = |S| \).
      A.2.2.2. Conditionally insert \( x_i \) into the heap.

Figure 5: Details of Step 2.1.

Details of Steps 1.2 and 2.2 of the IQRP. Here we are given the list \( L \) and the value of \( B_F \) satisfying (2). To move candidates from \( L \) to \( S \) we run the QRP on \( L \) as long as the pivot value is above \( B_F \). (The pivot value is the largest value of \( v_i(|S|) \) in \( L \).) The details are shown in Figure 6.

B.1. \( z = \arg\max_{x_i \in L} v_i(|S|) \)
B.2. If \( v_z(|S|) < B_F \), we are done exploiting \( L \).
B.3. Otherwise:
   B.3.1. Move \( z \) from \( L \) to \( S \).
   B.3.2. Update the remaining candidates in \( L \) using either the Gram-Schmidt or the Householder procedure. For example, with Householder:
      B.3.2.1. Create the Householder matrix \( h_j \) from \( x_z \).
      B.3.2.2. For all \( x \) in \( L \) replace \( x \) with \( h_j x \).

Figure 6: Details of Steps 1.2 and 2.2.

4.1 Correctness

In this section we show that the IQRP computes the same selection as the QRP. The proof is by induction on \( j \), the number of columns in \( S \).

For \( j = 0 \) the QRP selects the column with \( v_j = \max_i \|x_i\|^2 \). The IQRP selects \( v'_j \) as the largest among the \( l \) largest values in \( F \). Therefore \( v'_j = \max_{x_i \in L} \|x_i\|^2 = \max_{x_i \in F} \|x_i\|^2 = v_j \).

Now assume that for \( j = |S| \) the QRP and the IQRP select the same columns in \( S \) (this is the inductive assumption). Let \( v_j(|S|) \) be the value of the \( j+1 \)'th selection by the QRP, and let \( v'_j(|S|) \) be the value of the \( j+1 \)'th selection by the IQRP. We need to show that \( v'_j(|S|) = v_j(|S|) \). The QRP selection satisfies: \( v_j(|S|) = \max_{x_i \in F} v_i(|S|) \).

Observe that if \( x_i \in L \) then \( r_i = |S| \). (Initially \( L \) is created from the heap elements that have \( r_i = |S| \). Once \( S \) is increased in Step B.3.1 the columns in \( L \) are updated according to B.3.2, so that they all satisfy \( r_i = |S| \).) The IQRP selection satisfies:

$$v'_j(|S|) = \max_{x_i \in L} v_i(|S|) \quad \text{and} \quad v'_j(|S|) \geq B_F. \quad (4)$$

Additionally, for all \( x_{\alpha} \in F \setminus L \):

$$B_F \geq v_{\alpha}(r_{\alpha}) \geq v_{\alpha}(|S|). \quad (5)$$

This follows from (2), the observation that \( v_{\alpha}(r) \) is monotonically decreasing in \( r \), and \( r_{\alpha} \leq |S| \).
Therefore, combining (4) and (5) we get

$$v'_j(|S|) = \max_{x_i \in F} v_i(|S|) = v_j(|S|),$$

which completes the proof by induction.

4.2 Termination

To see that the algorithm terminates it is enough to observe that at least one column is selected in each pass. Indeed, the condition at Step B.2 in Figure 6 cannot hold the first time B.1 is executed on a freshly computed \( L \): the value of \( B_F \) is the \( l+1 \)'th largest \( v_i(|S|) \), while the maximum at B.1 is taken over the \( l \) largest \( v_i(|S|) \).

4.3 Complexity

The formulas in this section describe the complexity of the IQRP in terms of the following:

- \( n \): the number of features (matrix columns)
- \( m \): the number of objects (matrix rows)
- \( k \): the number of selected features
- \( q \): the number of IO-passes
- \( p \): the number of passes
- \( c \): a unit cost of orthogonalizing \( F \)
- \( l \): a user provided parameter, \( 1 \leq l \leq n \)

The value of \( c \) depends on the implementation of Step A.2.2.1 in Figure 5. We write \( c_{\text{memory}} \) for the value of \( c \) in the memory-efficient implementation, and \( c_{\text{flops}} \) for the faster implementation (in terms of flops). We use the following notation: at pass \( j \) the number of selected columns is \( k_j \), and the number of columns that were not skipped in Step 2.1 of the IQRP (same as Step A.1) is \( n_j \).

The number of flops in the memory-efficient implementation can be shown to be

$$\text{flops}_{\text{memory}} = 2mn + 4mnc + 4mlk, \quad \text{where} \quad c = \sum_{j=2}^{p} \frac{n_j}{n} \sum_{j'=1}^{j-1} k_{j'}. \quad (6)$$

Observe that \( c \leq k^2/2 \), so that for \( l < n \) the worst case behavior is the same as that of the memory-optimized QRP algorithm, which is \( O(k^2mn) \). We show in Section 5 that the typical run time is much faster. In particular, the dependency on \( k \) appears to be linear and not quadratic.

For the faster implementation that overwrites the input it can be shown that:

$$\text{flops}_{\text{time}} = 2mn + 4m \sum_{i=1}^{n} \tilde{r}_i, \quad \text{where } \tilde{r}_i \text{ is the value of } r_i \text{ at termination}. \quad (7)$$

Since \( \tilde{r}_i \leq k - 1 \) it follows that \( \text{flops}_{\text{time}} \leq 4kmn \). Thus, the worst case behavior is the same as that of the flops-efficient QRP algorithm.

Memory. The memory-efficient implementation requires \( km \) in-core floats, plus additional memory for the heap, which can be reused for the list \( L \). The additional memory needed to store and manipulate \( v_i, r_i \) for \( i = 1, \ldots, n \) is roughly \( 2n \) floats. Observe that these memory locations are accessed consecutively, and can be efficiently stored and manipulated out-of-core. The data itself, the matrix \( A \), is stored out-of-core. When the method of Figure 3 is used in A.2.2.1, these matrix values are read-only.

IO-passes. We wish to distinguish between a pass where the entire data is accessed and a pass where most of the data is skipped. This suggests the following definition for the number of IO-passes:

$$q = 1 + \frac{1}{n} \sum_{j=2}^{p} n_j.$$

Number of floating point comparisons. Testing for the skipping and manipulating the heap requires floating point comparisons. The number of comparisons is \( n(p - 1 + (q - 1) \log_2(l + 1)) \). This does not affect the asymptotic complexity since the number of flops is much larger.
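To make equation (6) and the IO-pass count concrete, here is a small Python helper (ours; the names and the example statistics are illustrative assumptions) that turns per-pass statistics \( (n_j, k_j) \) into the quantities above.

```python
def iqrp_costs(m, n, l, k, passes):
    """passes: list of (n_j, k_j) pairs for passes j = 1..p, where
    n_j = columns not skipped and k_j = columns selected in pass j."""
    # Unit orthogonalization cost c, per equation (6):
    # sum over j = 2..p of (n_j / n) * (k_1 + ... + k_{j-1}).
    c = sum(
        (n_j / n) * sum(kk for _, kk in passes[:j])
        for j, (n_j, _) in enumerate(passes[1:], start=1)
    )
    flops_memory = 2 * m * n + 4 * m * n * c + 4 * m * l * k
    q = 1 + sum(n_j for n_j, _ in passes[1:]) / n   # IO-passes
    return c, flops_memory, q

# Hypothetical run: 3 passes, most columns skipped after the first.
print(iqrp_costs(m=1000, n=5000, l=50, k=50,
                 passes=[(5000, 20), (400, 20), (30, 10)]))
```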
5 Experimental results

We describe results on several commonly used datasets. "Day1", with \( m = 20,000 \) and \( n = 3,231,957 \), is part of the "URL-reputation" collection at the UCI Repository. "thrombin", with \( m = 1,909 \) and \( n = 139,351 \), is the data used in KDD Cup 2001. "Amazon", with \( m = 1,500 \) and \( n = 10,000 \), is part of the "Amazon Commerce reviews set" and was obtained from the UCI Repository. "gisette", with \( m = 6,000 \) and \( n = 5,000 \), was used in the NIPS 2003 selection challenge.

Measurements. We vary \( k \) and report the following. \( \text{flops}_{\text{memory}} \) and \( \text{flops}_{\text{time}} \) are the ratios between the number of flops used by the IQRP and \( kmn \), for the memory-efficient orthogonalization and the time-efficient orthogonalization respectively. #passes is the number of passes needed to select \( k \) features. #IO-passes is discussed in Sections 2.4 and 4.3; it is the equivalent number of times that the entire data is read. Thus, the ratio between the number of IO-passes and the number of passes is the fraction of the data that was not skipped.

Run time. The number of flops of the QRP is between \( 2kmn \) and \( 4kmn \). We describe experiments with the list size \( l \) taken as \( l = k \). For Day1 the number of flops beats the QRP by a factor of more than 100. For the other datasets the results are not as impressive. There are still significant savings for small and moderate values of \( k \) (say up to \( k = 600 \)), but for larger values the savings are smaller. Most interesting is the observation that the memory-efficient implementation of Step 2.1 is not much slower than the optimization for time. Recall that the memory-optimized QRP is \( k \) times slower than the time-optimized QRP. In our experiments they differ by no more than a factor of 4.

Number of passes. We describe experiments with the list size \( l \) taken as \( l = k \), and also with \( l = 100 \) regardless of the value of \( k \). The QRP takes \( k \) passes for selecting \( k \) features. For the Day1 dataset we observed a reduction by a factor of between 50 and 250 in the number of passes. For IO-passes, the reduction goes up to a factor of almost 1000. Similar improvements are observed for the Amazon and the gisette datasets. For thrombin it is slightly worse, typically a reduction by a factor of about 70. The number of IO-passes is always significantly below the number of passes, giving a reduction by factors of up to 1000. For the recommended setting of \( l = k \) we observed the following: in absolute terms, the number of passes was below 10 for most of the data, and the number of IO-passes was below 2 for most of the data.
6 Concluding remarks

This paper describes a new algorithm for unsupervised feature selection. Based on the experiments we recommend using the memory-efficient implementation and setting the parameter \( l = k \). As explained earlier, the algorithm maintains 2 numbers for each column, and these can also be kept in-core. This gives a \( 2(km + n) \) memory footprint. Our experiments show that for typical datasets the number of passes is significantly smaller than \( k \). In situations where data can be skipped, the notion of IO-passes may be more accurate than passes: IO-passes indicate the amount of data that was actually read and not skipped.

The performance of the IQRP depends on the data. Therefore, the improvements that we observe can also be viewed as an indication that typical datasets are "easy". This appears to suggest that worst case analysis should not be considered as the only criterion for evaluating feature selection algorithms.

Comparing the IQRP to the current state-of-the-art randomized algorithms reviewed in Section 2.2, we observe that the IQRP is competitive in terms of the number of passes and appears to outperform these algorithms in terms of the number of IO-passes. On the other hand, it may be less accurate.
{"Source-Url": "http://papers.nips.cc/paper/4933-pass-efficient-unsupervised-feature-selection.pdf", "len_cl100k_base": 6730, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 37902, "total-output-tokens": 8860, "length": "2e12", "weborganizer": {"__label__adult": 0.0003802776336669922, "__label__art_design": 0.00042629241943359375, "__label__crime_law": 0.0005927085876464844, "__label__education_jobs": 0.0015821456909179688, "__label__entertainment": 0.00013530254364013672, "__label__fashion_beauty": 0.00027489662170410156, "__label__finance_business": 0.0004935264587402344, "__label__food_dining": 0.0004360675811767578, "__label__games": 0.0008597373962402344, "__label__hardware": 0.0016956329345703125, "__label__health": 0.0013179779052734375, "__label__history": 0.0004055500030517578, "__label__home_hobbies": 0.0001766681671142578, "__label__industrial": 0.00077056884765625, "__label__literature": 0.0004024505615234375, "__label__politics": 0.0005106925964355469, "__label__religion": 0.0007009506225585938, "__label__science_tech": 0.372314453125, "__label__social_life": 0.00015735626220703125, "__label__software": 0.013275146484375, "__label__software_dev": 0.60205078125, "__label__sports_fitness": 0.00040435791015625, "__label__transportation": 0.0006313323974609375, "__label__travel": 0.0002238750457763672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28860, 0.03236]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28860, 0.53348]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28860, 0.88193]], "google_gemma-3-12b-it_contains_pii": [[0, 3260, false], [3260, 7704, null], [7704, 11576, null], [11576, 13705, null], [13705, 16252, null], [16252, 20021, null], [20021, 24468, null], [24468, 25076, null], [25076, 28860, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3260, true], [3260, 7704, null], [7704, 11576, null], [11576, 13705, null], [13705, 16252, null], [16252, 20021, null], [20021, 24468, null], [24468, 25076, null], [25076, 28860, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28860, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28860, null]], "pdf_page_numbers": [[0, 3260, 1], [3260, 7704, 2], [7704, 11576, 3], [11576, 13705, 4], [13705, 16252, 5], [16252, 20021, 6], [20021, 24468, 7], [24468, 25076, 8], [25076, 28860, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28860, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
da50071996b45ae8cba9bf9c1dd234eaa902dc99
The KInfoCenter

Mike McBride

# Contents

1 The KInfoCenter
  1.1 Starting the KInfoCenter
  1.2 The KInfoCenter Screen
  1.3 The KInfoCenter Toolbar
    1.3.1 Module Help button
    1.3.2 Help Menu button
  1.4 Exiting The Information Center
2 The Default KInfoCenter Modules
  2.1 About System Module
  2.2 Memory Information Module
    2.2.1 Memory Types
    2.2.2 Memory Information Module
  2.3 Energy Information Module
  2.4 Device Information Module
    2.4.1 Device Viewer
      2.4.1.1 Information Panel
      2.4.1.2 UDI Information
    2.4.2 Interrupt Request (IRQ) Information Module
    2.4.3 DMA Channel Information Module
    2.4.4 USB Controller/USB Devices Information Module
    2.4.5 Input/Output Port Information Module
    2.4.6 PCI-bus/Installed PCI Cards Information Module
  2.5 Network Information Module
    2.5.1 Network Interfaces Information Module
    2.5.2 Samba Status Information Module
      2.5.2.1 Exports
      2.5.2.2 Imports
      2.5.2.3 Log
      2.5.2.4 Statistics
      2.5.2.5 Section Author
  2.6 Graphical Information Module
    2.6.1 Wayland Information Module
    2.6.2 X Server Information Module
    2.6.3 OpenGL Information Module
3 Credits and License

Abstract

This documentation describes Plasma's information center.

Chapter 1  The KInfoCenter

The KInfoCenter provides you with a centralized and convenient overview of your system and desktop environment. The information center is made up of multiple modules. Each module is a separate application, but the information center organizes all of these programs into a convenient location. The next section details the use of the information center itself. For information on individual modules, please see Default KInfoCenter Modules.

1.1 Starting the KInfoCenter

The KInfoCenter can be started in three ways:

1. By selecting Applications → System → KInfoCenter from the application launcher in the panel.
2. By pressing Alt+F2 or Alt+Space. This will bring up KRunner. Type kinfocenter, and press Enter.
3. By typing kinfocenter & at any command prompt.

All three of these methods are equivalent, and produce the same result.

1.2 The KInfoCenter Screen

When you start the information center, you are presented with a window that can be divided into three functional parts. Across the top is a toolbar. The toolbar provides quick access to most of KInfoCenter's features, such as getting help on the current module and the Help menu.
Along the left hand side is a column with a filter field at the top. This is where you choose which module to investigate. To navigate through the various KCM modules, left click on a module in the tree view. You can also use the arrow keys to scroll through the KCMs; pressing Enter will select the module. The module will then appear in the main panel of the KInfoCenter window. Some items within the tree view are categories; you can left click or press Enter to expand and collapse these items. This will show the modules under the category.

You can right click on the module listing to show the following options:

- **Collapse All Categories**: Collapses the tree to show only top level modules and categories.
- **Expand All Categories**: Expands the tree to show modules.
- **Clear Search**: This will clear any filter you have applied on the module listing via the search box.

The main panel shows you the system information about the selected module.

1.3 The KInfoCenter Toolbar

This next section gives you a brief description of what each toolbar item does.

1.3.1 Module Help button

This button opens KHelpCenter with the help page for the current information module.

1.3.2 Help Menu button

KInfoCenter has the common KDE Help menu items; for more information read the section about the Help Menu in the KDE Fundamentals.

1.4 Exiting The Information Center

You can exit the info center in one of two ways:

- Type Ctrl+Q on the keyboard.
- Click on the Close button in the frame surrounding the info center.

Chapter 2  The Default KInfoCenter Modules

2.1 About System Module

This page shows a brief summary about your system, i.e. your distribution, KDE Plasma version, KDE Frameworks version, Qt version, kernel version and OS type; and, in the hardware section, information about processors, memory and graphics processor.

Use the information on this page if you ask for help in support channels or report a bug at KDE's bug tracker.

2.2 Memory Information Module

This module displays the current memory usage. It is updated constantly, and can be very useful for pinpointing bottlenecks when certain applications are executed.

2.2.1 Memory Types

The first thing you must understand is that there are two types of 'memory' available to the operating system and the programs that run within it.

The first type is called physical memory. This is the memory located within the memory chips in your computer. This is the RAM (Random Access Memory) you bought when you purchased your computer.

The second type of memory is called virtual or swap memory. This block of memory is actually space on the hard drive. The operating system reserves a space on the hard drive for 'swap space'. The operating system can use this virtual memory (or swap space) if it runs out of physical memory. The reason this is called 'swap' memory is that the operating system takes some data that it doesn't think you will need for a while, and saves it to disk in this reserved space. It then loads the data you need right now. It has 'swapped' the unneeded data for the data you need at the moment. Virtual or swap memory is not as fast as physical memory, so operating systems try to keep data (especially often used data) in physical memory.

The total memory is the combined total of physical memory and virtual memory.
2.2.2 Memory Information Module

This window is divided into a top and a bottom section.

The top section shows you the total physical memory, total free physical memory, shared memory, and buffered memory. All four values are represented as the total number of bytes, and as the number of megabytes (1 megabyte = slightly more than 1,000,000 bytes).

The bottom section shows you three graphs:

- **Total Memory** (this is the combination of physical and virtual memory).
- **Physical Memory**
- **Virtual Memory, or Swap Space**

The grey areas are free, and the blue and green areas are used.

**Tip**: The exact values of each type of memory are not critical, and they change regularly. When you evaluate this page, look at trends. Does your computer have plenty of free space (grey areas)? If not, you can increase the swap size or the physical memory. Also, if your computer seems sluggish: is your physical memory full, and does the hard drive always seem to be running? This suggests that you do not have enough physical memory, and your computer is relying on the slower virtual memory for commonly used data. Increasing your physical memory will improve the responsiveness of your computer.

2.3 Energy Information Module

This module provides information about CPU wakeups, battery percentage and consumption over a user defined history, and detailed information about the battery.

2.4 Device Information Module

Device Information is a device viewer module. It shows all relevant devices that are present within your PC. It has three sections: a device viewer, an information panel, and a UDI listing for the currently selected device.

2.4.1 Device Viewer

The device viewer displays all the current devices detected on your PC in a tree. The main topics at the beginning of the tree are the device categories; left click on a collapsed category to expand it, and vice versa to collapse it. To display information about a device, left click on the device in the viewer; the information will be displayed in the information panel on the right. You can right click on the device viewer to show the following options:

- **Collapse All**: Collapses the tree to show only the main categories.
- **Expand All**: Expands the tree to show all the children devices.
- **Show All Devices**: Shows all the categories, whether or not devices are present in them.
- **Show Relevant Devices**: Only shows categories that have devices present.

The default display is collapsed, showing only relevant devices. Please note that the devices shown in the device listing are not all devices within your PC; they are just the devices that have been detected via Solid.

The device viewer can show the following devices:

- **Processors**: These are your computer's CPUs (Central Processing Units).
- **Storage Drives**: Devices that are used to store your PC's files and data.
- **Network Interfaces**: Devices that allow you to connect to a network or to another PC.
- **Audio Interfaces**: Devices that allow your PC to play sound. They are split into 2 categories, the ALSA and OSS sound architectures.
- **Video Devices**: Devices that allow you to stream live video.
- **Serial Devices**: Devices that are connected to the serial port of your PC.
- **Smart Card Devices**: Devices that are smart card readers.
- **Digital Video Broadcasting Devices**: Devices that use the open standards for digital television.
- **Device Buttons**: These are buttons that are present on your PC or external devices.
- **Batteries**: These are battery devices that are plugged into your laptop.
- **AC Adapters**: These devices will be present when you plug in your AC adapter.
- **Multimedia Players**: Devices that play media files, like a music player.
- **Camera Devices**: These are digital cameras that are connected to your PC.

**NOTE**: Video devices do not include your video card adapter.

2.4.1.1 Information Panel

The information panel is where device information is shown when you select a device. The first two information topics are always:

- **Product**: The name of the device.
- **Vendor**: The vendor's name for the device.

The following information topics depend on the device chosen. They are labeled with easy to understand names. The information labels can be selected and copied from.

**NOTE**: The processor **Max Speed** and **Supported Instruction Sets** topics are usually not set by Solid.

**NOTE**: Top categories in the device listing do not show any information.

2.4.1.2 UDI Information

The bottom information panel shows the currently selected device's UDI. This is the unique device identifier. All labels can be selected and copied from.

2.4.2 Interrupt Request (IRQ) Information Module

This page displays information about the Interrupt Request lines in use, and the devices that use them. An IRQ is a hardware line used in a PC by (ISA bus) devices like keyboards, modems, sound cards, etc., to send interrupt signals to the processor to tell it that the device is ready to send or accept data. Unfortunately, there are only sixteen IRQs (0-15) available in the i386 (PC) architecture for sharing among the various ISA devices.

Many hardware problems are the result of IRQ conflicts, when two devices try to use the same IRQ, or software is misconfigured to use a different IRQ from the one a device is actually configured for.

**NOTE**: The exact information displayed is system-dependent. On some systems, IRQ information cannot be displayed yet.

On Linux®, this information is read from `/proc/interrupts`, which is only available if the `/proc` pseudo-filesystem is compiled into the kernel. The first column is the IRQ number. The second column is the number of interrupts that have been received since the last reboot. The third column shows the type of interrupt. The fourth identifies the device assigned to that interrupt.

The user cannot modify any settings on this page.

2.4.3 DMA Channel Information Module

This page displays information about the DMA (Direct Memory Access) channels. A DMA channel is a direct connection that allows devices to transfer data to and from memory without going through the processor. Typically, i386-architecture systems (PCs) have eight DMA channels (0-7).

**NOTE**: The exact information displayed is system-dependent. On some systems, DMA channel information cannot be displayed yet.

On Linux®, this information is read from `/proc/dma`, which is only available if the `/proc` pseudo-filesystem is compiled into the kernel. A list of all currently-registered (ISA bus) DMA channels that are in use is shown. The first column shows the DMA channel, and the second column shows the device which uses that channel. Unused DMA channels are not listed.

The user cannot modify any settings on this page.

2.4.4 USB Controller/USB Devices Information Module

This module allows you to see the devices attached to your USB bus(es). This module is for information only; you cannot edit any information you see here.
2.4.5 Input/Output Port Information Module

This page displays information about the I/O ports. I/O ports are memory addresses used by the processor for direct communication with a device that has sent an interrupt signal to the processor. The exchange of commands or data between the processor and the device takes place through the I/O port address of the device, which is a hexadecimal number. No two devices can share the same I/O port. Many devices use multiple I/O port addresses, which are expressed as a range of hexadecimal numbers.

**NOTE**: The exact information displayed is system-dependent. On some systems, I/O port information cannot yet be displayed.

On Linux®, this information is read from `/proc/ioports`, which is only available if the `/proc` pseudo-filesystem is compiled into the kernel. A list of all currently-registered I/O port regions that are in use is shown. The first column is the I/O port (or the range of I/O ports), and the second column identifies the device that uses these I/O ports.

The user cannot modify any settings on this page.

2.4.6 PCI-bus/Installed PCI Cards Information Module

This page displays information about the PCI bus and installed PCI cards, and other devices that use the Peripheral Component Interconnect (PCI) bus.

**NOTE**: The exact information displayed is system-dependent. On some systems, PCI information cannot yet be displayed.

On Linux®, this information is read from `/proc/pci`, which is only available if the `/proc` pseudo-filesystem is compiled into the kernel. A listing of all PCI devices found during kernel initialization, and their configuration, is shown. Each entry begins with a bus, device and function number.

The user cannot modify any settings on this page.

2.5 Network Information Module

2.5.1 Network Interfaces Information Module

This page displays information about the network interfaces installed in your computer.

**NOTE**: The exact information displayed is system-dependent. On some systems, this information cannot yet be displayed.

The user cannot modify any settings on this page.

2.5.2 Samba Status Information Module

The Samba and NFS Status Monitor is a front end to the programs `smbstatus` and `showmount`. `smbstatus` reports on current Samba connections, and is part of the suite of Samba tools, which implements the SMB (Server Message Block) protocol, also called the NetBIOS or LanManager protocol. This protocol can be used to provide printer sharing or drive sharing services on a network including machines running the various flavors of Microsoft® Windows®.

`showmount` is part of the NFS software package. NFS stands for Network File System and is the traditional UNIX® way to share folders over the network. In this case the output of `showmount -a localhost` is parsed. On some systems showmount is in `/usr/sbin`; check whether you have showmount in your PATH.

2.5.2.1 Exports

On this page you can see a big list which shows the currently active connections to Samba shares and NFS exports of your machine. The first column shows you whether the resource is a Samba (SMB) share or an NFS export. The second column contains the name of the share, and the third the name of the remote host which accesses this share. The remaining columns have a meaning only for Samba shares.

The fourth column contains the user ID of the user who accesses this share. Note that this does not have to be equal to the UNIX® user ID of this user. The same applies to the next column, which displays the group ID of the user.
Each connection to one of your shares is handled by a single process (`smbd`); the next column shows the process ID (pid) of this `smbd`. If you kill this process the connected user will be disconnected. If the remote user works from Windows®, a new process will be created as soon as this one is killed, so they will hardly notice it.

The last column shows how many files this user currently has open. You see only how many files are open just now; you don't see how many were copied or formerly opened, etc.

2.5.2.2 Imports

Here you can see which Samba and NFS shares from other hosts are mounted on your local system. The first column shows whether it is a Samba or NFS share, the second column displays the name of the share, and the third shows where it is mounted.

The mounted NFS shares should be visible on Linux® (this has been tested), and it should also work on Solaris™ (this has not been tested).

2.5.2.3 Log

This page presents the contents of your local Samba log file in a nice way. If you open this page, the list will be empty. You have to press the Update button; then the Samba log file will be read and the results displayed. Check whether the Samba log file on your system is really at the location specified in the input line. If it is somewhere else, or if it has another name, correct it. After changing the file name you have to press Update again.

Samba logs its actions according to the log level (see `smb.conf`). If the log level is 1, Samba logs only when somebody connects to your machine and when this connection is closed again. If the log level is 2, it also logs when somebody opens a file and when the file is closed again. If the log level is higher than 2, yet more stuff is logged.

If you are interested in who accesses your machine, and which files are accessed, you should set the log level to 2 and regularly create a new Samba log file (e.g. set up a cron task which once a week moves your current Samba log file into another folder or something like that). Otherwise your Samba log file may become very big.

With the four checkboxes below the big list you can decide which events are displayed in the list. You have to press Update to see the results. If the log level of your Samba is too low, you won't see everything.

By clicking on the header of a column you can sort the list by that column.

2.5.2.4 Statistics

On this page you can filter the contents of the third page for certain contents.

Let's say the Event field (not the one in the list) is set to Connection, Service/File is set to *, Host/User is set to *, Show expanded service info is disabled and Show expanded host info is disabled. If you press Update now, you will see how often a connection was opened to share * (i.e. to any share) from host * (i.e. from any host). Now enable Show expanded host info and press Update again. Now you will see, for every host which matches the wildcard *, how many connections were opened to it.

Now press Clear Results. Set the Event field to File Access, enable Show expanded service info, and press Update again. Now you will see how often every single file was accessed. If you enable Show expanded host info too, you will see how often every single user opened each file.

In the input lines Service/File and Host/User you can use the wildcards * and ? in the same way you use them at the command line. Regular expressions are not recognized.

By clicking on the header of a column you can sort the list by that column.
This way you can check out which file was opened most often, or which user opened the most files, or whatever.

2.5.2.5 Section Author

Module copyright 2000: Michael Glauche and Alexander Neundorf neundorf@kde.org

Originally written by: Michael Glauche

Currently maintained by: Alexander Neundorf neundorf@kde.org

Contributors:
- Conversion to KControl applet: Matthias Hölzer-Klüpfel hoelzer@kde.org
- Use of K3Process instead of popen, and more error checking: David Faure faure@kde.org
- Conversion to kcmodule, added tab pages 2, 3 and 4, bug fixes: Alexander Neundorf neundorf@kde.org

Documentation copyright 2000 Alexander Neundorf neundorf@kde.org

Documentation translated to docbook by Mike McBride

2.6 Graphical Information Module

When you open the modules in this section, you are presented with some information. The left hand side of the window is organized into a tree. Some of the elements have a plus sign in front of the label. Clicking this sign opens a 'submenu' related to the label. Clicking on a minus sign in front of a label hides the submenu. The right hand side of the window contains the individual values for each of the parameters on the left.

The information presented will vary depending on your setup.

**NOTE**: Some setups may not be able to determine some or all of the parameters.

You cannot change any values from this module. It is for information only.

2.6.1 Wayland Information Module

This screen is useful for getting specific information about your Wayland compositor.

2.6.2 X Server Information Module

This screen is useful for getting specific information about your X server and the current session of X.

2.6.3 OpenGL Information Module

This page displays information about the installed OpenGL implementation. OpenGL (for "Open Graphics Library") is a cross-platform, hardware independent interface for 3D graphics. GLX is the binding of OpenGL to the X Window system. DRI (Direct Rendering Infrastructure) provides hardware acceleration for OpenGL; you need a video card with a 3D accelerator and a properly installed driver for this. Read more at the official OpenGL site.

Chapter 3  Credits and License

KInfoCenter

Program copyright 1997-2001 The KInfoCenter Developers

Contributors:
- Matthias Hölzer-Klüpfel hoelzer@kde.org
- Matthias Elter elter@kde.org

Documentation copyright 2000 Mike McBride

Contributors:
- Paul Campbell paul@taniwha.com
- Helge Deller deller@kde.org
- Mark Donohoe
- Pat Dowler
- Duncan Haldane duncan@kde.org
- Steffen Hansen stefb@mip.ou.dk
- Matthias Hölzer-Klüpfel hoelzer@kde.org
- Martin R. Jones mjones@kde.org
- Jost Schenck jost@schenck.de
- Jonathan Singer jsinger@leeta.net
- Thomas Tanghus tanghus@earthling.net
- Krishna Tateneni tateneni@pluto.njcc.com
- Ellis Whitehead ewhithe@uni-freiburg.de

This documentation is licensed under the terms of the GNU Free Documentation License.

This program is licensed under the terms of the GNU General Public License.
{"Source-Url": "https://docs.kde.org/stable5/en/kinfocenter/kinfocenter/kinfocenter.pdf", "len_cl100k_base": 5342, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 29708, "total-output-tokens": 6093, "length": "2e12", "weborganizer": {"__label__adult": 0.00038313865661621094, "__label__art_design": 0.0008754730224609375, "__label__crime_law": 0.0002188682556152344, "__label__education_jobs": 0.0013179779052734375, "__label__entertainment": 0.00025916099548339844, "__label__fashion_beauty": 0.0001347064971923828, "__label__finance_business": 0.0002906322479248047, "__label__food_dining": 0.00019800662994384768, "__label__games": 0.0011510848999023438, "__label__hardware": 0.0223236083984375, "__label__health": 0.000301361083984375, "__label__history": 0.0002903938293457031, "__label__home_hobbies": 0.00025343894958496094, "__label__industrial": 0.0004863739013671875, "__label__literature": 0.0002586841583251953, "__label__politics": 0.00015044212341308594, "__label__religion": 0.0005402565002441406, "__label__science_tech": 0.04058837890625, "__label__social_life": 0.00015664100646972656, "__label__software": 0.4228515625, "__label__software_dev": 0.50634765625, "__label__sports_fitness": 0.00019299983978271484, "__label__transportation": 0.0002677440643310547, "__label__travel": 0.00019097328186035156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24148, 0.02661]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24148, 0.3722]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24148, 0.82676]], "google_gemma-3-12b-it_contains_pii": [[0, 30, false], [30, 30, null], [30, 2446, null], [2446, 2514, null], [2514, 3534, null], [3534, 5066, null], [5066, 5250, null], [5250, 7173, null], [7173, 9543, null], [9543, 11608, null], [11608, 14128, null], [14128, 16213, null], [16213, 19569, null], [19569, 21882, null], [21882, 23309, null], [23309, 24148, null]], "google_gemma-3-12b-it_is_public_document": [[0, 30, true], [30, 30, null], [30, 2446, null], [2446, 2514, null], [2514, 3534, null], [3534, 5066, null], [5066, 5250, null], [5250, 7173, null], [7173, 9543, null], [9543, 11608, null], [11608, 14128, null], [14128, 16213, null], [16213, 19569, null], [19569, 21882, null], [21882, 23309, null], [23309, 24148, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24148, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24148, null]], "pdf_page_numbers": [[0, 30, 1], [30, 30, 2], [30, 2446, 3], [2446, 2514, 4], [2514, 3534, 5], [3534, 5066, 6], [5066, 5250, 7], [5250, 7173, 8], [7173, 9543, 9], [9543, 11608, 10], [11608, 14128, 11], [14128, 16213, 12], 
[16213, 19569, 13], [19569, 21882, 14], [21882, 23309, 15], [23309, 24148, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24148, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
0efb143051cbaba508830945e65c9ada7045d91f
The GCC Quad-Precision Math Library

Short Contents

1 Typedef and constants
2 Math Library Routines
3 I/O Library Routines
GNU Free Documentation License
4 Reporting Bugs

Table of Contents

1 Typedef and constants
2 Math Library Routines
3 I/O Library Routines
  3.1 strtoflt128 — Convert from string
  3.2 quadmath_snprintf — Convert to string
GNU Free Documentation License
  ADDENDUM: How to use this License for your documents
4 Reporting Bugs

1 Typedef and constants

The following data type has been defined via `typedef`.

- `__complex128`: `__float128`-based complex number

The following macros are defined, which give the numeric limits of the `__float128` data type.

- `FLT128_MAX`: largest finite number
- `FLT128_MIN`: smallest positive number with full precision
- `FLT128_EPSILON`: difference between 1 and the next larger representable number
- `FLT128_DENORM_MIN`: smallest positive denormalized number
- `FLT128_MANT_DIG`: number of digits in the mantissa (bit precision)
- `FLT128_MIN_EXP`: smallest (most negative) exponent
- `FLT128_MAX_EXP`: largest positive exponent
- `FLT128_DIG`: number of decimal digits in the mantissa
- `FLT128_MIN_10_EXP`: smallest (most negative) decimal exponent
- `FLT128_MAX_10_EXP`: largest positive decimal exponent

The following mathematical constants of type `__float128` are defined.
- `M_Eq`: the constant e (Euler's number)
- `M_LOG2Eq`: base-2 logarithm of e
- `M_LOG10Eq`: base-10 (common, decimal) logarithm of e
- `M_LN2q`: natural logarithm of 2
- `M_LN10q`: natural logarithm of 10
- `M_PIq`: pi
- `M_PI_2q`: pi divided by two
- `M_PI_4q`: pi divided by four
- `M_1_PIq`: one over pi
- `M_2_PIq`: two over pi
- `M_2_SQRTPIq`: two over square root of pi
- `M_SQRT2q`: square root of 2
- `M_SQRT1_2q`: one over square root of 2

2 Math Library Routines

The following mathematical functions are available:

- `acosq`: arc cosine function
- `acoshq`: inverse hyperbolic cosine function
- `asinq`: arc sine function
- `asinhq`: inverse hyperbolic sine function
- `atanq`: arc tangent function
- `atanhq`: inverse hyperbolic tangent function
- `atan2q`: arc tangent function of two variables
- `cbrtq`: cube root function
- `ceilq`: ceiling value function
- `copysignq`: copy sign of a number
- `coshq`: hyperbolic cosine function
- `cosq`: cosine function
- `erfq`: error function
- `erfcq`: complementary error function
- `exp2q`: base 2 exponential function
- `expq`: exponential function
- `expm1q`: exponential minus 1 function
- `fabsq`: absolute value function
- `fdimq`: positive difference function
- `finiteq`: check finiteness of value
- `floorq`: floor value function
- `fmaq`: fused multiply and add
- `fmaxq`: determine maximum of two values
- `fminq`: determine minimum of two values
- `fmodq`: remainder value function
- `frexpq`: extract mantissa and exponent
- `hypotq`: Euclidean distance function
- `ilogbq`: get exponent of the value
- `isinfq`: check for infinity
- `isnanq`: check for not a number
- `issignalingq`: check for signaling not a number
- `j0q`: Bessel function of the first kind, order zero
- `j1q`: Bessel function of the first kind, order one
- `jnq`: Bessel function of the first kind, n-th order
- `ldexpq`: load exponent of the value
- `lgammaq`: logarithmic gamma function
- `llrintq`: round to nearest integer value
- `llroundq`: round to nearest integer value away from zero
- `logbq`: get exponent of the value
- `logq`: natural logarithm function
- `log10q`: base 10 logarithm function
- `log1pq`: compute natural logarithm of the value plus one
- `log2q`: base 2 logarithm function
- `lrintq`: round to nearest integer value
- `lroundq`: round to nearest integer value away from zero
- `modfq`: decompose the floating-point number
- `nanq`: return quiet NaN
- `nearbyintq`: round to nearest integer
- `nextafterq`: next representable floating-point number
- `powq`: power function
- `remainderq`: remainder function
- `remquoq`: remainder and part of quotient
- `rintq`: round-to-nearest integral value
- `roundq`: round-to-nearest integral value, return __float128
- `scalblnq`: compute exponent using FLT_RADIX
- `scalbnq`: compute exponent using FLT_RADIX
- `signbitq`: return sign bit
- `sincosq`: calculate sine and cosine simultaneously
- `sinhq`: hyperbolic sine function
- `sinq`: sine function
- `sqrtq`: square root function
- `tanq`: tangent function
- `tanhq`: hyperbolic tangent function
- `tgammaq`: true gamma function
- `truncq`: round to integer, towards zero
- `y0q`: Bessel function of the second kind, order zero
- `y1q`: Bessel function of the second kind, order one
- `ynq`: Bessel function of the second kind, n-th order
- `cabsq`: complex absolute value function
- `cargq`: calculate the argument
- `cimagq`: imaginary part of complex number
- `crealq`: real part of complex number
- `cacosq`: complex arc cosine function
- `cacoshq`: complex arc hyperbolic cosine function
- `casinq`: complex arc sine function
- `casinhq`: complex arc hyperbolic sine function
- `catanq`: complex arc tangent function
- `catanhq`: complex arc hyperbolic tangent function
- `ccosq`: complex cosine function
- `ccoshq`: complex hyperbolic cosine function
3 I/O Library Routines

3.1 strtoflt128 — Convert from string

The function `strtoflt128` converts a string into a `__float128` number.

Syntax:

```c
__float128 strtoflt128 (const char *s, char **sp)
```

Arguments:

- `s`: input string
- `sp`: the address of the next character in the string

If `sp` is not NULL, it is set to the address of the first character following the part of the string that has been read.

Example:

```c
#include <quadmath.h>

int main ()
{
  __float128 r;

  /* Convert the whole string; pass a char ** instead of NULL to
     obtain the address of the first unparsed character.  */
  r = strtoflt128 ("1.2345678", NULL);
  return 0;
}
```

3.2 quadmath_snprintf — Convert to string

The function `quadmath_snprintf` converts a `__float128` floating-point number into a string. It is a specialized alternative to `snprintf`, where the format string is restricted to a single conversion specifier with `Q` modifier and conversion specifier `e`, `E`, `f`, `F`, `g`, `G`, `a` or `A`, with no extra characters before or after the conversion specifier. The `%m$` or `*m$` style must not be used in the format.

Syntax:

```c
int quadmath_snprintf (char *s, size_t size, const char *format, ...)
```

Arguments:

- `s`: output string
- `size`: byte size of the string, including trailing NUL
- `format`: conversion specifier string

Note: On some targets, when supported by the C library, hooks are installed for the `printf` family of functions, so that e.g. `printf ("%Qe", 1.2Q);` works too.

Example:

```c
#include <quadmath.h>
#include <stdlib.h>
#include <stdio.h>

int main ()
{
  __float128 r;
  int prec = 20;
  int width = 46;
  char buf[128];

  r = 2.0q;
  r = sqrtq (r);
  int n = quadmath_snprintf (buf, sizeof buf, "%+-#*.20Qe", width, r);
  if ((size_t) n < sizeof buf)
    printf ("%s\n", buf);
    /* Prints: +1.41421356237309504880e+00 */
  n = quadmath_snprintf (buf, sizeof buf, "%Qa", r);
  if ((size_t) n < sizeof buf)
    printf ("%s\n", buf);
    /* Prints: 0x1.6a09e667f3bcc908b2fb1366ea96p+0 */
  n = quadmath_snprintf (NULL, 0, "%+-#46.*Qe", prec, r);
  if (n > -1)
    {
      char *str = malloc (n + 1);
      if (str)
        {
          quadmath_snprintf (str, n + 1, "%+-#46.*Qe", prec, r);
          printf ("%s\n", str);
          /* Prints: +1.41421356237309504880e+00 */
        }
      free (str);
    }
  return 0;
}
```

GNU Free Documentation License

Version 1.3, 3 November 2008

Copyright © 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. https://fsf.org/

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

0. PREAMBLE

The purpose of this License is to make a manual, textbook, or other functional and useful document free in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others.

This License is a kind of “copyleft”, which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The “Document”, below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as “you”. You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A “Modified Version” of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A “Secondary Section” is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document’s overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The “Invariant Sections” are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. The “Cover Texts” are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A “Transparent” copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not “Transparent” is called “Opaque”. 
Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The “Title Page” means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, “Title Page” means the text near the most prominent appearance of the work’s title, preceding the beginning of the body of the text. The “publisher” means any person or entity that distributes copies of the Document to the public. A section “Entitled XYZ” means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as “Acknowledgements”, “Dedications”, “Endorsements”, or “History”.) To “Preserve the Title” of such a section when you modify the Document means that it remains a section “Entitled XYZ” according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. 3. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document’s license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. 
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. 4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. C. State on the Title page the name of the publisher of the Modified Version, as the publisher. D. Preserve all the copyright notices of the Document. E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document’s license notice. H. Include an unaltered copy of this License. I. Preserve the section Entitled “History”, Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled “History” in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. J. 
Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the “History” section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. K. For any section Entitled “Acknowledgements” or “Dedications”, Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. M. Delete any section Entitled “Endorsements”. Such a section may not be included in the Modified Version. N. Do not retitle any existing section to be Entitled “Endorsements” or to conflict in title with any Invariant Section. O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version’s license notice. These titles must be distinct from any other section titles. You may add a section Entitled “Endorsements”, provided it contains nothing but endorsements of your Modified Version by various parties—for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. 5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. 
In the combination, you must combine any sections Entitled “History” in the various original documents, forming one section Entitled “History”; likewise combine any sections Entitled “Acknowledgements”, and any sections Entitled “Dedications”. You must delete all sections Entitled “Endorsements.” 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an “aggregate” if the copyright resulting from the compilation is not used to limit the legal rights of the compilation’s users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document’s Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate. 8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled “Acknowledgements”, “Dedications”, or “History”, the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License. However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. 
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it.

10. FUTURE REVISIONS OF THIS LICENSE

The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See https://www.gnu.org/copyleft/.

Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License “or any later version” applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy’s public statement of acceptance of a version permanently authorizes you to choose that version for the Document.

11. RELICENSING

“Massive Multiauthor Collaboration Site” (or “MMC Site”) means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A “Massive Multiauthor Collaboration” (or “MMC”) contained in the site means any set of copyrightable works thus published on the MMC site.

“CC-BY-SA” means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization.

“Incorporate” means to publish or republish a Document, in whole or in part, as part of another Document.

An MMC is “eligible for relicensing” if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008.

The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing.

ADDENDUM: How to use this License for your documents

To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page:

Copyright (C) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled “GNU Free Documentation License”.

If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the “with...Texts.” line with this:

with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.

If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation.

If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software.

4 Reporting Bugs

Bugs in the GCC Quad-Precision Math Library implementation should be reported via http://gcc.gnu.org/bugs/.
{"Source-Url": "https://gcc.gnu.org/onlinedocs/libquadmath.pdf", "len_cl100k_base": 6806, "olmocr-version": "0.1.50", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 40588, "total-output-tokens": 7918, "length": "2e12", "weborganizer": {"__label__adult": 0.00029778480529785156, "__label__art_design": 0.0004870891571044922, "__label__crime_law": 0.0008134841918945312, "__label__education_jobs": 0.000912189483642578, "__label__entertainment": 9.566545486450197e-05, "__label__fashion_beauty": 9.09566879272461e-05, "__label__finance_business": 0.0006670951843261719, "__label__food_dining": 0.0003154277801513672, "__label__games": 0.0008149147033691406, "__label__hardware": 0.0008177757263183594, "__label__health": 0.00022995471954345703, "__label__history": 0.00016009807586669922, "__label__home_hobbies": 7.617473602294922e-05, "__label__industrial": 0.00029754638671875, "__label__literature": 0.00034332275390625, "__label__politics": 0.0002052783966064453, "__label__religion": 0.0003287792205810547, "__label__science_tech": 0.0209197998046875, "__label__social_life": 6.723403930664062e-05, "__label__software": 0.022674560546875, "__label__software_dev": 0.94873046875, "__label__sports_fitness": 0.0001373291015625, "__label__transportation": 0.00025200843811035156, "__label__travel": 0.00011163949966430664}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30994, 0.02541]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30994, 0.33391]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30994, 0.84525]], "google_gemma-3-12b-it_contains_pii": [[0, 36, false], [36, 36, null], [36, 391, null], [391, 391, null], [391, 978, null], [978, 978, null], [978, 2299, null], [2299, 2299, null], [2299, 3916, null], [3916, 5444, null], [5444, 5908, null], [5908, 5908, null], [5908, 7414, null], [7414, 8086, null], [8086, 11094, null], [11094, 14420, null], [14420, 17738, null], [17738, 20702, null], [20702, 23831, null], [23831, 27117, null], [27117, 29551, null], [29551, 30869, null], [30869, 30994, null]], "google_gemma-3-12b-it_is_public_document": [[0, 36, true], [36, 36, null], [36, 391, null], [391, 391, null], [391, 978, null], [978, 978, null], [978, 2299, null], [2299, 2299, null], [2299, 3916, null], [3916, 5444, null], [5444, 5908, null], [5908, 5908, null], [5908, 7414, null], [7414, 8086, null], [8086, 11094, null], [11094, 14420, null], [14420, 17738, null], [17738, 20702, null], [20702, 23831, null], [23831, 27117, null], [27117, 29551, null], [29551, 30869, null], [30869, 30994, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30994, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], 
[5000, 30994, null]], "pdf_page_numbers": [[0, 36, 1], [36, 36, 2], [36, 391, 3], [391, 391, 4], [391, 978, 5], [978, 978, 6], [978, 2299, 7], [2299, 2299, 8], [2299, 3916, 9], [3916, 5444, 10], [5444, 5908, 11], [5908, 5908, 12], [5908, 7414, 13], [7414, 8086, 14], [8086, 11094, 15], [11094, 14420, 16], [14420, 17738, 17], [17738, 20702, 18], [20702, 23831, 19], [23831, 27117, 20], [27117, 29551, 21], [29551, 30869, 22], [30869, 30994, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30994, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
fcfa72fa3fa7f6fe71b31565d11d1230ffcc2329
Roni Khardon
TR-10-97
May 1997
Center for Research in Computing Technology
Harvard University
Cambridge, Massachusetts

(Research supported by ONR grant N00014-95-1-0550, and ARO grant DAAL03-92-G-0115.)

1 Introduction

This note describes the system L2Act, the options it includes, and how to use it. We assume knowledge of the general ideas behind the system [1], as well as some details of the implementation described in [2]. The system includes several on-line options and several compile options. We start by describing these, and then describe the structure of the input files and how to use the system.

2 Soft Options

These are options that can be given on the command line. A typical invocation of the program looks like

```
learntoact input_file -MAXRUNS313 -LEVELWISE1 -SIGMA0.01
```

where `learntoact` is the name of the compiled program. All arguments apart from the input file start with a minus sign, then the option name follows in capital letters, and then the option value is a number. In the description of each option we give the range of values and the (current) default value in parentheses.

2.1 General Parameters

These options control whether the program will learn, test, and in what way:

**TPRS (binary, 0)** When on, the program goes into test mode and expects a PRS in the input file instead of runs. In this mode, the second on-line parameter is the name of the file containing the runs to be tested upon.

**TESTMODE (binary, 0)** In mode 0 the program will test whether the PRS solves the test problems. In addition, the average ratio of solution lengths produced by the PRS (when solved) to those that appear in the test file will be computed. In mode 1 the program will compute the percentage of “correct classification” on the actions that appear in the test runs, thus testing normal supervised learning classification accuracy. Mode 1 has not been in use for a while.

**TLOOPS (binary, 0)** In test mode 0, the program quits trying after some bound on the number of steps. Normally, when the PRS fails it is in an infinite loop doing and undoing the same thing. In TLOOPS mode the program tests whether an action has been repeated in the last 5 steps, and if so quits. This implementation is only partial!!! It is good for BW since an action should not be repeated, but not good enough for the logistics domain (although one can get fast and reasonable estimates with it).

**TWIN (binary, 0)** When on, the program goes to test mode, but expects a list of weights (and indices) as input. The indices and weights correspond to the rules in the standard enumeration. A weighted sum of votes is used to try to solve the runs. TESTMODE and TLOOPS apply here as well.

**WINNOW (integer, 0)** Only values 3 and 4 are relevant, and they indicate that the corresponding version of Winnow should be run (in addition to the PRS algorithm). Currently, this can only be used with the standard enumeration.

**LEVELWISE (binary, 0)** When on, the levelwise algorithm is run instead of the standard one.

**SIGMA (real, 0.01)** When running the levelwise algorithm this parameter is used as the threshold.

2.2 Other General Parameters

**DEBUG (integer, 0)** This flag is used to print various information useful for debugging. Use 17 to print the actions chosen in test mode, that is, the solutions produced by the PRS.

**MAXRUNS (integer, 0)** The input files may contain many runs; only MAXRUNS of them are used in learning. When 0, all runs are used.

**NODUPBIND (binary, 0)** When on, binding enumeration is restricted so that different variables are bound to different objects.
(Note that this saves time for a given rule, but may require more rules.)

**PREFMODE (integer, 0)** Several preference modes for choosing between rules are coded. Modes 0 and 2 are discussed in [2]. Others are variations.

**NOBNDORDER (binary, 0)** Normally, the first binding that matches a rule is the one that determines which action it suggests. When on, this flag changes the semantics for learning. First, it is tested whether the rule can predict the correct action for some binding; if so it is considered correct. Otherwise, if it matches for any binding it is marked incorrect. This intuitively builds on the hope that all the bindings are good (and tries to avoid penalizing a rule just because of the binding order), but may reduce the number of negative examples for a rule.

**PLAN (binary, 1)** When on, each rule includes all preconditions of the action. Mode 0 may not work with some options.

**COUNTRULES (binary, 1)** Prints a count of rules in standard enumeration before starting.

**CUTNWGT (integer, 0)** This flag is used in Winnow to bound the number of weights that are printed as output (weights are sorted and larger ones are printed first). It is also used for debugging in various places; for normal operation it should be kept at 0.

2.3 Enumeration of Rules

Several parameters control the enumeration of rules; some refer to the standard enumeration (not using the levelwise algorithm).

**KRULE (integer, 2)** The width of the rules that are enumerated.

**KSRULE (integer, 3)** The width of rules for support predicates (must match the input, and all rules must be of the same width).

**KBIND (integer, 3)** The maximal number of free variables in the rules. The same parameter is used for both support predicates and rules of the PRS.

**KNRULE (integer, 2)** In standard enumeration KNRULE is the number of positions which are allowed to use support predicates. The other positions only use the original predicates.

2.4 Filtering Rules

Several parameters control filtering of enumerated rules:

**FILTER (binary, 1)** General flag to control filtering. In PLAN mode it is tested that the body of the rule does not contradict any of the preconditions of the action.

**NODUPVAR (binary, 1)** Filter a rule if it includes predicates that refer to the same variable, e.g. `on(1, 1)`.

**SFILT (binary, 1)** In PLAN mode, filter a rule if it includes a predicate that is a duplicate of a precondition of the action.

**NOSKIPVAR (binary, 1)** Filter a rule if its variables are not a continuous set; e.g. filter a rule if it only uses the variables 1, 3 (since an equivalent rule that uses 1, 2 is enumerated).

**CONNECTED (binary, 1)** Filter a rule if its variables are not “connected”. A connection between variables is introduced if they are mentioned in the same predicate. For example `on(1, 2)clear(3)` is not connected, but `on(1, 2)on(2, 3)clear(3)` is. (A small code sketch of this connectivity test appears at the end of this section.)

**FILTYPES (integer, 0)** Use either 0 or 2. In this mode strong type checking is used, by identifying in the examples the types of the parameters of predicates, and filtering rules that contradict these type constraints. For example, a rule with `AIRPORT(1)in(1,2)` will be filtered in the logistics domain since it is found that `in` never appeared with any object of type `AIRPORT` in the first parameter. The types are also used for binding enumeration. For each rule, the program computes which objects might bind to each variable and enumerates only these combinations. In general, type predicates (see below) will not be used in the rules, and this supplies a simple form of pruning the number of rules.

When using FILTYPES there is a tension between using a predicate in the rules and using it as a type predicate. Due to the implementation, when FILTYPES is used, type predicates cannot be used in the rules (even if hand coded). Namely, even if two types are not disjoint, conjoining them is impossible. If the use of these predicates in the rules is desired, one can produce a copy of a predicate via the support rules (cf. `ap()` in the logistics domain).
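As a rough illustration of the CONNECTED filter just described (this sketch is not part of L2Act; the rule encoding, binary predicates, and the variable bound are all assumptions of the example), the variables of a rule can be treated as nodes that are merged whenever two of them occur in the same predicate, after which connectivity is a union-find check:

```
/* Hypothetical sketch of the CONNECTED filter: a rule's variables are
   connected iff the co-occurrence graph has a single component.
   Variables are numbered 1..nvars; preds[p][a] holds the variable in
   argument slot a of (binary) predicate p, or 0 if the slot is unused. */
#include <stdbool.h>

#define MAXVARS 8

static int find (int parent[], int i)
{
  while (parent[i] != i)
    i = parent[i] = parent[parent[i]];  /* path halving */
  return i;
}

bool rule_is_connected (int npreds, int preds[][2], int nvars)
{
  int parent[MAXVARS + 1];
  for (int v = 1; v <= nvars; v++)
    parent[v] = v;

  /* union the variables mentioned in the same predicate */
  for (int p = 0; p < npreds; p++)
    if (preds[p][0] && preds[p][1])
      parent[find (parent, preds[p][0])] = find (parent, preds[p][1]);

  /* connected iff every variable has the same representative */
  for (int v = 2; v <= nvars; v++)
    if (find (parent, v) != find (parent, 1))
      return false;
  return true;
}
```

For instance, encoding on(1,2) clear(3) as preds = {{1,2},{3,0}} yields false (variable 3 is isolated), while adding on(2,3) makes the check succeed.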
3 Compile Options

Several options are controllable at compile time. To activate an #ifdef option, the compiler's command line should include -Dparamname. To set a value, use -Dparamname=number.

- The possibility of using the levelwise algorithm is controlled by #ifdef LWISE.
- The standard enumeration algorithm uses a large table that saves the correct/cover results for each example-rule combination. This is controlled by #ifdef RTAB. When using the table, enough space must be assigned; the size is controlled by #define NORULES 3400 and #define NOEXAMPLES 5000.
- In Winnow mode, number 4, more information must be remembered (which action was taken for each rule). This is enabled by #ifdef WINBIG.
- For the purpose of drawing learning curves, it is useful to compute an output for the learning algorithm after every so many examples. This is enabled by #ifdef STEPEX, and the particular numbers by #define STEPBASE 200, #define STEPMAX 1200, #define STEPINC 200.
- To compare all preference modes in a single run use #ifdef ALLPREF, and to control the number of preference modes that are compared use #define NOPREF 6.
- For Winnow we allow two parameters. The first is the number of versions of promotion parameters to use; by using #define NOVERS 2 we get 3 versions starting with 0.5 and equally spaced to 1 (not including 1). We also allow repeating Winnow on the same set of examples. This is done by #define ITER 10.

3.1 Compilation Examples

The code is contained in 5 files: l2a.c, rules.c, learn.c, parseinp.c, mydefs.h. The file l2a.c handles the command line options, rules.c includes the code and learning algorithm for the levelwise mode, learn.c contains the standard learning algorithm as well as many routines, and parseinp.c contains the parser as well as the code of the test programs.

- To run with 1200 examples in 4 steps (for a learning curve) using the levelwise algorithm with an enumeration that uses no more than 11000 rules use:

gcc -DSTEPBASE=300 -DSTEPMAX=1200 -DSTEPINC=300 -DLWISE -DRTAB -DNORULES=11000 -DNOEXAMPLES=1230 l2a.c rules.c learn.c parseinp.c -O3 -o learntoact

When running, one has to use a number of runs that supplies at least STEPMAX steps and at most NOEXAMPLES steps.

- The above option is the general one, and it allows running the standard algorithm as well as Winnow3. If the levelwise algorithm is not used then (to decrease the size of the code) just omit -DLWISE and rules.c in the above.
- To use Winnow4 with 10 iterations and 2 versions use:

gcc -DITER=10 -DNOVERS=2 -DWINBIG -DRTAB -DNORULES=9000 -DNOEXAMPLES=370 l2a.c learn.c parseinp.c -O3 -o learntoact

- For testing we do not need the big table or rules.c; use:

gcc l2a.c learn.c parseinp.c -O3 -o testl2a

4 The Parser and the Input File

Examples of input files for learning and testing appear in the appendix. Here are some notes regarding the parsing:

- The parser is somewhat fragile, basing its decisions not on the parentheses in the inputs but rather on the line structure (which is sufficient).
- Everything up to the first line starting with a number sign is ignored by the parser.
- The file includes several sections to be parsed; each section is marked by a number sign.
- The first section #predicates includes predicates' names and parameters (the names of the parameters are irrelevant here, but are needed to determine the arity). Note: each item is expected on a separate line.
- The next section #typepreds is optional, and has the same structure as #predicates. In normal mode these predicates will simply not be used in the rules. In FILTYPES mode they are also used for type checking (it is tested for the other predicates which type predicates are accepted as parameters).
- The section #operators describes the actions in the domain, and is based on the .lisp style in GraphPlan. Note that (as in GraphPlan) only positive literals are expected as part of the operators. There are two important differences though:
  1. The first part of the preconditions of actions includes "object generation predicates" for GraphPlan. In the original format they look like (<obj> OBJECT) but we expect them in reverse order, i.e. (OBJECT <ob1>).
  2. The parser ignores all parentheses etc. and parses the operators based on the line structure and the general structure. Namely, it is expected that the operator will have the following structure:

(OPERATOR OPNAME
(params LIST OF PARAM NAMES )          **** only one line expected ****
(preconds
LIST OF (PREDICATE PRED-OBJECTS)       **** each on a separate line ****
the "and" construct is optional (currently ignored).
(effects
ONE LINE IS IGNORED AFTER THE KEYWORD effects
then LIST OF EFFECTS del/add (PREDICATE PRED-OBJECTS)   **** each on a separate line ****
is expected. All the del effects must appear before the add effects.

- The section `#supportpreds` includes pairs of rules that generate new predicates. The first rule is evaluated once on all bindings, and the second is reevaluated until no more changes occur (as in [1]). Note that each rule is expected on a separate line and a blank line between pairs is expected.

Then, any number of descriptions of runs appear. Each such description includes several sections.

- The section `#RUN` is empty; it is just used to signal the boundary.
- The section `#objects` includes a list of object names that appear in this run. As before, each object is expected on a separate line.
- The section `#start` includes a list of literals that are satisfied in the beginning of the run. Note that all other predicates are assumed to be 0; namely, we are using the so-called closed world assumption in the start state. Here again each literal is expected on a separate line.
- The section `#goal` includes a list of literals that should be satisfied in the end of the run. Note that (as in GraphPlan) only positive goal literals are considered. Here, however, we are not using the closed world assumption and the rest of the predicates may take any value in the goal. As before, each literal is expected on a separate line.
- The section `#actions` includes a list of actions in the form: DUMMYWORD OPNAME OBJNAME OBJNAME ... that achieve the goal. Again, each action is expected on a separate line.
- In test mode, instead of runs, the program expects a PRS or weights from Winnow. These are announced by `#TESTS`. In normal mode the PRS just follows.
- In `FILTYPES` mode the program expects information on legal types before the PRS and another marker `#TESTS` in between. (These are automatically generated so the details are omitted.)
- In `TWIN` mode, the program expects the names of the type predicates used to be printed before the weights, and again another marker `#TESTS` is expected. (These are automatically generated so the details are omitted.)

5 Utility Programs

Several utility programs to prepare examples, run learning experiments and tests, as well as to tabulate the results have been written. I will only mention one; the program `mktfiles.c` takes the output of the learning algorithm, which may include several PRS from various options, and creates from it test files as needed. For learning experiments where `FILTYPES` was used, the program `mktfiles-legal.c` includes the additional required information.

6 How to use

Here is one example of learning and testing:

learntoact log_file.runs -MAXRUNS114 -LEVELWISE1 -KRULE3 -KBIND5 -FILTYPES2 > output_file
mktfiles-legal output_file

This creates several files called Toutput_file.p0, Toutput_file.p1, etc.

testl2a Toutput_file.p0 test_runs_file -TPRS1 -TLOOPS0 -FILTYPES2 -KRULE3 -KBIND5

A Input Files

A.1 Blocks World

#predicates
(arm-empty)
(on-table <ob1>)
(clear <ob1>)
(holding <ob1>)
(on <ob> <underob>)
#typepreds
(OBJECT <ob1>)
#operators
(OPERATOR PICK-UP
(params <ob1>)
(preconds ((OBJECT <ob1>))
(and (clear <ob1>) (on-table <ob1>) (arm-empty)))
(effects () ; no vars need generated in effects list
((del (on-table <ob1>)) (del (clear <ob1>)) (del (arm-empty)) (add (holding <ob1>)))))
(OPERATOR PUT-DOWN
(params <ob>)
(preconds

Base Rule: G(on(1,2)) G(on(1,2)) G(on(1,2)) ==> ingoal(1)
Recursive Rule: G(on-table(1)) G(on-table(1)) G(on-table(1)) ==> ingoal(1)

Base Rule: G(on-table(1)) on-table(1) on-table(1) ==> inplacea(1)
Recursive Rule: inplacea(2) G(on(1,2)) on(1,2) ==> inplacea(1)

Base Rule: G(on(1,2)) on(1,2) ^_ingoal(2) ==> inplaceb(1)
Recursive Rule: inplaceb(2) G(on(1,2)) on(1,2) ==> inplaceb(1)

Base Rule: on(1,2) on(1,2) on(1,2) ==> above(1,2)
Recursive Rule: above(2,3) on(1,2) on(1,2) ==> above(1,3)

#RUN
#objects
1
2
3
4
5
6
7
8
#start
(preconds
(OBJECT 1)
(OBJECT 2)
(OBJECT 3)
(OBJECT 4)
(OBJECT 5)
(OBJECT 6)
(OBJECT 7)
(OBJECT 8)
(on 1 5)
(on-table 2)
(clear 2)
(on 3 7)
(on 4 6)
(on 5 3)
(on 6 1)
(on-table 7)
(on 8 4)
(clear 8)
(arm-empty))
#goal
(effects
(on 1 4)
(on 2 3)
(on 4 7)
(on 5 2)
(on 7 5)
(on-table 8) )
#actions
1 UNSTACK_8_4
2 PUT-DOWN_8
3 UNSTACK_4_6
4 PUT-DOWN_4
5 UNSTACK_6_1
6 PUT-DOWN_6
7 UNSTACK_1_5
8 PUT-DOWN_1
9 UNSTACK_5_3
10 PUT-DOWN_5
11 UNSTACK_3_7
12 PUT-DOWN_3
13 PICK-UP_2
14 STACK_2_3
15 PICK-UP_5
16 STACK_5_2
17 PICK-UP_7
18 STACK_7_5
19 PICK-UP_4
20 STACK_4_7
21 PICK-UP_1
22 STACK_1_4
#RUN
...
A.2 The Logistics Domain

#predicates
(at <truck> <loc>)
(in <obj> <truck>)
(loc-at <loc-from> <city>)
#typepreds
(OBJECT <obj>)
(TRUCK <truck>)
(LOCATION <loc>)
(AIRPLANE <airplane>)
(AIRPORT <loc-from>)
(CITY <city>)
#operators
(OPERATOR LOAD-TRUCK
(params <obj> <truck> <loc>)
(preconds ((OBJECT <obj>) (TRUCK <truck>) (LOCATION <loc>))
(and (at <truck> <loc>) (at <obj> <loc>)))
(effects ()
((del (at <obj> <loc>)) (add (in <obj> <truck>)))))
(OPERATOR LOAD-AIRPLANE
(params <obj> <airplane> <loc>)
(preconds ((OBJECT <obj>) (AIRPLANE <airplane>) (LOCATION <loc>))
(and (at <obj> <loc>) (at <airplane> <loc>)))
(effects ()
((del (at <obj> <loc>)) (add (in <obj> <airplane>)))))
(OPERATOR UNLOAD-TRUCK
(params <obj> <truck> <loc>)
(preconds ((OBJECT <obj>) (TRUCK <truck>) (LOCATION <loc>))
(and (at <truck> <loc>) (in <obj> <truck>)))
(effects ()
((del (in <obj> <truck>)) (add (at <obj> <loc>)))))
(OPERATOR UNLOAD-AIRPLANE
(params <obj> <airplane> <loc>)
(preconds ((OBJECT <obj>) (AIRPLANE <airplane>) (LOCATION <loc>))
(and (in <obj> <airplane>) (at <airplane> <loc>)))
(effects ()
((del (in <obj> <airplane>)) (add (at <obj> <loc>)))))
(OPERATOR DRIVE-TRUCK
(params <truck> <loc-from> <loc-to> <city>)
(preconds ((TRUCK <truck>) (LOCATION <loc-from>) (LOCATION <loc-to>) (CITY <city>))
(and (at <truck> <loc-from>) (loc-at <loc-from> <city>) (loc-at <loc-to> <city>)))
(effects ()
((del (at <truck> <loc-from>)) (add (at <truck> <loc-to>)))))
(OPERATOR FLY-AIRPLANE
(params <airplane> <loc-from> <loc-to>)
(preconds ((AIRPLANE <airplane>) (AIRPORT <loc-from>) (AIRPORT <loc-to>))
(at <airplane> <loc-from>))
(effects ()
((del (at <airplane> <loc-from>)) (add (at <airplane> <loc-to>)))))
#supportpreds
Base Rule: AIRPORT(1) AIRPORT(1) AIRPORT(1) ==> ap(1)
Recursive Rule: AIRPORT(1) AIRPORT(1) AIRPORT(1) ==> ap(1)
#RUN
#objects
package1
package2
bos-truck
pgh-truck
la-truck
airplane1
airplane2
bos-po
pgh-po
la-po
bos-airport
pgh-airport
la-airport
bos
pgh
la
#start
(preconds
(OBJECT package1 )
(OBJECT package2 )
(TRUCK bos-truck )
(TRUCK pgh-truck )
(TRUCK la-truck )
(AIRPLANE airplane1 )
(AIRPLANE airplane2 )
(LOCATION bos-po )
(LOCATION pgh-po )
(LOCATION la-po )
(AIRPORT bos-airport )
(AIRPORT pgh-airport )
(AIRPORT la-airport )
(LOCATION la-airport )
(CITY bos )
(CITY pgh )
(CITY la )
(loc-at pgh-po pgh)
(loc-at pgh-airport pgh)
(loc-at bos-po bos)
(loc-at bos-airport bos)
(loc-at la-po la)
(loc-at la-airport la)
(at package1 pgh-po)
(at package2 pgh-po)
(at airplane1 pgh-airport)
(at airplane2 pgh-airport)
(at bos-truck bos-airport)
(at pgh-truck pgh-po)
(at la-truck la-po) )
#goal
(effects
(at package1 la-po)
(at package2 la-po) )
#actions

A.3 Input File in Test Mode

The file should include everything up to #RUN in the input file. Next follows the PRS. Here is how it looks for the logistics domain (including the type predicates information). The file including the test runs (the second parameter in test mode) should include runs using the same structure as above. (A domain description is not necessary, since it is read from the first input file, but it can be included since everything until the first #RUN is ignored.)
#TESTS
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0
15
#TESTS
Rule: OBJECT(1) TRUCK(2) LOCATION(3) at(2,3) in(1,2) G(at(1,3)) G(at(1,3)) G(at(1,3)) => UNLOAD-TRUCK(1,2,3)
Rule: OBJECT(1) AIRPLANE(2) LOCATION(3) in(1,2) at(2,3) G(at(1,4)) loc-at(3,5) loc-at(4,5) => UNLOAD-AIRPLANE(1,2,3)
Rule: OBJECT(1) AIRPLANE(2) LOCATION(3) in(1,2) at(2,3) G(at(1,3)) G(at(1,3)) G(at(1,3)) => UNLOAD-AIRPLANE(1,2,3)
Rule: OBJECT(1) AIRPLANE(2) LOCATION(3) at(1,3) at(2,3) G(at(1,4)) _loc-at(3,5) loc-at(4,5) => LOAD-AIRPLANE(1,2,3)
Rule: OBJECT(1) TRUCK(2) LOCATION(3) at(2,3) at(1,3) G(at(1,4)) loc-at(3,5) loc-at(4,5) => LOAD-TRUCK(1,2,3)
Rule: OBJECT(1) TRUCK(2) LOCATION(3) at(2,3) at(1,3) G(at(1,4)) loc-at(4,5) _ap(3) => LOAD-TRUCK(1,2,3)
Rule: TRUCK(1) LOCATION(2) LOCATION(3) CITY(4) at(1,2) loc-at(2,4) loc-at(3,4) G(at(5,3)) in(5,1) in(5,1) => DRIVE-TRUCK(1,2,3,4)
Rule: OBJECT(1) TRUCK(2) LOCATION(3) at(2,3) in(1,2) G(at(1,4)) loc-at(4,5) ap(3) => UNLOAD-TRUCK(1,2,3)
Rule: TRUCK(1) LOCATION(2) LOCATION(3) CITY(4) at(1,2) loc-at(2,4) loc-at(3,4) in(5,1) in(5,1) in(5,1) => DRIVE-TRUCK(1,2,3,4)
Rule: AIRPLANE(1) AIRPORT(2) AIRPORT(3) at(1,2) G(at(5,3)) _in(4,1) in(5,1) => FLY-AIRPLANE(1,2,3)
Rule: TRUCK(1) LOCATION(2) LOCATION(3) CITY(4) at(1,2) loc-at(2,4) loc-at(3,4) G(at(5,2)) at(5,3) at(5,3) => DRIVE-TRUCK(1,2,3,4)
Rule: TRUCK(1) LOCATION(2) LOCATION(3) CITY(4) at(1,2) loc-at(2,4) loc-at(3,4) at(5,3) ap(2) ap(2) => DRIVE-TRUCK(1,2,3,4)
Rule: AIRPLANE(1) AIRPORT(2) AIRPORT(3) at(1,2) at(4,3) G(at(4,5)) _ap(5) => FLY-AIRPLANE(1,2,3)
Rule: AIRPLANE(1) AIRPORT(2) AIRPORT(3) at(1,2) at(4,3) G(at(4,5)) G(at(4,5)) => FLY-AIRPLANE(1,2,3)
Rule: AIRPLANE(1) AIRPORT(2) AIRPORT(3) at(1,2) G(at(4,2)) at(4,3) at(4,3) => FLY-AIRPLANE(1,2,3)
Rule: AIRPLANE(1) AIRPORT(2) AIRPORT(3) at(1,2) ^at(5,3) G(at(5,3)) _in(4,5) => FLY-AIRPLANE(1,2,3)
Rule: AIRPLANE(1) AIRPORT(2) AIRPORT(3) at(1,2) ^at(4,3) in(5,1) in(5,1) => FLY-AIRPLANE(1,2,3)
{"Source-Url": "https://dash.harvard.edu/bitstream/handle/1/25235127/tr-10-97.pdf?sequence=1", "len_cl100k_base": 6794, "olmocr-version": "0.1.53", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 42327, "total-output-tokens": 8073, "length": "2e12", "weborganizer": {"__label__adult": 0.0002849102020263672, "__label__art_design": 0.0002357959747314453, "__label__crime_law": 0.0002048015594482422, "__label__education_jobs": 0.0014514923095703125, "__label__entertainment": 6.514787673950195e-05, "__label__fashion_beauty": 0.00011718273162841796, "__label__finance_business": 0.00015532970428466797, "__label__food_dining": 0.0002887248992919922, "__label__games": 0.0008931159973144531, "__label__hardware": 0.0012407302856445312, "__label__health": 0.0002067089080810547, "__label__history": 0.0001493692398071289, "__label__home_hobbies": 8.535385131835938e-05, "__label__industrial": 0.0003840923309326172, "__label__literature": 0.00026035308837890625, "__label__politics": 0.00017833709716796875, "__label__religion": 0.0003476142883300781, "__label__science_tech": 0.01288604736328125, "__label__social_life": 6.949901580810547e-05, "__label__software": 0.00873565673828125, "__label__software_dev": 0.970703125, "__label__sports_fitness": 0.00022470951080322263, "__label__transportation": 0.0005297660827636719, "__label__travel": 0.0001423358917236328}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22950, 0.08694]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22950, 0.68798]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22950, 0.83561]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 142, false], [142, 1928, null], [1928, 4813, null], [4813, 7595, null], [7595, 10208, null], [10208, 12465, null], [12465, 14981, null], [14981, 15785, null], [15785, 15785, null], [15785, 16694, null], [16694, 17120, null], [17120, 17750, null], [17750, 18734, null], [18734, 19317, null], [19317, 20004, null], [20004, 20612, null], [20612, 20612, null], [20612, 20789, null], [20789, 22659, null], [22659, 22950, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 142, true], [142, 1928, null], [1928, 4813, null], [4813, 7595, null], [7595, 10208, null], [10208, 12465, null], [12465, 14981, null], [14981, 15785, null], [15785, 15785, null], [15785, 16694, null], [16694, 17120, null], [17120, 17750, null], [17750, 18734, null], [18734, 19317, null], [19317, 20004, null], [20004, 20612, null], [20612, 20612, null], [20612, 20789, null], [20789, 22659, null], [22659, 22950, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22950, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, 
false], [5000, 22950, null]], "pdf_page_numbers": [[0, 0, 1], [0, 142, 2], [142, 1928, 3], [1928, 4813, 4], [4813, 7595, 5], [7595, 10208, 6], [10208, 12465, 7], [12465, 14981, 8], [14981, 15785, 9], [15785, 15785, 10], [15785, 16694, 11], [16694, 17120, 12], [17120, 17750, 13], [17750, 18734, 14], [18734, 19317, 15], [19317, 20004, 16], [20004, 20612, 17], [20612, 20612, 18], [20612, 20789, 19], [20789, 22659, 20], [22659, 22950, 21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22950, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
2f4e9b9a02fe599f77e60f29c0f316ef85bf7553
Inference in Probabilistic Programming II

Xin Zhang, Peking University

Part of the content is from “An Introduction to Probabilistic Programming” by Jan-Willem van de Meent, Brooks Paige, Hongseok Yang, and Frank Wood, and “An Introduction to Sequential Monte Carlo Methods” by Arnaud Doucet, Nando de Freitas, and Neil Gordon.

Recap of Last Lecture

- Graph-based inference
  - Static
  - Cannot deal with programs with unbounded loops

Graph Translation: Example

x = bernoulli(0.2)
if (x) {
  y1 = uniform(0, 2)
} else {
  y2 = gaussian(0, 5)
}
y3 = phi(x, y1, y2)
z = gaussian(y3, 1)
condition(z > 10)

Inference on Translated Graphs

- Loopy belief propagation
- Sampling
  - Gibbs
  - Hamiltonian Monte Carlo

Gibbs Sampling

- Proposal distribution
  - Change one assignment at a time
  - $p(x \mid Y, X \setminus \{x\})$, where $Y$ are observed variables
- When we cannot evaluate $p(x \mid Y, X \setminus \{x\})$, we can turn to Metropolis-Hastings while using $q(x \mid Y, X \setminus \{x\})$ as the proposal distribution

Hamiltonian Monte Carlo (HMC)

- A more scalable MCMC algorithm

$$H(z, r) = E(z) + K(r)$$

where $E(z)$ is the potential energy ($z$ are the random variables to sample from) and $K(r)$ is the kinetic energy ($r$ are auxiliary variables that provide momentum).

Intuition Behind HMC

- https://arogozhnikov.github.io/2016/12/19/markov_chain_monte_carlo.html

Put Things Together: HMC

- Augment the distribution $p(z)$ with $p(z, r)$
- Proposal distribution:
  - Update $z, r$ using Hamiltonian dynamics (in practice, a discretized approximation called leapfrog integration; a one-step code sketch appears after the questions below)
  - Judge whether to accept $z, r$ (see below)
  - Update $r$ stochastically
- Acceptance probability (after applying Hamiltonian dynamics), which accounts for the discretization error:

$$\min\left(1, \exp\{H(z, r) - H(z^*, r^*)\}\right)$$

The Leapfrog Approximation

$$\hat{r}_i(\tau + \epsilon/2) = \hat{r}_i(\tau) - \frac{\epsilon}{2} \frac{\partial E}{\partial z_i}(\hat{z}(\tau))$$

$$\hat{z}_i(\tau + \epsilon) = \hat{z}_i(\tau) + \epsilon\, \hat{r}_i(\tau + \epsilon/2)$$

$$\hat{r}_i(\tau + \epsilon) = \hat{r}_i(\tau + \epsilon/2) - \frac{\epsilon}{2} \frac{\partial E}{\partial z_i}(\hat{z}(\tau + \epsilon))$$

To remove biases introduced by numerical errors, the step size is sampled from $\epsilon$ and $-\epsilon$.

Question 1: Is the statement right?

- For any given probabilistic program with loops, it cannot be converted into a graphical model

Question 2: Is the statement right?

- The graph obtained by translating a probabilistic program is always a tree

Question 3: Translate the program into a graph

x = gaussian(0, 1)
y = uniform(0, x)
if (x > 10) {
  z = x
  condition(y > 1.5)
} else {
  condition(y < 0.5)
  z = y
}
w = gaussian(z, 0)

Question 4: Is the statement right?

- Gibbs sampling can be applied to sample any distribution

Question 5: Is the statement right?

- In HMC, the gradient is the gradient of the density function of the target distribution

Question 6: Is the statement right?

- HMC cannot be applied to any probabilistic programs with branches
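To make the leapfrog update above concrete, here is a minimal one-step sketch (not from the lecture); it assumes a one-dimensional target and a user-supplied gradient of the potential energy E(z) = -log p(z, Y):

/* Minimal one-step leapfrog sketch for 1-D HMC (illustrative only). */
#include <stdio.h>

typedef double (*grad_fn) (double z);

static void leapfrog_step (double *z, double *r, double eps, grad_fn grad_E)
{
  *r -= 0.5 * eps * grad_E (*z);   /* half step for the momentum        */
  *z += eps * (*r);                /* full step for the position        */
  *r -= 0.5 * eps * grad_E (*z);   /* second half step for the momentum */
}

/* Example target: standard normal, E(z) = z^2 / 2, so dE/dz = z. */
static double grad_std_normal (double z) { return z; }

int main (void)
{
  double z = 1.0, r = 0.5;
  leapfrog_step (&z, &r, 0.1, grad_std_normal);
  printf ("z = %f, r = %f\n", z, r);
  return 0;
}

A full HMC proposal chains many such steps and then applies the acceptance test above.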
Question 1: Is the statement right?
• For any given probabilistic program with loops, it cannot be converted into a graphical model

Question 2: Is the statement right?
• The graph obtained by translating a probabilistic program is always a tree

Question 3: Translate the program into a graph

x = gaussian(0, 1)
y = uniform(0, x)
if (x > 10) {
  z = x
  condition(y > 1.5)
} else {
  condition(y < 0.5)
  z = y
}
w = gaussian(z, 0)

Question 4: Is the statement right?
• Gibbs sampling can be applied to sample any distribution

Question 5: Is the statement right?
• In HMC, the gradient is the gradient of the density function of the target distribution

Question 6: Is the statement right?
• HMC cannot be applied to any probabilistic program with branches

This Lecture
• Evaluation-based inference
• More sampling algorithms

Motivation
• The number of random variables is unknown at compile time
• Introduce an upper bound on the number of variables
• Implement inference methods that dynamically instantiate variables

Likelihood Weighting
- A form of importance sampling where the proposal is the prior
\[ \mathbb{E}_{p(X \mid Y)}[r(X)] = \mathbb{E}_{q(X)}\left[\frac{p(X \mid Y)}{q(X)}\, r(X)\right] = \frac{1}{p(Y)}\, \mathbb{E}_{q(X)}\left[\frac{p(Y, X)}{q(X)}\, r(X)\right] \approx \frac{1}{p(Y)}\, \frac{1}{L} \sum_{l=1}^{L} W^l\, r(X^l), \]
\[ W^l = \frac{p(Y, X^l)}{q(X^l)} = \frac{p(Y \mid X^l)\, p(X^l)}{p(X^l)} = p(Y \mid X^l) \quad \text{if we use the prior } p(X^l) \text{ as the proposal distribution.} \]
Here $Y$ are the observed/conditioned variables.

Likelihood Weighting
• But wait, every run of the program only evaluates a subset of all variables!
• It is OK: $r(X)$ is the return value, i.e. a projection of the full set of variables $X$

Likelihood Weighting
• What happens if there are no factor statements but only condition statements in the program?
• How to implement it in graph-based inference?

Likelihood Weighting: Evaluation-based Implementation
• Run the program to draw samples
• Update the weight $W$ while running the program
• Initially, $\log W = 0$
• Whenever the program evaluates an expression $\text{condition}(b)$, update $\log W \leftarrow \log W + \log p_B(\text{true})$
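To make the evaluation-based implementation concrete, here is a small Python sketch of likelihood weighting for a toy one-variable program. The model (a Gaussian prior with a single Gaussian observation) and all names are illustrative assumptions, not the slides' code. Note also that with hard `condition` statements the same scheme degenerates to rejection sampling, since the log-weight jumps to negative infinity whenever the condition fails.

```python
import math, random

def run_program():
    """A toy generative program: returns (return_value, log_weight).
    log W accumulates the log-likelihood at each observation, as in the slides."""
    log_w = 0.0
    x = random.gauss(0.0, 1.0)             # sample from the prior
    # observe(gaussian(x, 1), 0.5): add the log density of the observed value
    y, sigma = 0.5, 1.0
    log_w += -0.5 * math.log(2 * math.pi * sigma**2) - (y - x)**2 / (2 * sigma**2)
    return x, log_w

def likelihood_weighting(n_samples):
    """Self-normalized importance sampling with the prior as the proposal."""
    samples, log_ws = zip(*(run_program() for _ in range(n_samples)))
    m = max(log_ws)
    ws = [math.exp(lw - m) for lw in log_ws]   # stabilize before normalizing
    total = sum(ws)
    # Posterior-mean estimate: sum_l W^l r(X^l) / sum_l W^l
    return sum(w * s for w, s in zip(ws, samples)) / total
```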
Metropolis-Hastings
• Similar problem: each execution only evaluates a subset of variables
• Naïve method: use the prior distribution $p(X)$ as the proposal distribution:
$$ \alpha = \frac{P(X' \mid Y)\, q(X \mid X')}{P(X \mid Y)\, q(X' \mid X)} = \frac{P(X', Y)\, q(X \mid X')}{P(X, Y)\, q(X' \mid X)} = \frac{P(Y \mid X')}{P(Y \mid X)} $$

Metropolis-Hastings: Single-Site Proposals
- Most commonly used evaluation-based proposal
- Try to change the value of only one variable at a time
- Not always possible due to dependencies

Metropolis-Hastings: Single-Site Proposals
• A map $\mathcal{X}$, such that $\mathcal{X}(x)$ refers to the value of $x$ (only variables in the current execution)
• A map $\log \mathcal{P}$, where $\log \mathcal{P}(v)$ records the log density for each variable
• When sampling from a distribution $d$, we have $\log \mathcal{P}(x) = \text{LOG-PROB}(d, \mathcal{X}(x))$
• When encountering $\text{condition}(b)$, we have $\log \mathcal{P}(y) = \text{LOG-PROB}(b, \text{true})$

Metropolis-Hastings: Single-Site Proposals
• Pick a variable $x_0 \in \text{dom}(\mathcal{X})$ at random from the current sample
• Construct a proposal $\mathcal{X}', \mathcal{P}'$ by re-running the program
  • For an expression $d$ that samples a variable $x$:
    • If $x = x_0$, or $x \notin \text{dom}(\mathcal{X})$, then sample from the expression; otherwise, reuse the value $\mathcal{X}'(x) \leftarrow \mathcal{X}(x)$
    • Calculate the probability $\mathcal{P}'(x) \leftarrow \text{PROB}(d, \mathcal{X}'(x))$
  • For an expression $\text{condition}(b)$ with variable $y$: calculate the probability $\mathcal{P}'(y) \leftarrow \text{PROB}(b, y) = 1_{b = y}$
  • For an expression $\text{observe}(e, v)$ with variable $y$: calculate the probability $\mathcal{P}'(y) \leftarrow \text{PROB}(e, v)$

Metropolis-Hastings: Single-Site Proposals
\[ \alpha = \frac{p(Y, X')\, q(X \mid X')}{p(Y, X)\, q(X' \mid X)} = \frac{p(Y, X')\, q(X \mid X', x_0)\, q(x_0 \mid X')}{p(Y, X)\, q(X' \mid X, x_0)\, q(x_0 \mid X)} \]
Since $x_0$ is picked uniformly,
\[ \frac{q(x_0 \mid X')}{q(x_0 \mid X)} = \frac{|\text{dom}(\mathcal{X})|}{|\text{dom}(\mathcal{X}')|}. \]
We divide a sample into a sampled part and a reused part:
\[ p(Y, X') = p(Y \mid X')\, p(X') = \prod_{y \in Y} \mathcal{P}'(y) \prod_{x \in X'} \mathcal{P}'(x), \qquad q(X' \mid X, x_0) = \prod_{x \in X'_{\text{sampled}}} \mathcal{P}'(x), \]
\[ \frac{p(Y, X')}{q(X' \mid X, x_0)} = \prod_{y \in Y} \mathcal{P}'(y) \prod_{x \in X'_{\text{reused}}} \mathcal{P}'(x), \qquad \frac{p(Y, X)}{q(X \mid X', x_0)} = \prod_{y \in Y} \mathcal{P}(y) \prod_{x \in X_{\text{reused}}} \mathcal{P}(x). \]
Putting these together:
$$ \alpha = \frac{|\text{dom}(\mathcal{X})|}{|\text{dom}(\mathcal{X}')|} \cdot \frac{\prod_{y \in Y} \mathcal{P}'(y) \prod_{x \in X'_{\text{reused}}} \mathcal{P}'(x)}{\prod_{y \in Y} \mathcal{P}(y) \prod_{x \in X_{\text{reused}}} \mathcal{P}(x)} $$

Example

x = 0
while (bernoulli(0.5)) {
  x += uniform(0, 1)
}
condition(x >= 10)
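The bookkeeping above is easiest to see on a model whose trace structure never changes, so that $|\text{dom}(\mathcal{X})| = |\text{dom}(\mathcal{X}')|$. The following Python sketch runs single-site MH on an assumed two-variable chain $x_0 \sim \mathcal{N}(0,1)$, $x_1 \sim \mathcal{N}(x_0,1)$ with one observation $y \sim \mathcal{N}(x_1,1)$; every name in it is ours, and the general dynamic-trace case (like the while-loop example above) additionally needs the $|\text{dom}(\mathcal{X})|/|\text{dom}(\mathcal{X}')|$ correction.

```python
import math, random

def log_normal(v, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma**2) - (v - mu)**2 / (2 * sigma**2)

Y_OBS = 2.0  # observed value (assumed for the example)

def trace_log_probs(x):
    """Per-variable log densities P(x_i), plus the observation term P(y | x)."""
    x0, x1 = x
    return {"x0": log_normal(x0, 0.0, 1.0),
            "x1": log_normal(x1, x0, 1.0),
            "y":  log_normal(Y_OBS, x1, 1.0)}

def single_site_mh(n_iters, rng=random):
    x = [0.0, 0.0]
    P = trace_log_probs(x)
    for _ in range(n_iters):
        site = rng.randrange(2)            # pick x_site uniformly (|X| = |X'| here)
        x_new = list(x)
        if site == 0:
            x_new[0] = rng.gauss(0.0, 1.0)       # resample x0 from its prior
        else:
            x_new[1] = rng.gauss(x_new[0], 1.0)  # resample x1 given the current x0
        P_new = trace_log_probs(x_new)
        # Acceptance ratio from the slides: the prior at the resampled site cancels
        # against the proposal, leaving the observation term and the *reused* site.
        reused = "x1" if site == 0 else "x0"
        log_alpha = (P_new["y"] + P_new[reused]) - (P["y"] + P[reused])
        if math.log(rng.random()) < log_alpha:
            x, P = x_new, P_new
        yield tuple(x)
```

Resampling a site from its prior is what makes the site's own density cancel against the proposal, which is why only the observation term and the reused sites appear in `log_alpha`.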
Sequential Monte Carlo
• Problem with the likelihood weighting algorithm:
  • Essentially "guess-and-check"
  • Doesn't work well with models that have a lot of random variables
• Sequential Monte Carlo:
  • In probabilistic programming, sample a high-dimensional distribution by sampling a sequence of lower-dimensional distributions
  • Also called particle filters
  • Used in signal processing and probabilistic inference

Informal Example
• See the example by Andreas Svensson
• https://www.bilibili.com/video/BV1XE41177D1?share_source=copy_web
• https://www.youtube.com/watch?v=aUkBa1zMKV4

Given $p(x_0)$, $p(x_t \mid x_{t-1})$, $p(y_t \mid x_t)$, and observations $y_{1:t}$, estimate $p(x_{0:t} \mid y_{1:t})$, or $p(x_t \mid y_{1:t})$, or
\[ I(f_t) = \mathbb{E}_{p(x_{0:t} \mid y_{1:t})}[f_t(x_{0:t})] = \int f_t(x_{0:t})\, p(x_{0:t} \mid y_{1:t})\, dx_{0:t}. \]

SMC: Problem Analysis
Can you compute these expressions?
\[ p(x_{0:t} \mid y_{1:t}) = \frac{p(y_{1:t} \mid x_{0:t})\, p(x_{0:t})}{\int p(y_{1:t} \mid x_{0:t})\, p(x_{0:t})\, dx_{0:t}}, \]
\[ p(x_{0:t+1} \mid y_{1:t+1}) = p(x_{0:t} \mid y_{1:t})\, \frac{p(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid x_t)}{p(y_{t+1} \mid y_{1:t})}. \]
**Prediction:** $p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, dx_{t-1}$;
**Updating:** $p(x_t \mid y_{1:t}) = \frac{p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})}{\int p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1})\, dx_t}$.

SMC: Problem Analysis
• Evaluation of complex high-dimensional integrals is hard
• People turn to approximate methods such as sampling

SMC: Approach
• Use samples to deal with the integrations
• An effective method that leverages importance sampling

SMC: Naïve Importance Sampling
Let the proposal distribution be $\pi(x_{0:t} \mid y_{1:t})$; then we have
$$ I(f_t) = \frac{\int f_t(x_{0:t})\, w(x_{0:t})\, \pi(x_{0:t} \mid y_{1:t})\, dx_{0:t}}{\int w(x_{0:t})\, \pi(x_{0:t} \mid y_{1:t})\, dx_{0:t}}, \qquad w(x_{0:t}) = \frac{p(x_{0:t} \mid y_{1:t})}{\pi(x_{0:t} \mid y_{1:t})}. $$
$$ \hat{I}_N(f_t) = \frac{\frac{1}{N}\sum_{i=1}^{N} f_t(x_{0:t}^{(i)})\, w(x_{0:t}^{(i)})}{\frac{1}{N}\sum_{j=1}^{N} w(x_{0:t}^{(j)})} = \sum_{i=1}^{N} f_t(x_{0:t}^{(i)})\, \tilde{w}_t^{(i)}, \qquad \tilde{w}_t^{(i)} = \frac{w(x_{0:t}^{(i)})}{\sum_{j=1}^{N} w(x_{0:t}^{(j)})}. $$

SMC: Naïve Importance Sampling
• Problems:
  • Cannot be used for recursive estimation
    • One needs all of $y_{1:t}$ before estimating $p(x_{0:t} \mid y_{1:t})$
    • Need to re-evaluate whenever there is a new $y$
  • Does not scale

SMC: Sequential Importance Sampling
- If we want to do recursive evaluation, the proposal distribution needs to satisfy
\[ \pi(x_{0:t} \mid y_{1:t}) = \pi(x_{0:t-1} \mid y_{1:t-1})\, \pi(x_t \mid x_{0:t-1}, y_{1:t}), \]
- which indicates
\[ \pi(x_{0:t} \mid y_{1:t}) = \pi(x_0) \prod_{k=1}^{t} \pi(x_k \mid x_{0:k-1}, y_{1:k}). \]

SMC: Sequential Importance Sampling
• Then we have
\[ \tilde{w}_t^{(i)} \propto \tilde{w}_{t-1}^{(i)}\, \frac{p(y_t \mid x_t^{(i)})\, p(x_t^{(i)} \mid x_{t-1}^{(i)})}{\pi(x_t^{(i)} \mid x_{0:t-1}^{(i)}, y_{1:t})}. \]
• Important special case: the prior as the proposal,
\[ \pi(x_{0:t} \mid y_{1:t}) = p(x_{0:t}) = p(x_0) \prod_{k=1}^{t} p(x_k \mid x_{k-1}). \]

How to Derive the Formula
Given
\[ \tilde{w}_t^{(i)} = \frac{w(x_{0:t}^{(i)})}{\sum_{j=1}^{N} w(x_{0:t}^{(j)})}, \qquad w(x_{0:t}) = \frac{p(x_{0:t} \mid y_{1:t})}{\pi(x_{0:t} \mid y_{1:t})}, \]
\[ \pi(x_{0:t} \mid y_{1:t}) = \pi(x_{0:t-1} \mid y_{1:t-1})\, \pi(x_t \mid x_{0:t-1}, y_{1:t}), \qquad p(x_{0:t} \mid y_{1:t}) = p(x_{0:t-1} \mid y_{1:t-1})\, \frac{p(y_t \mid x_t)\, p(x_t \mid x_{t-1})}{p(y_t \mid y_{1:t-1})}, \]
we have
\[ w(x_{0:t}) = \frac{p(x_{0:t-1} \mid y_{1:t-1})\, p(y_t \mid x_t)\, p(x_t \mid x_{t-1}) / p(y_t \mid y_{1:t-1})}{\pi(x_{0:t-1} \mid y_{1:t-1})\, \pi(x_t \mid x_{0:t-1}, y_{1:t})} = w(x_{0:t-1}) \cdot \frac{p(y_t \mid x_t)\, p(x_t \mid x_{t-1})}{\pi(x_t \mid x_{0:t-1}, y_{1:t})} \cdot \frac{1}{p(y_t \mid y_{1:t-1})}. \]
The factor $1/p(y_t \mid y_{1:t-1})$ is the same for every particle, so it cancels when the weights are normalized, which gives the recursion above.

SMC: Sequential Importance Sampling
• Problem: as $t$ increases, the importance weights $\tilde{w}_t^{(i)}$ become more and more skewed
  • Almost all weights will become 0, except one
• Solution: the bootstrap filter

SMC: Bootstrap Filter
• Key idea: remove particles with low weights and keep particles with high weights
• Formally, replace
\[ \hat{P}_N(dx_{0:t} \mid y_{1:t}) = \sum_{i=1}^{N} \tilde{w}_t^{(i)}\, \delta_{x_{0:t}^{(i)}}(dx_{0:t}) \]
with
\[ P_N(dx_{0:t} \mid y_{1:t}) = \frac{1}{N} \sum_{i=1}^{N} N_t^{(i)}\, \delta_{x_{0:t}^{(i)}}(dx_{0:t}), \]
where $\delta$ is the Dirac measure.

SMC: Bootstrap Filter
\[ P_N(dx_{0:t} \mid y_{1:t}) = \frac{1}{N} \sum_{i=1}^{N} N_t^{(i)}\, \delta_{x_{0:t}^{(i)}}(dx_{0:t}), \qquad \sum_{i=1}^{N} N_t^{(i)} = N; \text{ if } N_t^{(j)} = 0, \text{ the particle } x_{0:t}^{(j)} \text{ dies.} \]
• How to select $N_t^{(i)}$?
  • Many methods
  • The most popular method: sampling $N$ times from $\hat{P}_N(dx_{0:t} \mid y_{1:t})$

SMC: Bootstrap Filter
Assume the proposal distribution is $p(x_{0:t})$.
1. Initialization, $t = 0$:
   - For $i = 1, \ldots, N$, sample $x_0^{(i)} \sim p(x_0)$, and set $t = 1$.
2. Importance sampling step:
   - For $i = 1, \ldots, N$, sample $\tilde{x}_t^{(i)} \sim p(x_t \mid x_{t-1}^{(i)})$ and set $\tilde{x}_{0:t}^{(i)} = (x_{0:t-1}^{(i)}, \tilde{x}_t^{(i)})$.
   - For $i = 1, \ldots, N$, evaluate the importance weights.
   - Normalize the importance weights.
3. Selection step:
   - Resample with replacement $N$ particles from the current particles according to the importance weights.
   - Set $t \leftarrow t + 1$ and go to step 2.
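Here is a compact NumPy sketch of the three steps above, instantiated for the nonlinear benchmark model used in the "Bootstrap Filter: Example" slide below (we read the slide's $N(0, 10)$ as "variance 10"; all function and variable names are our own assumptions):

```python
import numpy as np

def bootstrap_filter(ys, n_particles, rng):
    """Bootstrap particle filter for the nonlinear benchmark model below
    (assumed variances: x0 ~ N(0, 10), v ~ N(0, 10), w ~ N(0, 1))."""
    x = rng.normal(0.0, np.sqrt(10.0), n_particles)    # 1. initialize particles
    means = []
    for t, y in enumerate(ys, start=1):
        # 2a. Importance sampling step: propagate through the prior transition.
        x = (0.5 * x + 25.0 * x / (1.0 + x**2)
             + 8.0 * np.cos(1.2 * t)
             + rng.normal(0.0, np.sqrt(10.0), n_particles))
        # 2b. Weight by the likelihood p(y_t | x_t), with y_t = x_t^2/20 + w_t.
        log_w = -0.5 * (y - x**2 / 20.0) ** 2          # w ~ N(0, 1), up to a constant
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        means.append(np.sum(w * x))                    # filtered mean E[x_t | y_1:t]
        # 3. Selection step: resample N particles with replacement by weight.
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return means
```

This sketch resamples unconditionally after every step, matching the basic algorithm above; in practice one often resamples only when the effective sample size drops.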
More on the Bootstrap Filter
• Compared to sequential importance sampling, it basically
  • allows more variation under the prefixes with high weights
  • throws away prefixes with low weights
• Advantages:
  • Easy to implement
  • Efficient
  • Modular
  • Can be parallelized
  • Can be used for complex models

Bootstrap Filter: Example
\[ x_t = \frac{1}{2} x_{t-1} + 25\, \frac{x_{t-1}}{1 + x_{t-1}^2} + 8 \cos(1.2t) + v_t, \qquad y_t = \frac{x_t^2}{20} + w_t, \]
\[ x_1 \sim N(0, 10), \quad v_k \sim N(0, 10), \quad w_k \sim N(0, 1). \]
(Simulation figures from "An Introduction to Sequential Monte Carlo Methods" by Arnaud Doucet, Nando de Freitas, and Neil Gordon.)

SMC in Probabilistic Programming
• State-space structure: $x_0 \rightarrow x_1 \rightarrow x_2 \rightarrow \cdots$, with each observation $y_t$ attached to its state $x_t$
• The $x$'s are the program trace excluding conditions; the $y$'s are conditions

SMC in Probabilistic Programming
• We can evaluate intermediate densities using breakpoints
• We can use the prior distribution as the proposal distribution

More on Inference in Probabilistic Programming
• There are other general methods
  • Variational inference
• No silver bullet
  • The general problem is a counting problem
• Some researchers are exploring programmable inference frameworks:
  Gen: a general-purpose probabilistic programming system with programmable inference. Cusumano-Towner, M. F.; Saad, F. A.; Lew, A. K.; and Mansinghka, V. K. In PLDI 2019.

More on Inference in Probabilistic Programming
• Implementation issues
  • How can we avoid re-running programs?
    • Fork at sampling statements and conditions
    • Can be implemented through program transformation
• For a comprehensive understanding, read http://dippl.org/chapters/03-enumeration.html

Next Lecture
• Learning in probabilistic programming
{"Source-Url": "https://xinpl.github.io/slides/courses/pp/lecture9.pdf", "len_cl100k_base": 5861, "olmocr-version": "0.1.50", "pdf-total-pages": 54, "total-fallback-pages": 0, "total-input-tokens": 62607, "total-output-tokens": 7966, "length": "2e12", "weborganizer": {"__label__adult": 0.0005106925964355469, "__label__art_design": 0.0005440711975097656, "__label__crime_law": 0.0006852149963378906, "__label__education_jobs": 0.003204345703125, "__label__entertainment": 0.00012683868408203125, "__label__fashion_beauty": 0.0002455711364746094, "__label__finance_business": 0.0003311634063720703, "__label__food_dining": 0.0007448196411132812, "__label__games": 0.0010700225830078125, "__label__hardware": 0.001590728759765625, "__label__health": 0.00222015380859375, "__label__history": 0.0004270076751708984, "__label__home_hobbies": 0.0002655982971191406, "__label__industrial": 0.0010843276977539062, "__label__literature": 0.0004229545593261719, "__label__politics": 0.0004346370697021485, "__label__religion": 0.0008101463317871094, "__label__science_tech": 0.24267578125, "__label__social_life": 0.00023758411407470703, "__label__software": 0.0075531005859375, "__label__software_dev": 0.73291015625, "__label__sports_fitness": 0.0007152557373046875, "__label__transportation": 0.0008606910705566406, "__label__travel": 0.0003154277801513672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15859, 0.01411]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15859, 0.75592]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15859, 0.68096]], "google_gemma-3-12b-it_contains_pii": [[0, 327, false], [327, 436, null], [436, 766, null], [766, 875, null], [875, 1185, null], [1185, 1414, null], [1414, 1510, null], [1510, 1938, null], [1938, 2428, null], [2428, 2561, null], [2561, 2675, null], [2675, 2998, null], [2998, 3094, null], [3094, 3221, null], [3221, 3326, null], [3326, 3397, null], [3397, 3595, null], [3595, 4063, null], [4063, 4244, null], [4244, 4412, null], [4412, 4692, null], [4692, 4985, null], [4985, 5179, null], [5179, 5645, null], [5645, 6361, null], [6361, 6553, null], [6553, 7169, null], [7169, 7469, null], [7469, 7552, null], [7552, 7983, null], [7983, 8157, null], [8157, 8458, null], [8458, 9035, null], [9035, 9172, null], [9172, 9282, null], [9282, 9818, null], [9818, 10049, null], [10049, 10476, null], [10476, 10786, null], [10786, 12155, null], [12155, 12374, null], [12374, 12806, null], [12806, 13238, null], [13238, 13852, null], [13852, 14165, null], [14165, 14486, null], [14486, 14595, null], [14595, 14704, null], [14704, 14826, null], [14826, 14927, null], [14927, 15086, null], [15086, 15499, null], [15499, 15806, null], [15806, 15859, null]], "google_gemma-3-12b-it_is_public_document": [[0, 327, true], [327, 436, null], [436, 766, null], [766, 875, null], [875, 1185, null], [1185, 1414, null], [1414, 1510, null], [1510, 1938, null], [1938, 2428, null], [2428, 2561, null], [2561, 2675, null], [2675, 2998, null], [2998, 3094, null], [3094, 3221, null], [3221, 3326, null], [3326, 3397, null], [3397, 3595, null], [3595, 4063, null], [4063, 4244, null], [4244, 4412, null], [4412, 4692, null], [4692, 4985, null], [4985, 5179, null], [5179, 5645, null], [5645, 6361, null], [6361, 6553, null], [6553, 7169, null], [7169, 7469, null], [7469, 7552, null], [7552, 7983, null], [7983, 8157, null], [8157, 8458, null], [8458, 9035, null], [9035, 9172, null], [9172, 9282, 
null], [9282, 9818, null], [9818, 10049, null], [10049, 10476, null], [10476, 10786, null], [10786, 12155, null], [12155, 12374, null], [12374, 12806, null], [12806, 13238, null], [13238, 13852, null], [13852, 14165, null], [14165, 14486, null], [14486, 14595, null], [14595, 14704, null], [14704, 14826, null], [14826, 14927, null], [14927, 15086, null], [15086, 15499, null], [15499, 15806, null], [15806, 15859, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15859, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15859, null]], "pdf_page_numbers": [[0, 327, 1], [327, 436, 2], [436, 766, 3], [766, 875, 4], [875, 1185, 5], [1185, 1414, 6], [1414, 1510, 7], [1510, 1938, 8], [1938, 2428, 9], [2428, 2561, 10], [2561, 2675, 11], [2675, 2998, 12], [2998, 3094, 13], [3094, 3221, 14], [3221, 3326, 15], [3326, 3397, 16], [3397, 3595, 17], [3595, 4063, 18], [4063, 4244, 19], [4244, 4412, 20], [4412, 4692, 21], [4692, 4985, 22], [4985, 5179, 23], [5179, 5645, 24], [5645, 6361, 25], [6361, 6553, 26], [6553, 7169, 27], [7169, 7469, 28], [7469, 7552, 29], [7552, 7983, 30], [7983, 8157, 31], [8157, 8458, 32], [8458, 9035, 33], [9035, 9172, 34], [9172, 9282, 35], [9282, 9818, 36], [9818, 10049, 37], [10049, 10476, 38], [10476, 10786, 39], [10786, 12155, 40], [12155, 12374, 41], [12374, 12806, 42], [12806, 13238, 43], [13238, 13852, 44], [13852, 14165, 45], [14165, 14486, 46], [14486, 14595, 47], [14595, 14704, 48], [14704, 14826, 49], [14826, 14927, 50], [14927, 15086, 51], [15086, 15499, 52], [15499, 15806, 53], [15806, 15859, 54]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15859, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
3d0fb6cf76ece9d27938533b2335e7a107f68fc7
A Fundamental View on the Act of Modeling
H.A. (Erik) Proper, P. van Bommel, S.J.B.A. Hoppenbrouwers, Th.P. van der Weide
Institute for Computing and Information Sciences, Radboud University Nijmegen
Toernooiveld 1, 6525 ED Nijmegen, The Netherlands, EU
E.Proper@cs.ru.nl
November 13, 2006

Abstract

This paper is part of an ongoing research effort to better understand the role of models and modeling in the information system development life-cycle. During this life-cycle, several models are produced, ranging from high level sketches, via conceptual models, to source code. This paper is part of an ongoing research effort to better understand the act of modeling. We describe a formal framework by which the process of modeling can be regarded as involving the selection of more and more refined interpretations in terms of the underlying meta-model of the modeling language used. The resulting framework will be used to create a laboratory setup in which we can consequently more closely study (and support) modeling processes.

1 Introduction

Modeling is at the core of information systems engineering. In [Myl98] a distinction is made between usage world, subject world, system world and development world, when producing deliverables during information systems engineering. Understanding each of these worlds requires considerable modeling effort, be it to define the requirements on the system, or be it to produce the design of a system.

The work reported in this paper is part of an ongoing effort to better understand the act of modeling [HP05, HPW05b, HPW05c, HPW05d, PVH05, HPR05, PHV05, PW05, PHW05, BHPW06] in the context of information system engineering. One of our longer term goals is to turn the art of modeling into a science of modeling. This research effort is one of three focal areas in our research:

1. Syntax and semantics of modeling languages [BHW91, HW93, HPW93, PW94, BBMP95, CHP96, CP96, HVH97, BFW98, HPW05a].
2. The process of modeling [DFW96, FW04, BW04a, BW04b, BPH04, HBP05, PBH04, PH04, HP05, HPW05d, PW05, PHW05, BHPW06, HPW05c].
3. The use of models in information systems engineering [HPW05d, HP04, PVH05, VHP04, HPR05, PHV05].

In the past our focus was mainly on the formal definition of syntax and semantics of modeling languages. We have recently expanded this focus to include the process of modeling and the usage of models in information systems engineering. This expansion was inspired by a desire to better understand the modeling process itself, as well as the requirements placed on the languages used to express these models by the context in which they are to be used [PVH05]. The primary concern of this paper is therefore a further elaboration of a hypothesis put forward in [PHW05].

One can observe that many modeling techniques are in use to model different aspects of domains, such as processes, objects, information being processed, the flow of information, the flow of control, etc. Scholars and practitioners have produced numerous modeling techniques [Bub86, AW91, Avi95, BMS98]. The resulting plethora of techniques has, in the past, already been referred to as "a methodology jungle" [Avi95]. Each of these modeling techniques focuses on specific aspects of a domain, and is especially geared towards the representation, study, analysis or design of such aspects. Nevertheless, all of these techniques deal with facts about a domain describing how (from the perspective of a specific aspect) concepts in the domain relate to each other.
Put more operationally, we argue that any activity model, sequence diagram, information model, etc. has an accompanying domain model [BPH04, PBH04] of the underlying concepts and their relations. Such a domain model could be expressed in terms of a general purpose domain modeling language such as ORM [Hal01], but also using ontology modeling languages such as OWL [MH03] and KL-ONE [WS92]. This leads to the situation as depicted in Figure 1. On the right hand side we find the meta-models of the modeling techniques used, while on the left hand side we find the actual models. The 'XXX' represents an aspect of the domain that is being modeled. The 'XXX' model is a reinterpretation of the original model in terms of the refined 'XXX' meta-model.

To illustrate this point, consider the example depicted in Figure 2. In this example, we have used the ORM domain modeling technique [Hal01] to represent a general domain model of a small sample domain dealing with the involvement of people with a University department. The involvement starts with candidature, then might move on to the coworkership level, and will typically end in the alumnus status. In the example, we have (partially) re-interpreted the underlying domain model into two directions: a UML class diagram focussing on the core concepts in the domain, and a state-transition diagram focusing on the state changes of the involvement of people with departments.

As another example, consider the compacted version, as depicted in Figure 3, of the case study used in [PHW05]. This example focusses on workflow modeling and shows two interpretation steps. The first step, moving from A) to B), requires modelers to select which object types are really actor and actand types. The second interpretation step, from B) to C), can actually be done automatically given a pre-defined mapping between the meta-models of the modeling techniques involved. The modeler does not need to add additional information to the model. Note that the situation depicted in A) is not a static view on the domain. The arrows from 'fills in form' to 'examines', etc., show a temporal dependency between states, thus providing a flow of states and activities.

Figure 3: Activity modeling

In each of the interpretation steps, modelers need to make a choice of how to re-interpret (if at all!) specific concepts in the general domain model in terms of the modeling concepts in the refined meta-model. We argue that modeling can be regarded as a process of (iteratively!) refining one's view on the world in terms of more and more refined modeling concepts (the types in the meta-model). This process is driven by the motivations for producing the model in the first place. Using the framework presented, one could actually experiment with situations in which the meta-model is defined during the modeling process versus situations in which the meta-models are pre-defined and standards-based.

One may also argue that in practice, modelers will quite often directly produce UML class diagrams, workflow diagrams, etc. In our view, doing so leaves implicit numerous interpretive decisions about the domain. If one were to first produce a domain model as depicted in Figure 3, one could argue that the understanding of the domain being modeled would be deeper, providing a better base from which to then produce model C) via B). Note that it is not our goal to cast judgement on how best to model.
Our goal is rather to better understand the actual act of modeling, and as such, we do want to study how modelers implicitly or explicitly move from A) to C). The resulting framework will be integrated with the logbook perspective [BFW96, BW04b, HP05] on the modeling process to create a system that will allow us to conduct modeling experiments in a laboratory setting.

We have structured the remainder of this paper as follows. In Section 2 we briefly explore the notion of subjectivity in relation to modeling. Section 3 then focuses on hierarchies of modeling languages, i.e. meta-model hierarchies. Given such a hierarchy, Section 4 shows how hierarchies of models as depicted in Figure 3 can be represented formally.

2 Subjectivity in Modeling

The aim of this section is to define more precisely what we mean by the modeling of a domain, in other words, our fundamental way of thinking about modeling. In doing so, we will start by introducing a framework describing the essential processes that take place when an observer observes a domain. It is our assumption, based on the work of C.S. Peirce [Pei69], that observers perceive a universe and then produce a conception of that part they deem relevant. The conceptions harbored by an observer are impossible to communicate and discuss with other observers unless they are articulated somehow (the need for this ability in the context of information systems engineering is evident). In other words, a conception needs to be represented. Peirce argues that both the perception and conception of an observer are strongly influenced by their interest in the observed universe. This leads to the following set of definitions (also inspired by the ones provided in [FVV+98], which are based on the work by Peirce as well):

- **Universe** – the 'world' around the observer.
- **Observer** – an actor perceiving and conceiving the universe, using their senses.
- **Perception** – that which results, in the mind of an observer, when they observe the universe, using their senses.
- **Conception** – that which results, in the mind of an observer, when they interpret a perception of the universe.

Observers may zoom in on a particular part of the universe they observe, or to state it more precisely, they may zoom in on a particular part of their conception of the universe:

- **Domain of interest** – any 'part' or 'aspect' of a conception of the universe an observer may zoom in on.

Note that when observers zoom in on a domain of interest, they produce yet another conception. In the context of information systems engineering, observers may have different domains of interest depending on their concern with regards to the information system being engineered. For example, the operators who will be required to maintain a planned information system will regard this system in terms of the costs of keeping the system up and running, the costs and efforts involved in implementing the system, etc. Future users of the same planned system, however, will be more interested in the impact/support the system is likely to have on their work related tasks. In our effort to obtain a fundamental understanding of the act of modeling, we initially focus on situations where we only have one specific concern and associated domain of interest. In line with [FVV+98], we define a model to be a specific kind of conception:

- **Model** – a purposely abstracted and unambiguous conception of a domain of interest.
Conceptions that are harbored by an observer are impossible to communicate and discuss with other observers, unless they are articulated somehow. In other words, the conception needs to be represented:

**Representation** – the result of an observer representing a conception, using some language to express themselves.

The resulting situation is illustrated in Figure 4, showing how an observer, in observing the universe, has a conception, which may be represented in terms of a representation.

Figure 4: An observer observing a universe

We are now also in a position to define more precisely what we mean by modeling:

**Modeling** – The act of purposely forming a model from (what is conceived to be) a part of the universe, and representing the resulting model by means of some language and medium.

The same domain of interest may be regarded by different observers, which is bound to lead to different conceptions, depending on the specific observers. The fact that when referring to the same universe, people are likely to refer to different models is, as reported in e.g. [FVV+98], one serious cause for the current confusion in the development of information systems. People tend to think about a system as something that can be objectively determined [FVV+98], an assumption that is bound to lead to serious 'accidents'. However, at present our focus is on better understanding the act of modeling when only one observer is involved, which is difficult enough, as even one observer is not likely to behave like a monotonic function when modeling.

In the context of information systems engineering, observers will approach a domain with the aim of expressing the domain in terms of some set of modeling constructs, such as classes, activity (types), event (types), constraints, etc. The set of modeling constructs an observer is used to employ (or is trained to use) when modeling a domain will strongly influence his/her conceptions. For example, when viewing a domain of interest from the perspective of UML class diagrams, this is bound to lead to a different model than when the same domain is viewed from the perspective of UML sequence diagrams. To make this explicit, we therefore presume that when observers model a domain, they do so from a certain perspective; their Weltanschauung [WAA85]. Figure 3 also illustrates how an observer observes (a domain of interest within) a universe from the perspective of different meta-models ($M_1, \ldots, M_n$), leading to equally many models ($m_1, \ldots, m_n$) and model representations ($r_1, \ldots, r_n$).

The remainder of this paper is primarily concerned with the development of a precise understanding of the relationships between these meta-models, the corresponding models (or rather their representations), as well as their evolution during a modeling process. Here we will operate under the hypothesis that modeling can be viewed as an iterative process of:

- Defining an (unspecific) model of a domain using some suitable (a suitable, not necessarily the) generic meta-model, focussing on domain concepts and their relationships in a general sense. In the examples of the previous section, we used ORM (with temporal extensions) as an example of such a generic meta-model.
- Selecting more specific interpretations of the concepts identified in the initial model, using more refined meta-models. In the previous section we showed examples of interpretations in terms of a UML class diagram, a state-transition diagram, and a workflow model.
The latter step, selection of interpretation, is an essential aspect of our way of thinking with regards to modeling.

3 Meta-model Hierarchies

The foundation of our modeling framework is formed by a hierarchy of meta-models. The concept of a meta-model hierarchy is not new. It was already introduced in [OHFB92, FO94] as a way of comparing modeling techniques, and to some extent refined further in [FVV+98]. Our goal of viewing the act of modeling as a process of stepwise selection of interpretations over a hierarchy of meta-models is a way to operationalise the 'old' notion of a meta-model hierarchy.

A meta-model is seen as a formal system [Men87]. Such a system consists of (1) a signature that specifies its concepts, providing a base for the definition of well-formed formulae, and (2) a set of such well-formed formulae (also referred to as axioms) that are assumed/required to hold for concrete systems that realize the formal system. In this context we shall refer to the concepts of the formal system as the (modeling) types of the meta-model. We will denote a meta-model by its signature and its axioms: we will use $\langle T, A \rangle$ to denote the system with signature $T$ and axioms $A$.

Let $\mathcal{MT}$ be the set of all meta-types from some class of modeling techniques, $\mathcal{MA}$ be the set of all axioms, and $\mathcal{MM} \subseteq \mathcal{MT} \times \mathcal{MA}$ the set of all meta-models. We focus on meta-models that satisfy the following rules. Each meta-model is consistent, meaning that the axioms are not contradictory.

[M1] If $\langle T, A \rangle \in \mathcal{MM}$, then $A$ is a consistent set of well-formed formulae over $T$.

Distinct meta-models are required to have disjoint modeling types.

[M2] If $M_1 = \langle T_1, A_1 \rangle$ and $M_2 = \langle T_2, A_2 \rangle$, such that $M_1, M_2 \in \mathcal{MM}$, then:
$$M_1 \neq M_2 \Rightarrow T_1 \cap T_2 = \emptyset$$

This latter requirement is added to allow us to study relations between modeling concepts in more depth.

A model is regarded as an instantiation of a formal system; the associated meta-model. This model thus contains instantiations of the meta-types contained in that meta-model. Let $\mathcal{EL}$ be the set of all those instantiations, which are referred to as model elements. We define the possible interpretations of these elements in terms of the meta-types: $\mathcal{IN} = \mathcal{EL} \times \mathcal{MT}$. In other words, an interpretation is the combination of a model element and a meta-type. Since meta-models may contain sub-types, elements may be associated to multiple meta-types. If $m$ is a model with associated meta-model $M$, we will also say that $m$ is an $M$-model. An $M$-model $m$ can be regarded as a set of interpretations $m \subseteq \mathcal{IN}$ that meet the axioms of the meta-model. The set of valid $M$-models for a given meta-model $M = \langle T, A \rangle$ is therefore defined as:
$$\mathcal{M}(M) \triangleq \{ m \subseteq \mathcal{EL} \times T \mid m \models A \}$$
The set of interpretations fitting a meta-model is defined as:
$$I(M) \triangleq \bigcup \mathcal{M}(M).$$

The next step is to introduce hierarchies of meta-models. Such a hierarchy is composed of refinement relations between meta-models. Let $\mathcal{RF}$ be the set of possible refinement relations for the considered class of meta-models and let $\text{From}, \text{To} : \mathcal{RF} \rightarrow \mathcal{MM}$ be functions returning the start and destination meta-model of a refinement, respectively.
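To make these definitions concrete, here is a small Python sketch that treats a meta-model as a signature plus axioms and checks model validity in the sense of $\mathcal{M}(M)$. It is purely illustrative: the paper gives no implementation, and the ORM-flavored type names, the naming convention for elements, and the toy axiom are all our own assumptions.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Set, Tuple

Interpretation = Tuple[str, str]   # (model element, meta-type): an element of EL x MT
Axiom = Callable[[Set[Interpretation]], bool]

@dataclass(frozen=True)
class MetaModel:
    """A formal system <T, A>: a signature of meta-types plus a set of axioms."""
    types: FrozenSet[str]
    axioms: Tuple[Axiom, ...] = ()

    def is_valid_model(self, m: Set[Interpretation]) -> bool:
        """m is in M(<T, A>) iff it only uses types from T and satisfies every axiom."""
        return all(t in self.types for _, t in m) and all(ax(m) for ax in self.axioms)

# Toy axiom (our invention): every relationship element named 'a-r' must start
# from an element 'a' that is also interpreted as an object type.
def relationships_connect_objects(m: Set[Interpretation]) -> bool:
    objects = {e for e, t in m if t == "ObjectType"}
    return all(e.split("-")[0] in objects for e, t in m if t == "RelationshipType")

ORM_LIKE = MetaModel(frozenset({"ObjectType", "RelationshipType"}),
                     (relationships_connect_objects,))

m = {("Person", "ObjectType"), ("Department", "ObjectType"),
     ("Person-worksFor", "RelationshipType")}
assert ORM_LIKE.is_valid_model(m)        # m is a valid ORM_LIKE-model
```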
Then $\mathcal{RF}$, $\text{From}$ and $\text{To}$ together span a space in which we will be able to identify meta-model hierarchies to be used in modeling. We do require $\mathcal{RF}$ to be acyclic:

[M3] The graph spanned over $\mathcal{MM}$ by $\text{From}$ and $\text{To}$ is acyclic.

A specific meta-model hierarchy is a set of refinements, so we can define the set of possible meta-model hierarchies as $\mathcal{MH} \subseteq \wp(\mathcal{RF})$, where we do require:

[M4] If $R \in \mathcal{MH}$ then $R$ is a tree.

Let $\text{Top}(R)$ denote the top of such a tree. We will write $R_{\mathcal{MM}}$ as an abbreviation for the set of meta-models involved in $R$.

To really capture the notion of refinement between meta-models, we must be able to map models upward in the hierarchy. We therefore need a function that is able to ground models stated in a refined meta-model in terms of the more general meta-model:
$$\text{Ground} : \mathcal{RF} \rightarrow (\wp(\mathcal{IN}) \rightarrow \wp(\mathcal{IN}))$$
where we write $\text{Ground}_r$ for $\text{Ground}(r)$. In terms of the example shown in Figure 3, the grounding function would have to map any actor type and actand type in a workflow model onto an object type in an ORM model, and each activity type onto an ORM relationship type. The working of the grounding function is illustrated in Figure 6. Models are grounded by grounding the interpretations they are made of. Multiple models conforming to a refined meta-model may be grounded onto the same generalized model. For example, in Figure 3 we might have selected a person being examined to be an actand (i.e. passive) in the examination, rather than considering it to be an actor as well (as is currently shown in B). In either case, the grounding of model B) would still be the model shown in A).

For a given refinement $r$, the grounding function should limit itself to interpretations associated to the meta-models involved in the refinement:

[M5] $x \in \text{dom}(\text{Ground}_r) \Rightarrow x \subseteq I(\text{To}(r))$ and $y \in \text{ran}(\text{Ground}_r) \Rightarrow y \subseteq I(\text{From}(r))$

Empty models have an empty grounding:

[M6] $\text{Ground}_r(\emptyset) = \emptyset$

Even more, the grounding function should behave strictly monotonically with respect to inclusion of sets of interpretations:

[M7] $m_1 \subset m_2 \subseteq \mathcal{IN} \Rightarrow \text{Ground}_r(m_1) \subset \text{Ground}_r(m_2)$

where $\subset$ denotes the proper subset relation.

Figure 6: Grounding of models and interpretations

This allows us to ground any non-empty fragment of a re-interpreted model back to (a non-empty) fragment at the more generic level:

**Corollary 3.1** $m \neq \emptyset \Rightarrow \text{Ground}_r(m) \neq \emptyset$
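Continuing the sketch above, the grounding function for the workflow-to-ORM refinement of Figure 3 can be modeled as an element-wise map over interpretations. Again, the type names and the mapping are illustrative assumptions; element-wise grounding automatically satisfies [M6], and it satisfies the strictness of [M7] as long as distinct interpretations are not collapsed.

```python
from typing import Set, Tuple

Interpretation = Tuple[str, str]

# An assumed Ground_r for a workflow -> ORM refinement (cf. Figure 3):
# actor and actand types ground to ORM object types, activity types to
# ORM relationship types. Grounding a model = grounding its interpretations.
WORKFLOW_TO_ORM = {"ActorType": "ObjectType",
                   "ActandType": "ObjectType",
                   "ActivityType": "RelationshipType"}

def ground(m: Set[Interpretation]) -> Set[Interpretation]:
    return {(e, WORKFLOW_TO_ORM[t]) for e, t in m if t in WORKFLOW_TO_ORM}

b = {("clerk", "ActorType"), ("form", "ActandType"),
     ("examines", "ActivityType")}
assert ground(set()) == set()                      # [M6]: empty grounds to empty
assert ground(b) == {("clerk", "ObjectType"), ("form", "ObjectType"),
                     ("examines", "RelationshipType")}
```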
4 Model Hierarchies

In this section we extend the meta-model hierarchy of the previous section to a hierarchy of models over such meta-model hierarchies. First we follow the interpretation of a single model element in a hierarchy. When modeling, decisions are made pertaining to interpretations of the domain. These modeling decisions are almost as important as the resulting model. Let $\mathcal{MV}$ be a carrier set for motivations of such decisions; then we can define an interpretation hierarchy as a partial function
\[ h : \mathcal{MM} \rightharpoonup \wp^{+}(\mathcal{IN}) \times \mathcal{MV} \]
where $\wp^{+}(X) = \wp(X) \setminus \{\emptyset\}$. Let $\mathcal{IH}$ be the set of all such interpretation hierarchies:
\[ \mathcal{IH} \triangleq \mathcal{MM} \rightharpoonup \wp^{+}(\mathcal{IN}) \times \mathcal{MV} \]
If we are only interested in the set of interpretations, we will use:
\[ h!(M) \triangleq I \text{ such that } h(M) = \langle I, v \rangle \]

An interpretation hierarchy should follow a meta-model hierarchy. This is laid down in three rules. We consider $h$ to be an interpretation hierarchy fitting a meta-model hierarchy $R$, written as $h \in I(R)$, iff:

1. The first condition requires that an interpretation hierarchy can only contain interpretations for the meta-models present in $R$. Formally: $\text{dom}(h) \subseteq R_{\mathcal{MM}}$.
2. The second condition requires the top of the interpretation hierarchy to contain one interpretation only; the root. Formally: $|h!(\text{Top}(R))| = 1$.
3. The third condition requires the interpretation hierarchy to obey the grounding function. Formally this is enforced by:
\[ \forall r \in R \ [\text{Ground}_r(h!(\text{To}(r))) \subseteq h!(\text{From}(r))] \]

Note that in a refinement step, one is allowed to exclude elements from the original model. If we wanted to forbid this, the third condition would have to read:
\[ \forall r \in R \ [\text{Ground}_r(h!(\text{To}(r))) = h!(\text{From}(r))] \]

Two interpretation hierarchies are disjoint iff they do not overlap for any meta-model:
\[ h \otimes i \triangleq \forall M \in \mathcal{MM} \ [h!(M) \cap i!(M) = \emptyset] \]

A model hierarchy is a set $H$ of interpretation hierarchies. The set of possible model hierarchies is therefore given as:
\[ \mathcal{MH} \triangleq \wp(\mathcal{IH}) \]
If $H$ is a model hierarchy, then for any meta-model $M$, the complete model is defined as the union of the interpretations in the interpretation hierarchies (as illustrated in Figure 7):
\[ H!(M) \triangleq \bigcup_{h \in H} h!(M) \]
For a given meta-model hierarchy $R$, the set of valid model hierarchies consists of those model hierarchies $H$ such that:
\[ \forall M \in \text{dom}(H) \ [H!(M) \in \mathcal{M}(M)] \land \forall h, i \in H \ [h \neq i \Rightarrow h \otimes i] \]
The first condition requires that all models in the hierarchy conform to their respective meta-models, while the second condition requires the interpretation hierarchies to not overlap.

5 Conclusion

In this paper we have discussed a framework to study the act of modeling, where a modeling process is regarded as involving the selection of more and more refined interpretations in terms of the underlying meta-model of the modeling language used. The resulting framework will be used, in conjunction with the logbook system, to create a laboratory environment in which modeling experiments can be conducted. The logbook system [HP05] takes the view that a modeling process is a (controlled) dialogue between a domain expert, a modeling mediator and a model builder. This process is regarded as a questioning & answering process involving these three roles. When combined with the theory as presented in this paper, the goal of such a questioning & answering process can be made explicit as the creation of a model hierarchy on top of a pre-determined (dictated by the modeling goals at hand [PVH05, PHV05]) meta-model hierarchy.

In future versions of our framework we also intend to refine it such that we are able to deal with multiple views and concerns, as well as multiple (contradicting!) observers.
In the latter case we would even like to be able to log the negotiation that may have to take place in reconciling the different views held by different observers of the same domain.

References
{"Source-Url": "https://pms.cs.ru.nl/iris-diglib/src/getContent.php?id=2006-Hoppenbrouwers-ActOfModelling", "len_cl100k_base": 5774, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 35637, "total-output-tokens": 10654, "length": "2e12", "weborganizer": {"__label__adult": 0.0004699230194091797, "__label__art_design": 0.001422882080078125, "__label__crime_law": 0.00045990943908691406, "__label__education_jobs": 0.014892578125, "__label__entertainment": 0.00016546249389648438, "__label__fashion_beauty": 0.0003528594970703125, "__label__finance_business": 0.0013017654418945312, "__label__food_dining": 0.0006275177001953125, "__label__games": 0.0007715225219726562, "__label__hardware": 0.0009508132934570312, "__label__health": 0.0012178421020507812, "__label__history": 0.0007596015930175781, "__label__home_hobbies": 0.0002772808074951172, "__label__industrial": 0.0010881423950195312, "__label__literature": 0.0019989013671875, "__label__politics": 0.0004150867462158203, "__label__religion": 0.0008602142333984375, "__label__science_tech": 0.39404296875, "__label__social_life": 0.00031828880310058594, "__label__software": 0.0161895751953125, "__label__software_dev": 0.56005859375, "__label__sports_fitness": 0.0003197193145751953, "__label__transportation": 0.0010156631469726562, "__label__travel": 0.00028061866760253906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36818, 0.02829]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36818, 0.53206]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36818, 0.83183]], "google_gemma-3-12b-it_contains_pii": [[0, 2833, false], [2833, 4586, null], [4586, 6637, null], [6637, 10414, null], [10414, 13669, null], [13669, 16781, null], [16781, 19941, null], [19941, 23336, null], [23336, 25949, null], [25949, 29302, null], [29302, 32589, null], [32589, 35999, null], [35999, 36818, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2833, true], [2833, 4586, null], [4586, 6637, null], [6637, 10414, null], [10414, 13669, null], [13669, 16781, null], [16781, 19941, null], [19941, 23336, null], [23336, 25949, null], [25949, 29302, null], [29302, 32589, null], [32589, 35999, null], [35999, 36818, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36818, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36818, null]], "pdf_page_numbers": [[0, 2833, 1], [2833, 4586, 2], [4586, 6637, 3], [6637, 10414, 4], [10414, 13669, 5], [13669, 16781, 6], [16781, 19941, 7], [19941, 23336, 8], [23336, 25949, 9], [25949, 29302, 10], [29302, 32589, 11], [32589, 35999, 12], [35999, 36818, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36818, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
2e383a36cd1aabda15024b3ab7ae3d4159f7c5d2
Review
• PC vs. IPC?
• How to find $[[P(j)]]^{M,w,i,g,c}$?
• How to find $[[\forall x P(x)]]^{M,w,i,g,c}$?
• How to find $[[\exists x P(x)]]^{M,w,i,g,c}$?
• How to find $[[\Box P(x)]]^{M,w,i,g,c}$?
• How to find $[[\Diamond P(x)]]^{M,w,i,g,c}$?
• How to find $[[F P(x)]]^{M,w,i,g,c}$?
• How to find $[[P P(x)]]^{M,w,i,g,c}$?
• Predicate expressions: formalizing semantic relations and their representation
• Predicate names: lexical items; entities: variables (e.g. x, y, z)
  Dave eats a clam. ⇒ dave(x) & clam(y) & eats(x, y)

What's the difference?
- fred
- $\lambda x.\text{fred}(x)$
- $\lambda P.P(\text{fred})$

LAMBDA OPERATIONS

Motivation
• Sentence: subject + predicate
  • Something's shared between them
  • John is hungry: hungry(John)
    John sneezed: $P\,\text{sneeze}(John)$
    John might sneeze: $\Diamond F\,\text{sneeze}(John)$
• Complex NP: head noun + relative clause
  • Something's shared between them
  • the man who sneezed…: $\text{man}(x)\ \&\ P\,\text{sneeze}(x)$
    the man who Fred saw…: $\text{man}(x)\ \&\ P\,\text{see}(Fred, x)$
    the man who was given a ticket…: $\text{man}(x)\ \&\ \text{ticket}(y)\ \&\ \text{give}(z, y, x)$
• Lambdas allow us to specify what's shared where, and to keep available slots "open" until we're ready to fill them.

What is a lambda?
• $\lambda$
• Basic building block in several linguistic theories
  • We'll use it in two different theories this term
• Operator that associates and combines semantic items compositionally
  • Predicates, entities
  • Variables
• If $\psi$ is a well-formed formula and $x$ a variable, $\lambda x[\psi]$ is a Pred$_1$.
  • $\lambda x[\neg \text{married}(x)\ \&\ \text{male}(x)\ \&\ \text{adult}(x)]$

Lexical items and predication
• ...sneezed ⇒ $\lambda x.[\text{sneeze}(x)]$
• ...saw... ⇒ $\lambda y.\lambda x.[\text{see}(x, y)]$
• ...laughed and is not a woman ⇒ $\lambda x.[\text{laugh}(x)\ \&\ \neg\text{woman}(x)]$
• ...respects himself ⇒ $\lambda x.\text{respect}(x, x)$
• ...respects and is respected by... ⇒ $\lambda y.\lambda x.[\text{respect}(x, y)\ \&\ \text{respect}(y, x)]$

Why and what?
- Mechanism for spreading/collapsing information
- The $\lambda$-conversion operation
  - Syntax: $\lambda x$ prefixed to any wff, where $x$ is a variable
  - Semantics: $\lambda x[\varphi](t) \leftrightarrow \varphi[t/x]$, i.e. substitute $t$ for all occurrences of $x$
  - $\lambda$-reduction: left-to-right
  - $\lambda$-abstraction: right-to-left
\[ [[\lambda x[\psi]]]^{M,w,i,g} = \{ u \in U : [[\psi]]^{M,w,i,g[u/x]} = 1 \} \]
- Builds the set that constitutes the extension of the predicate

How lambdas operate
• Lambdas fill open predicates' variables with content
• John sneezed. ⇒ john′ and $\lambda x.[P\,\text{sneeze}'(x)]$; applying: $\lambda x.[P\,\text{sneeze}'(x)](\text{john}')$ ⇒ $P\,\text{sneeze}'(\text{john}')$

The basic op: $\lambda$-conversion
• In an expression $(\lambda x.W)(z)$, replace all occurrences of the variable $x$ in the expression $W$ with $z$.
• $(\lambda x.\text{hungry}(x))(\text{John}) \rightarrow \text{hungry}(\text{John})$
• $(\lambda x.[\neg\text{married}(x)\ \&\ \text{male}(x)\ \&\ \text{adult}(x)])(\text{John}) \rightarrow \neg\text{married}(\text{John})\ \&\ \text{male}(\text{John})\ \&\ \text{adult}(\text{John})$
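Since λ-conversion is exactly function application, Python's own `lambda` gives a quick executable analogy. This is our illustration, not the course's notation; predicates are modeled as functions into truth values (or tuples standing in for logical forms), and the toy extensions are assumptions.

```python
# Lambda-conversion as function application: (λx.W)(z) substitutes z for x in W.
hungry = lambda x: ("hungry", x)                 # λx.hungry(x)
assert hungry("john") == ("hungry", "john")      # -> hungry(john)

# Curried two-place predicate λy.λx.see(x, y), as in "...saw...":
see = lambda y: lambda x: ("see", x, y)
assert see("mary")("john") == ("see", "john", "mary")   # John saw Mary

# λx[¬married(x) & male(x) & adult(x)] builds a one-place predicate (a Pred1):
married = lambda x: x in {"bill"}                # toy extensions, our assumption
male    = lambda x: x in {"john", "bill"}
adult   = lambda x: True
bachelor = lambda x: (not married(x)) and male(x) and adult(x)
assert bachelor("john") and not bachelor("bill")
```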
Hints
• Work one step at a time
• Pay particular attention to scope in $\lambda$-expressions
• Special considerations (but don't worry about them for this class)
  • Intensionality is messy unless rigid designators are being considered
• Starting at p. 400 or so, the text uses tick/prime marks to differentiate semantic objects (predicate names, constants, etc.) from lexical objects, e.g. hungry′(fred′)

Reduce this
\[ (4)\ \lambda y[\lambda z[\lambda x[B(x) \rightarrow \lambda w[R(x, w)](j)] \land \lambda x[B(x) \lor Q(x)](z)](y)] \]

Let $M_8$ be such that the following hold:
(a) $U_8 = \{\text{Pavarotti}, \text{Bond}, \text{Loren}\}$
(b) $V_8(P) = \{\text{Pavarotti}, \text{Bond}\}$
(c) $V_8(m) = \text{Pavarotti}$
(d) $V_8(j) = \text{Bond}$
And let $g$ be an arbitrary value assignment.
(e) $[[\lambda x[P(x) \land \neg[x = m]]]]^{M_8,g} = \{u : [[P(x) \land \neg[x = m]]]^{M_8,g[u/x]} = 1\}$ By (11)
(f) $[[P(x) \land \neg[x = m]]]^{M_8,g[\text{Pavarotti}/x]} = 0$ By (a) to (d) and the semantics of PC
(g) $[[P(x) \land \neg[x = m]]]^{M_8,g[\text{Bond}/x]} = 1$ By (a) to (d) and the semantics of PC
(h) $[[P(x) \land \neg[x = m]]]^{M_8,g[\text{Loren}/x]} = 0$ By (a) to (d) and the semantics of PC
(i) $[[\lambda x[P(x) \land \neg[x = m]]]]^{M_8,g} = \{\text{Bond}\}$ By (e) to (h)
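The set produced in step (e) can be computed directly; the following Python sketch (ours, not the textbook's) just iterates the assignment $g[u/x]$ over $U_8$:

```python
U8 = {"Pavarotti", "Bond", "Loren"}
V8_P = {"Pavarotti", "Bond"}     # V8(P)
V8_m = "Pavarotti"               # V8(m)

# [[λx[P(x) ∧ ¬[x = m]]]]^{M8,g} = {u in U8 : [[P(x) ∧ ¬[x = m]]]^{M8,g[u/x]} = 1}
extension = {u for u in U8 if (u in V8_P) and not (u == V8_m)}
assert extension == {"Bond"}     # matches step (i)
```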
The new semantics
- Two components
  - Compositional translation into a logical calculus (representation)
  - Truth-conditional and model-theoretic evaluation
- The $\lambda$ operator brings them together
- The treatment is systematic, modular, compositional, principled

Syntax of $F_3$
a. i. $TP \rightarrow NP\ \bar{T}$  ii. $\bar{T} \rightarrow T\ VP$
b. i. $TP \rightarrow TP\ conj\ TP$  ii. $TP \rightarrow neg\ TP$
c. $VP \rightarrow V_t\ NP$
d. $VP \rightarrow V_i$
e. $VP \rightarrow V_{dt}\ NP\ PP[to]$
f. $T \rightarrow PAST,\ PRES,\ FUT$
g. $NP \rightarrow Det\ N_c$
h. $PP[to] \rightarrow to\ NP$
i. $Det \rightarrow the,\ a,\ every$
j. $N_p \rightarrow Pavarotti,\ Loren,\ Bond, \ldots, he_n, \ldots$
k. $N_c \rightarrow book,\ fish,\ man,\ woman, \ldots$
l. $V_i \rightarrow be\ boring,\ be\ hungry,\ walk,\ talk, \ldots$
m. $V_t \rightarrow like,\ hate,\ kiss, \ldots$
n. $V_{dt} \rightarrow give,\ show, \ldots$
o. $conj \rightarrow and,\ or$
p. $neg \rightarrow not$
q. $NP \rightarrow N_p$
r. $CP \rightarrow C\ TP$
s. $VP \rightarrow V_s\ CP$
t. $V_s \rightarrow believe,\ know,\ regret, \ldots$
u. $C \rightarrow that$

Translation rules (25):
a. i. $[_A\ B]' = B'$  ii. $[_A\ to\ B]' = B'$; example: $[to\ Lee]' = Lee'$
b. $[_{TP}\ NP\ \bar{T}]' = \bar{T}'(NP')$
c. $[_{VP}\ V\ NP\ PP]' = \lambda x[V'(x, NP', PP')]$; example: $[_{VP}\ introduce\ him_i\ to\ Lee]'$
d. $[_{VP}\ V\ NP]' = \lambda x[V'(x, NP')]$; example: $[_{VP}\ like\ Kim]' = \lambda x[\text{like}'(x, Kim')]$
f. $[_{CP}\ C\ TP]' = C'\ TP'$; example: $[that\ Lee\ smokes]' = {}^{\wedge}\text{smoke}'(Lee')$
g. $[_{X}\ TP]' = X'\ TP'$; example: $[PAST\ Lee\ smoke]' = P\,\text{smoke}'(Lee')$
h. $[NP_i\ TP]$ structures:
   i. if $NP_i = [every\ \beta]$, then $[NP_i\ TP]' = \forall x_i[\beta'(x_i) \rightarrow TP']$; example: $[every\ dog_i\ [t_i\ barks]]' = \forall x_i[\text{dog}'(x_i) \rightarrow \text{bark}'(x_i)]$
   ii. if $NP_i = [a\ \beta]$, then $[NP_i\ TP]' = \exists x_i[\beta'(x_i) \land TP']$; example: $[a\ dog_i\ [t_i\ barks]]' = \exists x_i[\text{dog}'(x_i) \land \text{bark}'(x_i)]$
   iii. if $NP_i = [the\ \beta]$, then $[NP_i\ TP]' = \exists x_i[\beta'(x_i) \land \forall y[\beta'(y) \rightarrow x_i = y] \land TP']$; example: $[the\ dog_i\ [t_i\ barks]]' = \exists x_i[\text{dog}'(x_i) \land \forall y[\text{dog}'(y) \rightarrow x_i = y] \land \text{bark}'(x_i)]$

(a) S-structure
\[ [_{TP}\ [_{NP}\ the\ fish]\ [_{\bar{T}}\ [_{neg}\ not]\ [_{VP}\ introduce\ Pavarotti\ to\ Loren]]] \]
(b) LF (numbered nodes)
\[ [_{TP[1]}\ [_{NP_2}\ [_{Det}\ the]\ [_{N[3]}\ fish]]\ [_{TP[2]}\ [_{NEG[4]}\ not]\ [_{TP[5]}\ [_{T[6]}\ PAST]\ [_{TP[7]}\ [_{NP[8]}\ e_2]\ [_{\bar{T}[9]}\ [_{VP[10]}\ [_{V[11]}\ introduce]\ [_{NP[12]}\ Pavarotti]\ [_{PP[13]}\ to\ [_{NP[14]}\ Loren]]]]]]]] \]
(c) Node-by-node translation (from the bottom up)
[11] ⇒ introduce′ By (25a)
[12] ⇒ Pavarotti′ By (25a)
[14] ⇒ Loren′ By (25a)
[13] ⇒ Loren′ By (25a.ii)
[10] ⇒ λx[introduce′(x, Pavarotti′, Loren′)] By (25c)
[9] ⇒ λx[introduce′(x, Pavarotti′, Loren′)] By (25a)
[8] ⇒ x₂ By (25a)
[7] ⇒ λx[introduce′(x, Pavarotti′, Loren′)](x₂) By (25b)
   ⇒ introduce′(x₂, Pavarotti′, Loren′) By λ-conversion
[6] ⇒ P By (25a)
[5] ⇒ P introduce′(x₂, Pavarotti′, Loren′) By (25g)
[4] ⇒ ¬ By (25a)
[2] ⇒ ¬P introduce′(x₂, Pavarotti′, Loren′) By (25g)
[3] ⇒ fish′ By (25a)
[1] ⇒ ∃x₂[fish′(x₂) ∧ ∀y[fish′(y) → y = x₂] ∧ ¬P introduce′(x₂, Pavarotti′, Loren′)] By (25h.iii)

Translating syntax to semantics
Analysis of (28b), "Every man is hungry or is boring"
(a) S-structure
\[ [_{S}\ [_{NP}\ every\ man]\ [_{VP}\ [_{VP}\ is\ hungry]\ or\ [_{VP}\ is\ boring]]] \]
(b) LF
\[ [_{S[1]}\ [_{NP_1}\ [_{Det}\ every]\ [_{N}\ man]]\ [_{S[2]}\ [_{NP[3]}\ e_1]\ [_{VP[4]}\ [_{VP[5]}\ is\ hungry]\ [_{conj}\ or]\ [_{VP[6]}\ is\ boring]]]] \]
(c) Compositional interpretation (each numbered node is associated with its translation)
i. [5] ⇒ hungry′
ii. [6] ⇒ boring′
iii. [3] ⇒ x₁
iv. [4] ⇒ [hungry′ ∨ boring′] By (32)
v. [2] ⇒ [hungry′ ∨ boring′](x₁) By (25b)
vi. [1] ⇒ ∀x₁[man′(x₁) → [hungry′ ∨ boring′](x₁)] By (25h.i)
   = ∀x₁[man′(x₁) → λy[hungry′(y) ∨ boring′(y)](x₁)] By (31b)
   = ∀x₁[man′(x₁) → [hungry′(x₁) ∨ boring′(x₁)]] By λ-conversion

Refining previous notions
• LF: syntactically-interpreted semantic form
• lf: logical interpretation of propositional content
  • Model-theoretic
• Syntactic framework: as before
• Semantic framework: as before, except that VPs become λ-expressions

Linguistic applications of $\lambda$
- Relative clauses
  - Assume the DS/SS dichotomy, transformational mapping, constraints
  - The fronted relative pronoun is (in some sense) the λ term; the derivation proceeds as usual
- Disjunction and conjunction
  - Combining a subject and a compound predicate

Linguistic applications of $\lambda$
- VP anaphora
  - Often ambiguous: sloppy/strict identity
  - Deletion vs. a generated empty category
  - Generality: require semantic identity
  - An empty "placeholder predicate" is substituted with a λ-expression
  - Very interesting scopal interactions, fine-grained predictions for acceptability

VISH and coordinated VPs
- Assume the subject originates in spec-VP at DS
- It moves to spec-TP (overtly, i.e. before SS)
- Every student is tired and didn't enjoy the show.
- The lambda opens, "saves" a slot in each conjunct

Relative clauses
- Head, complement
- A gap in the complement refers to the head
  - Can be subject, object, oblique
- Assume a wh- element in the appropriate slot at DS

Relative clauses
- Overt wh- movement to spec-CP
  - Leaves a trace behind
  - Refers to the head
  - (or, as here, Chomsky-adjoin)
- The lambda can bind the head to the trace
\[ [\text{whom}_2\ [\text{Mary likes } e_2]] \Rightarrow \lambda x_2\, \text{like}'(\text{Mary}', x_2) \]

Syntax and semantics of "Pavarotti likes a fish that Loren hates"
(a) D-structure
\[ [_S\ \text{Pavarotti}\ [_{VP}\ \text{likes}\ [_{NP}\ a\ [\text{fish}\ [_{CP}\ \text{that}\ [_S\ \text{Loren hates which}_1]]]]]] \]
(b) S-structure
\[ [_S\ \text{Pavarotti}\ [_{VP}\ \text{likes}\ [_{NP}\ a\ [\text{fish}\ [_{CP}\ \text{which}_1\ [_{CP}\ \text{that}\ [_S\ \text{Loren hates}\ e_1]]]]]]] \]
From (a) via wh-movement
(c) LF
\[ [[a\ [\text{fish}\ [_{CP}\ \text{which}_1\ [_{CP}\ \text{that}\ [_S\ \text{Loren hates}\ e_1]]]]]_2\ [_S\ \text{Pavarotti}\ [_{VP}\ \text{likes}\ [_{NP}\ e_2]]]] \]
From (b) via QR. (Numbered nodes: [1] the whole LF, [2] the matrix S, [3] the N′ combining fish with its CP, [4] the CP, [5] the embedded S.)
(d) Compositional interpretation
i. [5] ⇒ hate′(Loren′, x₁) By (25a, b, d)
ii. [4] ⇒ λx₁[hate′(Loren′, x₁)] By (55a)
iii. [3] ⇒ fish′ ∧ λx₁[hate′(Loren′, x₁)] By (55b)
   = λy[fish′(y) ∧ λx₁[hate′(Loren′, x₁)](y)] By (31a)
   = λy[fish′(y) ∧ hate′(Loren′, y)] By λ-conversion
iv. [2] ⇒ like′(Pavarotti′, x₂) By (25a, b, d)
v. [1] ⇒ ∃x₂[λy[fish′(y) ∧ hate′(Loren′, y)](x₂) ∧ like′(Pavarotti′, x₂)] By (25h.ii)
   = ∃x₂[fish′(x₂) ∧ hate′(Loren′, x₂) ∧ like′(Pavarotti′, x₂)] By λ-conversion
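As a last sanity check, the truth conditions derived in step v can be evaluated against a tiny model in Python. The model (which individuals are fish, who hates and likes whom) is entirely our own toy assumption:

```python
# Evaluate ∃x2[fish'(x2) ∧ hate'(Loren', x2) ∧ like'(Pavarotti', x2)]
# against a toy model: a domain plus extensions for each predicate.
U = {"Pavarotti", "Loren", "Bond", "sole", "trout"}
fish = {"sole", "trout"}
hate = {("Loren", "sole")}                 # Loren hates the sole
like = {("Pavarotti", "sole"), ("Pavarotti", "trout")}

truth = any(x in fish and ("Loren", x) in hate and ("Pavarotti", x) in like
            for x in U)
assert truth   # "Pavarotti likes a fish that Loren hates" is true in this model
```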
{"Source-Url": "http://linguistics.byu.edu/classes/Ling654dl/cm7.pdf", "len_cl100k_base": 5097, "olmocr-version": "0.1.49", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 29278, "total-output-tokens": 6297, "length": "2e12", "weborganizer": {"__label__adult": 0.001621246337890625, "__label__art_design": 0.005382537841796875, "__label__crime_law": 0.0016965866088867188, "__label__education_jobs": 0.1875, "__label__entertainment": 0.0016965866088867188, "__label__fashion_beauty": 0.00081634521484375, "__label__finance_business": 0.001964569091796875, "__label__food_dining": 0.002223968505859375, "__label__games": 0.005245208740234375, "__label__hardware": 0.0009775161743164062, "__label__health": 0.0021724700927734375, "__label__history": 0.0025272369384765625, "__label__home_hobbies": 0.0009794235229492188, "__label__industrial": 0.001883506774902344, "__label__literature": 0.180908203125, "__label__politics": 0.002864837646484375, "__label__religion": 0.004180908203125, "__label__science_tech": 0.2010498046875, "__label__social_life": 0.0024089813232421875, "__label__software": 0.0194244384765625, "__label__software_dev": 0.366943359375, "__label__sports_fitness": 0.0011243820190429688, "__label__transportation": 0.0033588409423828125, "__label__travel": 0.000942230224609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13597, 0.01318]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13597, 0.72587]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13597, 0.54005]], "google_gemma-3-12b-it_contains_pii": [[0, 528, false], [528, 629, null], [629, 647, null], [647, 1300, null], [1300, 1737, null], [1737, 2157, null], [2157, 2700, null], [2700, 2868, null], [2868, 3296, null], [3296, 3702, null], [3702, 3836, null], [3836, 4644, null], [4644, 4906, null], [4906, 7135, null], [7135, 9182, null], [9182, 10288, null], [10288, 10541, null], [10541, 10837, null], [10837, 11174, null], [11174, 11396, null], [11396, 11553, null], [11553, 11822, null], [11822, 13597, null]], "google_gemma-3-12b-it_is_public_document": [[0, 528, true], [528, 629, null], [629, 647, null], [647, 1300, null], [1300, 1737, null], [1737, 2157, null], [2157, 2700, null], [2700, 2868, null], [2868, 3296, null], [3296, 3702, null], [3702, 3836, null], [3836, 4644, null], [4644, 4906, null], [4906, 7135, null], [7135, 9182, null], [9182, 10288, null], [10288, 10541, null], [10541, 10837, null], [10837, 11174, null], [11174, 11396, null], [11396, 11553, null], [11553, 11822, null], [11822, 13597, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13597, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13597, 
null]], "pdf_page_numbers": [[0, 528, 1], [528, 629, 2], [629, 647, 3], [647, 1300, 4], [1300, 1737, 5], [1737, 2157, 6], [2157, 2700, 7], [2700, 2868, 8], [2868, 3296, 9], [3296, 3702, 10], [3702, 3836, 11], [3836, 4644, 12], [4644, 4906, 13], [4906, 7135, 14], [7135, 9182, 15], [9182, 10288, 16], [10288, 10541, 17], [10541, 10837, 18], [10837, 11174, 19], [11174, 11396, 20], [11396, 11553, 21], [11553, 11822, 22], [11822, 13597, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13597, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
3bdfff41c9761ee6aef08e4c93a501f828ddf904
Lecture 18 – Lists recap, Assignment 3

At the end of this lecture, students will be able to:
- understand the Assignment 3 requirements

Lists Recap

Lists
- Elements are separated by commas and enclosed in square brackets
- An ordered sequence of items of any type

Create Lists
- an empty list: `list1 = []` or `list1 = list()`
- a list of ints: `list2 = [2, 3, 4]`
- a list of strings: `list3 = ["red", "blue"]`
- an integer list using the range function: `list4 = list(range(3, 6))`
- a list can include mixed types: `[4, True, "Test", 34.8]`, `[1, 2, "jim"]`

Lists Recap - len() function

The elements of a list are the individual items in a list. The len() function can be used to get the length of a list.

```python
my_list = [10, 20, 30, 40, 50]
print(len(my_list))
```

```
5
```

Specific elements of a list can be accessed using an integer index which indicates the position of the element in the list (starting from position 0).

Lists Recap - accessing list elements

Specific elements in a list can be manipulated using square bracket notation with the index number of the element to be accessed.

```python
my_list = [10, 20, 30, 40, 50]
print(my_list[2])
my_list[3] = my_list[1] + my_list[len(my_list) - 1]
print(my_list[0], my_list[3])
```

Writing `len(my_list) - ...` to access elements from the end of a list can be avoided, as shown next.

Lists Recap - accessing list elements

The elements of a list can be accessed from the end of the list by using a negative index value.

```python
my_list = [10, 20, 30, 40, 50]
print(my_list[-4])
my_list[-3] = my_list[-1] + my_list[-2]
print(my_list[-3], my_list[1], my_list[-5])
```

```
20
90 20 10
```

Index out of Range - IndexError

Warning! If you try to access an element that does not exist, Python will throw an error!

```python
my_list = [10, 20, 30, 40, 50]
print(my_list[5])   #NO! Element at index 5 does not exist
print(my_list[-6])  #NO! Element at index -6 does not exist
```

```
IndexError: list index out of range
```

Lists Recap - the 'in' Operator (membership)

The `in` operator returns a boolean. It returns True if the value (on the left hand side of the `in` operator) is an element of the list. Otherwise the `in` operator returns False.

```python
my_list = [10, 20, 30, 40, 50]
result1 = 100 in my_list
print(result1)
print(30 in my_list)
print(40 not in my_list)
```

```
False
True
False
```

Lists Recap - visiting each element in a list (iteration)

We can iterate through all the elements of a list, in order, using a for ... in loop, e.g.,

```python
my_list = [30, 20, 10, 20, 40, 30]
count = 0
for element in my_list:
    if element > count:
        count = count + 10
print(count)
```

```
30
```

```python
my_list = [10, 20, 30, 40, 50]
total = 0
for element in my_list:
    if element % 4 == 0:
        total = total + element
print(total)
```

```
60
```

Updating the elements of a list

The values in the elements of a list can be visited and updated using a for … in range(...) loop, e.g.,

```python
my_list = [10, 20, 30, 40, 50]
for index in range(len(my_list)):
    if index % 2 == 0:
        my_list[index] = my_list[index] + 5
    else:
        my_list[index] = my_list[index] + 10
print(my_list)
```

```
[15, 30, 35, 50, 55]
```

```python
my_list = [10, 20, 30, 40, 50]
for index in range(len(my_list)):
    if my_list[index] % 4 == 0:
        my_list[index] = my_list[index] + 1
print(my_list)
```

```
[10, 21, 30, 41, 50]
```

List methods

```python
position = a_list.index(...)
element = a_list.pop()
element = a_list.pop(...)
a_list.insert(..., ...)
a_list.append(...)
a_list.reverse()
a_list.sort()
```

```python
my_list = []
my_list.append(4)
my_list.append(3)
my_list.append(21)
my_list.insert(2, 1)
my_list.insert(0, 2)
value = my_list.pop(1)
my_list.append(value + 3)
my_list.sort()
my_list.reverse()
my_list.pop()
if 7 in my_list:
    pos = my_list.index(7)
    my_list.append(pos)
else:
    pos = my_list.pop()
print(my_list, "pos:", pos)
```
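Stepping through the code above, the list evolves as shown in the comments below; the state comments and the final output line are an editorial addition, worth verifying by running the snippet.

```python
my_list = []               # []
my_list.append(4)          # [4]
my_list.append(3)          # [4, 3]
my_list.append(21)         # [4, 3, 21]
my_list.insert(2, 1)       # [4, 3, 1, 21]
my_list.insert(0, 2)       # [2, 4, 3, 1, 21]
value = my_list.pop(1)     # value = 4, list is [2, 3, 1, 21]
my_list.append(value + 3)  # [2, 3, 1, 21, 7]
my_list.sort()             # [1, 2, 3, 7, 21]
my_list.reverse()          # [21, 7, 3, 2, 1]
my_list.pop()              # [21, 7, 3, 2]
if 7 in my_list:           # True
    pos = my_list.index(7)    # pos = 1
    my_list.append(pos)       # [21, 7, 3, 2, 1]
else:
    pos = my_list.pop()
print(my_list, "pos:", pos)   # prints: [21, 7, 3, 2, 1] pos: 1
```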
Complete the `remove_multiples()` function from Lecture 16, Slide 18.

```python
def remove_multiples(number_list, multiples_of):
    # One possible in-place solution (the slide leaves this as an exercise):
    # iterate over a copy so that removing elements does not skip any.
    for value in number_list[:]:
        if value % multiples_of == 0:
            number_list.remove(value)

def main():
    numbers = [25, 5, 9, 10, 15, 8]
    print(numbers)
    remove_multiples(numbers, 5)  # remove multiples of 5
    print("Numbers left", numbers)

main()
```

Output:
```
[25, 5, 9, 10, 15, 8]
Numbers left [9, 8]
```

CodeRunner Assignments

CompSci 101 has 5 assignments, in total worth 11% of your final mark. For two of these five assignments (a total of 4%), you are required to use the CodeRunner tool. The CodeRunner tool is designed to help you practise by presenting you with a set of coding exercises. CodeRunner is part of the Moodle learning system:
https://www.coderunner2.auckland.ac.nz/moodle/

Information about using CodeRunner is available on the CompSci 101 assignments web page:
https://www.cs.auckland.ac.nz/courses/compsci101s1c/assignments/

Step 1: make sure you can log into CodeRunner2

CompSci 101 Assignment 3
Due: 4:30pm, May 8
Worth: 2% of your final mark
Topic: lists
This assignment is marked out of 20

Assignment 3 – Complete 7 functions

For Assignment 3, I have posted a program containing the skeleton and testing code for the 7 assignment questions. Download this program from the CompSci 101 assignments website:
https://www.cs.auckland.ac.nz/courses/compsci101s1c/assignments/

Develop the solution to each function in your program. Once you are happy that your function executes correctly, submit the whole function to CodeRunner2. You will receive immediate feedback from CodeRunner2 telling you if you have passed the tests for that question. You can submit as many times as you like. You can submit one function at a time.

A3 Q1 - get_funny_average()

**parameter** - a list of numbers
**returns** – the average (to one decimal place) of the positive, non-zero elements, excluding one instance each of the minimum and the maximum positive elements (the example output below shows 0 being returned when fewer than three positive elements are present).

```python
print("1. Funny average: ", get_funny_average([ 3, 2, 0, 25, 1]))
print("2. Funny average: ", get_funny_average([-6, -32, 2, 0, -51, 1, 0, 0]))
print("3. Funny average: ", get_funny_average([56, 32, 2, 22, 22]))
print("4. Funny average: ", get_funny_average([-56, -3, 0, -21, 0, 0, 5]))
print("5. Funny average: ", get_funny_average([56, 3, 2, 0, 251, 1, 41, 22]))
print("6. Funny average: ", get_funny_average([-56, -3, 2, 0, -251, 1, -41, 0]))
print("7. Funny average: ", get_funny_average([]))
```

|   | Funny average |
|---|---------------|
| 1 | 2.5           |
| 2 | 0             |
| 3 | 25.3          |
| 4 | 0             |
| 5 | 24.8          |
| 6 | 0             |
| 7 | 0             |
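A sketch of one possible implementation, consistent with the example output above; the behaviour for fewer than three positive elements is inferred from the table, so treat it as an assumption rather than the reference solution.

```python
def get_funny_average(numbers):
    # Keep only the strictly positive elements.
    positives = [n for n in numbers if n > 0]
    if len(positives) <= 2:
        # Nothing would be left once the min and max are excluded.
        return 0
    positives.remove(min(positives))  # drop one instance of the minimum
    positives.remove(max(positives))  # drop one instance of the maximum
    return round(sum(positives) / len(positives), 1)

print(get_funny_average([56, 3, 2, 0, 251, 1, 41, 22]))  # 24.8
```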
Score:", get_memory_score([1, 2, 3, 4, 5, 6, 7, 8, 9])) random_nums5 = [7, 5, 8, 6, 3, 5, 9, 7, 9, 7, 5, 6, 4, 1, 7, 4, 6, 5, 8, 9, 4, 8, 3, 0, 3] print("5. Score:", get_memory_score(random_nums5)) ``` Called number 3: Score: 0, Numbers in memory: [3] Called number 4: Score: 0, Numbers in memory: [3, 4] Called number 3: Score: 1, Numbers in memory: [3, 4] Called number 0: Score: 1, Numbers in memory: [3, 4, 0] Called number 7: Score: 1, Numbers in memory: [3, 4, 0, 7] Called number 4: Score: 2, Numbers in memory: [3, 4, 0, 7] Called number 5: Score: 2, Numbers in memory: [3, 4, 0, 7, 5] Called number 2: Score: 2, Numbers in memory: [4, 0, 7, 5, 2] Called number 1: Score: 2, Numbers in memory: [0, 7, 5, 2, 1] Called number 3: Score: 2, Numbers in memory: [7, 5, 2, 1, 3] A3 Q3 - get_most_recent parameters – a list of numbers and a list of numbers to test returns – the number from the second list which occurs most recently in the first list. Most recent is the last element of the list. ```python print("1.", get_most_recent([0, 1, 2, 0, 3, 4, 1], [2, 0, 3])) print("2.", get_most_recent([0, 1, 2, 0, 3, 4, 1], [0, 7, 2])) print("3.", get_most_recent([0, 1, 2, 8, 9, 0, 3, 4, 6], [1, 9, 2, 8])) print("4.", get_most_recent([4, 1, 4, 5, 4, 1], [0, 7, 3])) print("5.", get_most_recent([8, 1, 2, 0, 8, 4, 1], [8, 7, 3])) print("6.", get_most_recent([], [8, 1, 0, 3])) numbers_in_order = [1, 1, 1, 0, 1, 0, 2, 2, 1, 2, 0, 1, 2, 0, 3, 4, 1, 2, 4, 0, 3, 8, 8, 5, 5] print("7.", get_most_recent(numbers_in_order, [1, 0, 3, 4])) ``` A3 Q4 - is_legitimate_code() **parameter** – a string **returns** – a boolean indicating whether the parameter string denotes a legitimate code or not. The first three lines of the function are: ```python code_letters = ["S", "B", "N", "T", "P"] min_for_each_letter = [1, 3, 4, 0, 3] #inclusive max_for_each_letter = [7, 9, 6, 7, 5] #inclusive ``` ```python print("1.", is_legitimate_code('B747346')) print("2.", is_legitimate_code('N 444 454')) print("3.", is_legitimate_code('T 400 4854')) print("4.", is_legitimate_code('S 444S454')) print("5.", is_legitimate_code('P ')) print("6.", is_legitimate_code('T 0 ')) ``` 1. True 2. True 3. False 4. False 5. False 6. True A3 Q5 – get_longest_word() **parameter** – a list of strings **returns** – the longest word in the parameter list which has at least 6 letters. (If two or more are the longest then last on the right.) ``` print("1.", get_longest_word(['Melissa', 'Jessie', 'Kath', 'Amity', 'Raeann'])) print("2.", get_longest_word(['Jo', 'Jessie', 'Penelope', 'Jin', 'Raeann', 'Pamelita'])) print("3.", get_longest_word(['Alan', 'Jess', 'Amity', 'Rosalie', 'Rosetta'])) print("4.", "***", get_longest_word(['Jo', 'Jai', 'Jen', 'Jing', 'Joey', 'Jess']), "***", sep = "") print("5.", "***", get_longest_word([]), "***", sep = "") print("6.", "***" + get_longest_word([""]) + "***") ``` 1. Melissa 2. Pamelita 3. Rosetta 4. ******* 5. ******* 6. 
In a Python program:
- a `for ... in` loop can be used to access each individual element of a list
- a `for index in range()` loop can be used to make changes to individual elements of a list

Examples of Python features used in this lecture

```python
def change_list(a_list):
    # double every element of the list, in place
    number_of_elements = len(a_list)
    for i in range(number_of_elements):
        a_list[i] = a_list[i] * 2

def use_lists(list1, list2):
    # build a new list of the pairwise sums of list1 and list2
    list3 = []
    for index in range(len(list1)):
        list3 = list3 + [list1[index] + list2[index]]
    return list3
```
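As a quick check of the two functions above (an editorial addition, easily verified by running it):

```python
values = [1, 2, 3]
change_list(values)
print(values)                               # [2, 4, 6]
print(use_lists([1, 2, 3], [10, 20, 30]))   # [11, 22, 33]
```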
{"Source-Url": "https://www.cs.auckland.ac.nz/courses/compsci101s1c/lectures/Adriana/L18_1Up.pdf", "len_cl100k_base": 4319, "olmocr-version": "0.1.42", "pdf-total-pages": 24, "total-fallback-pages": 0, "total-input-tokens": 41074, "total-output-tokens": 5527, "length": "2e12", "weborganizer": {"__label__adult": 0.0006475448608398438, "__label__art_design": 0.0008525848388671875, "__label__crime_law": 0.0005159378051757812, "__label__education_jobs": 0.047271728515625, "__label__entertainment": 0.00014901161193847656, "__label__fashion_beauty": 0.00029397010803222656, "__label__finance_business": 0.00028634071350097656, "__label__food_dining": 0.0012683868408203125, "__label__games": 0.0014324188232421875, "__label__hardware": 0.0012073516845703125, "__label__health": 0.0007481575012207031, "__label__history": 0.0004630088806152344, "__label__home_hobbies": 0.00034999847412109375, "__label__industrial": 0.0007176399230957031, "__label__literature": 0.000629425048828125, "__label__politics": 0.0003819465637207031, "__label__religion": 0.0009431838989257812, "__label__science_tech": 0.00958251953125, "__label__social_life": 0.0003654956817626953, "__label__software": 0.00794219970703125, "__label__software_dev": 0.921875, "__label__sports_fitness": 0.0007576942443847656, "__label__transportation": 0.0008220672607421875, "__label__travel": 0.0004868507385253906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 12034, 0.07678]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 12034, 0.78421]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 12034, 0.62811]], "google_gemma-3-12b-it_contains_pii": [[0, 39, false], [39, 137, null], [137, 593, null], [593, 957, null], [957, 1375, null], [1375, 1682, null], [1682, 2009, null], [2009, 2388, null], [2388, 2861, null], [2861, 3416, null], [3416, 3928, null], [3928, 4321, null], [4321, 4917, null], [4917, 5043, null], [5043, 5676, null], [5676, 6578, null], [6578, 7698, null], [7698, 8457, null], [8457, 9135, null], [9135, 9899, null], [9899, 10426, null], [10426, 11505, null], [11505, 11697, null], [11697, 12034, null]], "google_gemma-3-12b-it_is_public_document": [[0, 39, true], [39, 137, null], [137, 593, null], [593, 957, null], [957, 1375, null], [1375, 1682, null], [1682, 2009, null], [2009, 2388, null], [2388, 2861, null], [2861, 3416, null], [3416, 3928, null], [3928, 4321, null], [4321, 4917, null], [4917, 5043, null], [5043, 5676, null], [5676, 6578, null], [6578, 7698, null], [7698, 8457, null], [8457, 9135, null], [9135, 9899, null], [9899, 10426, null], [10426, 11505, null], [11505, 11697, null], [11697, 12034, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 12034, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 12034, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 12034, null]], "pdf_page_numbers": [[0, 39, 1], [39, 137, 2], [137, 593, 3], [593, 957, 4], [957, 1375, 5], [1375, 1682, 6], [1682, 2009, 7], [2009, 2388, 8], [2388, 2861, 9], [2861, 3416, 10], [3416, 3928, 11], [3928, 4321, 12], [4321, 4917, 13], [4917, 5043, 14], [5043, 5676, 15], [5676, 6578, 16], [6578, 7698, 17], [7698, 8457, 18], [8457, 9135, 19], [9135, 9899, 20], [9899, 10426, 21], [10426, 11505, 22], [11505, 11697, 23], [11697, 12034, 24]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 12034, 0.02932]]}
olmocr_science_pdfs
2024-11-22
2024-11-22
f8fc7676ac52b78e8c32ba10ef889ef1858b5572
polyLarva: Technology Agnostic Runtime Verification

Christian Colombo, Adrian Francalanza, Ruth Mizzi, Gordon J. Pace
Department of Computer Science, University of Malta
{christian.colombo|adrian.francalanza|rmiz0015|gordon.pace}@um.edu.mt

With numerous specialised technologies available to industry, it has become increasingly common for computer systems to be composed of heterogeneous components, built using different technologies and languages. While this enables developers to use the appropriate technologies for specific contexts, it becomes more challenging to ensure the correctness of the overall system. In this paper we propose a framework to enable extensible, technology-agnostic runtime verification, and we present an extension of polyLarva, a runtime-verification tool able to handle the monitoring of heterogeneous-component systems. The approach is then applied to a case study with C and Java components.

1 Introduction

Component-based approaches to software and service design are becoming more widespread, allowing heterogeneous systems to be compartmentalised into components; these components encapsulate their internal behaviour while revealing uniform interfaces through which other components may interact. This approach to system organisation has facilitated the construction of large complex systems, where each component may internally employ different technologies, from operating systems and hardware to programming languages. However, the sheer complexity of the systems constructed, together with the decentralised way in which these heterogeneous systems are developed, creates new potential points of failure. This, in turn, increases the need for some form of correctness verification.

Runtime monitoring [12] has been shown to be a viable solution to the verification of large complex systems: by limiting the analysis to the actual current runtime path, the approach is tractable (for a reasonable choice of specification logic), while still guaranteeing detection (albeit at runtime) of property violations. Especially in the context of open systems, where correctness is also partly dependent on interaction with the environment, this approach has proved to be viable and scalable to real-life industrial systems, e.g. [3, 1].

An important research question in the field of runtime verification has been how to extend the approach to handle non-monolithic systems. Much of the work so far has been limited to a high-level view of such systems, treating components as black boxes and focussing instead on the verification of the component interactions and on strategies for engineering monitoring in such distributed settings. To date, there has been little work that attempts to push verification inside the components so as to verify their inner workings as part of the wider system. This poses additional challenges to runtime verification, both practical and theoretical, such as the need for a standardised framework for generating events to be monitored (irrespective of the underlying technology used by the monitored components), or the adaptation of the monitoring logic (specifying correctness properties) to the distinct underlying technologies used by different components, which may invariably lead to different semantic interpretations of said logics.
A typical example would be an online betting system, which is inherently component-based (e.g., the web-portal subsystem, the billing subsystem, the fraud-detection subsystem, etc.), and where each component may use different technologies and resources. Typical operations in this system, such as an online betting transaction, may go through different components, ranging from the company's transaction database, to an internal certified logging component kept for legal purposes, to an external bank system. Whereas existing verification technologies would typically only be able to monitor external units as black boxes, and thus only refer to the interactions taking place, consistency and correctness properties of such betting transactions may depend on the inner workings of internal components. For instance, one may want to ensure that the value written to the certified logging component matches that written to the company's transaction database. Such component-spanning properties have to be instrumented on different components, necessitating direct interaction with the underlying technologies of each component. Despite this, the present lack of tools supporting such technology heterogeneity is a major stumbling block towards a full adoption of runtime verification techniques for component-based systems.

In this paper we present a novel runtime verification framework which supports the monitoring of component-based systems, possibly built using different technologies. The approach has been embodied as an extension to the runtime verification tool polyLARVA [2], to support the generation and instrumentation of separate monitoring code for multiple components in a system, from a single property specification. Furthermore, the property specification language has been designed to be technology- and programming-language-agnostic, and hence allows reusability across different technologies. To evaluate the proposed component-based monitoring framework, polyLARVA has been applied to OpenEmm, an open-source, web-based e-mail marketing tool.

In the rest of the paper we start by discussing the various design options when it comes to runtime verifying component-based systems in Section 2. In Section 3 these issues are then discussed in the context of multi-technology component-based systems where the monitor needs to access the components' internal states. Subsequently, Sections 4 and 5 describe how we support our design decisions through the extensible monitoring framework polyLARVA. In Section 6 we give an account of a case study, and we discuss related work and conclusions in the last section.

2 Component-Based System Monitoring

From a monitoring perspective, component-based systems pose particular challenges which go beyond those presented by monolithic systems. While the well-behaviour of individual components can be dealt with locally, properties and specifications which span across components and express the correctness of the system as a whole raise various issues, both pragmatic and conceptual. The cross-cutting nature of monitoring, which may require access to the internal state of different components, raises issues regarding the architecture of the monitors with respect to the system components, and practical challenges in systems with components built using different technologies. Two important issues which influence choices in the monitoring architecture are:
Orchestrated vs. choreographed monitoring: Different approaches have been proposed in the literature regarding the locality (physical or conceptual) of the monitors with respect to the different components in a system. In an orchestrated monitoring approach (e.g. see [8, 13, 11]), the monitor is a separate unit from the rest of the system, but with privileges that enable it to listen to the behaviour of the other parts of the system and pass judgement about them. On the other hand, a choreographed approach (e.g. see [16, 17, 15]) extends the different parts of the system such that each part locally eavesdrops on its own behaviour, ensuring correct behaviour and communicating with other monitors (belonging to other parts of the system) whenever synchronisation or information regarding the other parts is required. Approaches to splitting monitors in this manner, whether statically [16] or dynamically [7], have been proposed. Although most of this work focuses on distributed systems, the classification applies equally well to general component-based systems. Orchestration is usually easier to instrument and set up, but adds dependencies between components which may not be desirable. On the other hand, a choreographed approach respects locality, but at the cost of more complex instrumentation of monitoring code and specification slicing. Especially in the case of heterogeneous component-based systems with components using different technologies, local instrumentation can prove particularly challenging, since the monitoring tool has to be able to instrument code built in different languages.

Intrusiveness of monitoring: Another, independent choice is the level of abstraction at which the monitors can eavesdrop on the behaviour of the system. Much work, especially in the setting of services, focuses on specifications over the messages passed between components [8, 13, 11]. This black-box approach ensures that the instrumentation of monitors is relatively straightforward, since they only need to hook into the communication channels and process the behaviour appropriately. However, the approach has serious limitations when specifications refer to the components' local states or internal events, since without re-engineering to expose such states and events their monitoring would not be possible. In a component-based system setting, such properties would, for instance, be required to ensure data consistency across components. A more intrusive approach, enabling direct access to the state of the system components, poses challenges, especially when the components are built using different technologies.

Note that these two issues are largely independent of each other. For instance, both black-box orchestrated monitoring [8, 13] and intrusive orchestrated monitoring [11] approaches have been investigated in the literature. In the next section we investigate how the challenges in extending the existing approaches to technology-diverse component-based systems can be addressed.

3 Intrusive Monitoring of Component-Based Systems

Although specifications which can be checked by eavesdropping on component communication suffice for a number of applications, there are situations where more intrusive monitoring is required. In the case of systems built from components using different technologies this poses various challenges.

3.1 Monitoring Architecture

Supporting technology agnosticism using an intrusive choreographed approach limits extensibility, since each possible technology would have to be coupled with every other one.
In contrast, with an orchestrated approach, adding support for a new technology only requires adding support for instrumenting code to communicate with the central monitor. For this reason, we have adopted a centralised monitor which receives events and processes them, making ours a largely orchestrated approach, with the verification unit of the tool being independent of the other components. For every supported technology, the tool can intrusively instrument event generation into the components. Since a purely orchestrated yet intrusive approach may result in breaches of data and control encapsulation within components, we have built into the monitoring system the ability to manually instrument parts of the monitoring logic on the system side [2]. This allows for manual choreography without sacrificing extensibility, since inter-component coordination still happens through the central orchestrated monitor. Fig. 1 shows the general architecture of the system after the monitoring parts are instrumented onto the components.

3.2 Extensible Technology Support

Given the general monitoring architecture, adding support for new technologies requires the possibility of instrumenting monitors into components built using that technology. The responsibility of the monitoring code is to (i) generate events to send to the central monitor; and (ii) execute any local monitoring code specified in the properties. Since it is desirable to support cross-component properties, it is crucial to support monitoring instrumentation from a single specification script, the approach adopted in polyLARVA. Furthermore, since extensibility to further technologies is also an essential feature, we have separated the monitoring instrumentation into different parts: (i) the central verification code is generated from the global parts of the specification using a language-independent part of the runtime verification tool; and (ii) for each different technology, a separate tool is provided, which generates the automated instrumentation scripts from a subpart of the specification (as tagged by the user). The workflow for the usage of polyLARVA is depicted in Fig. 2: (i) the specification script is passed through the language-agnostic part of the tool to produce the central monitor (bottom arrow of the figure); and (ii) for each different component, the appropriate technology tool of polyLARVA processes the specification to produce the instrumentation instructions for event extraction and local monitoring on that particular component (top arrows in the figure). The actual instrumentation typically takes place by processing the scripts produced by the polyLARVA language-dependent compilers, using additional external tools (such as aspect-oriented programming compilers).

3.3 Replicated Monitors and Language Agnosticism

Since most systems frequently handle multiple instances of an abstract concept (e.g. multiple users, accounts, sessions, etc.), one would often wish to replicate a property for each instance. For instance, for each bank transaction, one may want to set up a monitor for a property which states that the incoming and outgoing balances cancel out. In practice, the way such concepts are encoded depends on the technology being used. For instance, if the system is written in Java, a transaction may be encoded as an object, while if written in Erlang, it may correspond to a separate thread and data structures handling the transaction.
While monitoring tools for single-technology monolithic systems tie such replication to the technology, in a language-agnostic system one needs to be more general. In component-based systems, this poses further challenges when the concept's lifetime spans different components. To support such replication, one solution is to demarcate concept instances' lifetimes by identifying events marking their start and events marking their end. Furthermore, all events related to such an instance are tagged with an identifier which indicates to which instance they belong. For example, in the bank transaction example, we would identify a call to `initialiseTransaction(transid)` to be the starting point, while `concludeTransaction(transid)` with the same parameter would be its end. Furthermore, any calls to `transferFunds(transid, ...)` are associated with the instance of the monitor which was started with the transaction identity passed as parameter.

4 polyLarva Specifications

At the lowest level, our monitoring framework, polyLarva, uses a simple guarded-command style specification language. Properties are expressed as a list of rules of the following form:

\[ \text{event} \mid \text{condition} \rightarrow \text{action} \]

Whenever an event (possibly having parameters) is generated by the system, the list of monitor rules is scanned for rule matches relating to that event. If a match is found, the expression specified in the condition of the rule is evaluated and, if satisfied, the action is triggered.

**Example 4.1** Consider a scenario in which one desires to check that an online payment on a web-based system is carried out after the credit card used has been registered, tagging the customer as untrusted if this rule is violated. This may be expressed in terms of the rules enclosed within the rules block of Program 4.1. In these rules, register and pay are system events, parametrised by the values customer and card; \(\neg \text{registeredCards}[\text{card}]\) is a condition, while \(\text{registeredCards}[\text{card}] := \text{true}\) and setUntrustedCustomer(customer) are actions triggered by the runtime monitor. Note that to monitor the property for each system user, we define the event newCustomerSession as the point triggering the replication of the rules. Consequently, the events declared within this scope must all define a variable customer which binds the event with a particular monitor instance. Finally, the endCustomerSession event marks the end of the context. The state, conditions and actions blocks define specification-specific monitor state, conditions and actions respectively that are used in the rules section as macros. The events block highlights those specific points which, during system execution, should trigger the monitoring functionality.

Program 4.1 Monitoring customers attempting payments with unregistered cards

```
upon (newCustomerSession(customer)) {
    state      { boolean[] registeredCards; }
    events     {
        register(customer,card)
        pay(customer,card)
        endCustomerSession(customer)
    }
    conditions {
        isRegistered(card) = { registeredCards[card] }
    }
    actions    {
        setUntrustedCustomer(customer) = ...
        registerCard(card) = { registeredCards[card] := true }
    }
    rules      {
        register(customer,card) -> registerCard(card);
        pay(customer,card) \ !isRegistered(card) -> setUntrustedCustomer(customer);
        endCustomerSession(customer) -> Done;
    }
}
```
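To make the evaluation model concrete, the following is a minimal Python sketch of the guarded-command loop for Example 4.1, with per-customer replication standing in for the `upon` construct. This illustrates the semantics only; it is not polyLARVA's implementation, and all names are invented for the example.

```python
class SessionMonitor:
    """One replicated monitor instance per customer session."""
    def __init__(self, customer):
        self.customer = customer
        self.registered_cards = set()   # stands in for boolean[] registeredCards
        # event name -> list of (condition, action) pairs, as in the rules block
        self.rules = {
            'register': [(lambda card: True, self.register_card)],
            'pay': [(lambda card: card not in self.registered_cards,
                     self.set_untrusted)],
        }

    def register_card(self, card):
        self.registered_cards.add(card)

    def set_untrusted(self, card):
        print(f'customer {self.customer} tagged as untrusted')

    def handle(self, event, card):
        # Scan the rule list for the event; fire actions whose condition holds.
        for condition, action in self.rules.get(event, []):
            if condition(card):
                action(card)

monitors = {}   # keyed by the binding variable `customer`

def dispatch(event, customer, card):
    if customer not in monitors:
        monitors[customer] = SessionMonitor(customer)   # `upon` replication
    monitors[customer].handle(event, card)

dispatch('register', 'bob', '1234')
dispatch('pay', 'bob', '1234')   # registered card: no action fires
dispatch('pay', 'bob', '9999')   # unregistered card: customer tagged as untrusted
```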
While events are a result of the execution path being followed by a system, and therefore system-specific, the conditions and actions defined in a rule are typically specific to the runtime monitoring state and are thus independent from the system. This means that the evaluation of rule actions and conditions has no pre-set requirement of running on the system being monitored, and can safely be moved onto separate resources. However, this is not always the case, since conditions may also query the system state, and actions may alter the system state. Unfortunately, it is not straightforward to automatically delineate system-dependent elements from purely monitoring elements. For this reason, our language enables the user to explicitly specify this separation through appropriate constructs.¹

**Example 4.2** Continuing the previous example, consider the case where, instead of keeping track within the monitor state of which cards have been registered, we query the system state (which keeps track of card registration anyway). In this case the condition which checks whether a credit card is registered is performed on the system state by being tagged as systemSide. Note that the rest of the monitoring elements are still performed on the monitor side and are thus marked as such.

¹ A lengthy discussion of how one could use the distinction between system-side and monitor-side monitoring to optimise efficiency can be found in [2].

**Program 4.2** Monitoring activity between two components

```
upon (newCustomerSession(customer)) {
    state {
        remoteSide { int cardNo; }
    }
    events {
        register(customer,card)
        pay(customer,card)
        paymentservice receiveDetails(customer,card)
        endCustomerSession(customer)
    }
    conditions {
        remoteSide { validateCardDetails(card) = cardNo == card; }
        systemSide { isRegistered(card) = { registeredCards[card] } }
    }
    actions {
        remoteSide { setUntrustedCustomer(customer) = ... }
        remoteSide { saveCardDetails(card) = cardNo := card.cardNo; }
        remoteSide { reportError = ... }
    }
    rules {
        pay(customer,card) \ !isRegistered(card) -> setUntrustedCustomer(customer);
        pay(customer,card) \ isRegistered(card) -> saveCardDetails(card);
        receiveDetails(customer,card) \ !validateCardDetails(card) -> reportError();
        endCustomerSession(customer) -> Done;
    }
}
```

The tagging of monitor-side and system-side evaluation of monitoring logic suffices for a monolithic system. However, when a system is composed of heterogeneous components residing on different technologies, the tagging has to be more comprehensive, i.e., it also has to distinguish across components. There are two aspects to this: (i) as before, states, conditions, and actions may reside within different components; and (ii) the events may now also arise from different components. The following example demonstrates the use of extended tagging in the context of multiple components.

**Example 4.3** As an example, note that many online stores incorporate functionality offered by payment gateway web services to validate credit card details and accept transactions. In such a scenario we might want to ensure that the payment details input by the customer on the online system are the same details received by the payment gateway.
A potential monitoring setup for such a scenario is illustrated in Figure 3, where a system-side monitor, associated with the online store system, notifies the remote runtime monitor about a payment transaction. Program 4.3 highlights how the proposed setup is facilitated through the constructs supplied by the polyLARVA specification language. The payment transaction event (②) is identified as an event that will occur on one particular system component through the use of a user-defined label that identifies the component as store. The event is also parametrised with the credit card details entered into the system, thus ensuring that the global monitoring component can maintain a copy of these values. On receipt of an authorisation request, the system-side monitor associated with the payment gateway service will communicate with the global monitoring component in order to trigger validation of the card details. This communication is identified through the definition of the event receiveDetails, which specifies that its source is the component labelled paymentservice, and therefore a different process from that which triggered the other events.

Program 4.3 Monitoring activity between two components

```
upon (newCustomerSession(customer)) {
    state {
        remoteSide { int cardNo; }
    }
    events {
        ① event@store register(customer,card)
        ② event@store pay(customer,card)
        ③ event@paymentservice receiveDetails(customer,card)
        ④ event@store endCustomerSession(customer)
    }
    conditions {
        remoteSide { validateCardDetails(card) = cardNo == card; }
    }
    actions {
        remoteSide { setUntrustedCustomer(customer) = ... }
        remoteSide { saveCardDetails(card) = cardNo := card.cardNo; }
        remoteSide { reportError = ... }
    }
    rules {
        pay(customer,card) \ !isRegistered(card) -> setUntrustedCustomer(customer);
        pay(customer,card) \ isRegistered(card) -> saveCardDetails(card);
        receiveDetails(customer,card) \ !validateCardDetails(card) -> reportError();
        endCustomerSession(customer) -> Done;
    }
}
```

Figure 3: Centralised monitoring for online payments

5 Extending polyLARVA for New Technologies

Due to the intrusive nature of monitoring in polyLARVA, technology-specific plugins had to be implemented to support the monitoring of OpenEmm: one for Java and another for C. The two technologies differ significantly, resulting in different design choices for the two plugins.

**Eliciting events** To extract monitoring information from the components we opted for aspect-oriented programming (AOP) extensions for both Java and C. AOP has been heavily used for implementing runtime verification tools [?], and both Java and C have relatively mature AOP extensions, namely AspectJ and ACC (the AspeCt-oriented C compiler, https://sites.google.com/a/agapp.msrg.utoronto.ca/aspectc) respectively.

**Communication to and from the monitoring component** When an event triggers at a component, it has to be communicated to the global monitoring component. To this end, both the Java and C plugins use standard socket connections. The same technology is also used to support communication from the monitoring component back to the other components. This is useful when monitoring conditions and actions have to access system state residing on a particular component.
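As an illustration of the event plumbing just described, the following Python sketch shows the general shape of a listener-to-monitor exchange over a plain TCP socket. The wire format and endpoint are assumptions made for the example; the real plugins generate Java and C code, not Python.

```python
import json
import socket

MONITOR_ADDR = ('localhost', 9000)   # hypothetical central-monitor endpoint

def send_event(name, **params):
    """Serialise an event and its parameters and ship it to the monitor."""
    with socket.create_connection(MONITOR_ADDR) as conn:
        conn.sendall(json.dumps({'event': name, 'params': params}).encode())
        reply = conn.recv(4096)   # the monitor may answer, e.g. with a verdict
    return json.loads(reply) if reply else None

# e.g. woven in (via AspectJ/ACC advice) around the payment operation:
# send_event('pay', customer='bob', card='1234')
```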
6 Case Study

polyLarva has been used to monitor OpenEmm (http://www.openemm.org/), an open-source, web-based tool for email marketing. The tool provides facilities to administer mailing lists, create email shots, schedule automatic email sending, track sent emails, and manage bounced emails. OpenEmm claims hundreds of thousands of downloads to date and has a user base that includes a number of prominent large-scale companies. The tool is ideal as a case study for polyLarva's technology-agnostic setup due to its component-based design and the hybrid technologies it adopts: the core part is written in Java while performance-sensitive components are written in C.

The case study focuses on monitoring the behaviour of OpenEmm when sending out a customised email shot. The tool provides users with a content management interface that allows easy customisation and creation of email templates. When an email is sent out to a mailing list, each individual recipient receives a personalised email, built by an automated process that customises each email based on the given template. This functionality is the result of a process which flows across two components: a Java component which retrieves the email template and mailing list information from the database and collates them in an XML file, and a C component which receives this information and produces the final emails. This control flow is displayed using sequentially numbered arrows in Figure 4.

Though unlikely, this setup introduces the possibility of a discrepancy between the two components: e.g., the XML file may get corrupted, or the database may be updated, resulting in outdated emails being sent. This concern has been addressed by (intrusively) monitoring the components' states and identifying any inconsistencies which may occur. The rest of this section elaborates on how this was achieved.

6.1 Specifying Properties for OpenEmm

A basic property that can be monitored to ensure the XML file has not been tampered with is that the total number of recipients in the mailing list is the same within the C component as it is in the Java component. This may be expressed in terms of the following rule:

\[ c\_sendMails() \mid (java\_mailCount \neq c\_mailCount) \rightarrow logIncorrectCount; \]

where c_sendMails is a system event occurring in the OpenEmm C component at the point when mails are about to be sent; the rule condition specifies that the total number of mails being sent by the C component, c_mailCount, must be equal to the total count of mails that was specified at the Java component; and logIncorrectCount is the action taken by the monitor if the values are not equal.

Figure 5 shows the flow of control amongst the components involved in monitoring the specified property: the monitoring listeners instrumented within the Java and C components, and the global monitoring component. When the details of a mail shot are available, the Java component notifies the global monitor with the total number of subscribers. The value is stored as the monitor state java_mailCount. Subsequently, when the C component receives the XML file, the total number of mail recipients is communicated to the global monitor, which in turn compares it to java_mailCount. Program 6.1 shows the polyLARVA specification required to generate this setup. In 6 and 7 the specification distinguishes between the events' component sources, while the main rule is specified in 8.

Program 6.1 Monitoring count of mailshot recipients

```
upon (newMailShot(mailshotID)) {
    state {
        monitorSide { int java_mailCount; }
    }
    events {
        event@javaComponent callMailingExecution(mailshotID, javaSubsCount)
        event@cComponent startXMLProcessing(mailshotID, c_mailCount)
    }
    conditions {
        monitorSide { invalidMailCount(c_mailCount) = java_mailCount != c_mailCount; }
    }
    actions {
        monitorSide { setJavaMailCount(javaSubsCount) = java_mailCount := javaSubsCount; }
        monitorSide { logIncorrectCount = ... }
    }
    rules {
        callMailingExecution(mailshotID, javaSubsCount) \ true -> setJavaMailCount(javaSubsCount);
        startXMLProcessing(mailshotID, c_mailCount) \ invalidMailCount -> logIncorrectCount;
    }
}
```
Another property which we monitored on OpenEmm ensures that users who have for some reason been "blacklisted" are never sent an email. OpenEmm adheres to this property by carrying out a filtering exercise on the mailing list recipients, leaving out any blacklisted recipients. However, if an email recipient is blacklisted while the mailing generation process is already running, there could be circumstances where the recipient is still included in the mailing list. Such an issue can be detected by setting up a polyLARVA monitor which verifies that each recipient is still non-blacklisted at the time of being sent an email. Figure 6 depicts how the global runtime monitor can be notified upon the creation of a personalised email (inCreateMail), triggering the monitoring process to query the blacklist on the Java component (isEmailBlackListed?). Program 6.2 shows how the property of verifying users to be non-blacklisted can be specified in polyLARVA. In particular, note that the condition is specified as systemSide@javaComponent, meaning that the condition is to be executed within the Java component and the result is then communicated to the global monitor.

Program 6.2 Monitoring blacklisted recipients

```
upon (newCustomer(custID)) {
    events {
        event@cComponent inCreateMail(c_custID)
    }
    conditions {
        systemSide@javaComponent {
            isEmailBlacklisted(c_custID) = database query to check for presence
                                           of c_custID in blacklist ...
        }
    }
    actions {
        monitorSide { logBlacklisted = ... }
    }
    rules {
        inCreateMail(c_custID) \ isEmailBlacklisted -> logBlacklisted;
    }
}
```

The two properties described above have been successfully compiled by polyLARVA and applied to OpenEmm. Each specification script was processed by three compilers: (i) the standard polyLARVA compiler, which creates the global monitoring component; (ii) the polyLARVA language compiler in conjunction with the Java plug-in, to create a component listener that was woven into the Java OpenEmm code; and (iii) the polyLARVA language compiler in conjunction with the C plug-in, to create a component listener that was woven into the C OpenEmm code. OpenEmm was installed on an Ubuntu operating system while the global monitor component was executed on a separate Windows machine. No performance tests were carried out during this case study, because the aim of this work was to study the interaction between the different components running different technologies and the global runtime monitoring component. The possible performance improvements that can be achieved using the polyLarva framework are highlighted in our other work [2].
7 Conclusions

While a significant number of runtime verification frameworks have been proposed in the literature [4, 6, 14, 5, 10, 9], these tools are normally restricted to supporting one particular programming language or technology, and the effort required to support new languages is prohibitive. An exception is the MOP framework [14], whose architecture makes provision for the addition of new language plug-ins that can generate a MOP runtime monitor for a particular programming language. However, MOP does not support the monitoring of a single property across a system with multiple technologies, and does not have an inbuilt concept of components. On the other hand, a runtime verification approach has been proposed for the BIP component framework [6] which tackles issues specific to component-based systems. However, that work is positioned at a higher level of abstraction, focusing on the guarantees required to ensure sound and correct monitoring within BIP; the issue of multiple technologies has not been considered in that work.

The non-monolithic nature of component-based systems means that verification techniques have to be adapted to be applicable. In this paper, we have presented an extension of an existing tool, polyLarva [2], to handle the runtime verification of component-based systems. In particular, we have emphasised the need to support the multiple technologies used in such systems, with the resulting tool being easily extensible to handle new technologies. Although we have shown its applicability by deploying it on a third-party open-source system, we are currently looking into its use in an industrial setting. Furthermore, we are looking into ways of combining our runtime verification approach with the unit testing of component-based systems.

References
{"Source-Url": "https://www.um.edu.mt/library/oar/bitstream/123456789/27910/3/polyLarva_technology_agnostic_runtime_verification_2013.pdf", "len_cl100k_base": 6473, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 33859, "total-output-tokens": 8399, "length": "2e12", "weborganizer": {"__label__adult": 0.0002980232238769531, "__label__art_design": 0.00025582313537597656, "__label__crime_law": 0.00028634071350097656, "__label__education_jobs": 0.0003631114959716797, "__label__entertainment": 4.297494888305664e-05, "__label__fashion_beauty": 0.0001195073127746582, "__label__finance_business": 0.00021731853485107425, "__label__food_dining": 0.0002498626708984375, "__label__games": 0.0003387928009033203, "__label__hardware": 0.0007457733154296875, "__label__health": 0.0003459453582763672, "__label__history": 0.0001647472381591797, "__label__home_hobbies": 5.91278076171875e-05, "__label__industrial": 0.0003097057342529297, "__label__literature": 0.00016832351684570312, "__label__politics": 0.00022661685943603516, "__label__religion": 0.000354766845703125, "__label__science_tech": 0.0107879638671875, "__label__social_life": 6.562471389770508e-05, "__label__software": 0.005035400390625, "__label__software_dev": 0.97900390625, "__label__sports_fitness": 0.0002186298370361328, "__label__transportation": 0.0004010200500488281, "__label__travel": 0.00016045570373535156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36952, 0.00955]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36952, 0.51316]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36952, 0.89242]], "google_gemma-3-12b-it_contains_pii": [[0, 3528, false], [3528, 7624, null], [7624, 11227, null], [11227, 13462, null], [13462, 15913, null], [15913, 18607, null], [18607, 21548, null], [21548, 23405, null], [23405, 26688, null], [26688, 29310, null], [29310, 30542, null], [30542, 30746, null], [30746, 34299, null], [34299, 36952, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3528, true], [3528, 7624, null], [7624, 11227, null], [11227, 13462, null], [13462, 15913, null], [15913, 18607, null], [18607, 21548, null], [21548, 23405, null], [23405, 26688, null], [26688, 29310, null], [29310, 30542, null], [30542, 30746, null], [30746, 34299, null], [34299, 36952, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36952, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36952, null]], "pdf_page_numbers": [[0, 3528, 1], [3528, 7624, 2], [7624, 11227, 3], [11227, 13462, 4], [13462, 15913, 5], [15913, 18607, 6], [18607, 21548, 7], [21548, 23405, 8], [23405, 26688, 9], [26688, 29310, 10], [29310, 
30542, 11], [30542, 30746, 12], [30746, 34299, 13], [34299, 36952, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36952, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
f3e95c246d7a79c665eabb9679e27b088738aff4
This document specifies an XMPP protocol extension for discovering services external to the XMPP network.

Legal

Copyright

This XMPP Extension Protocol is copyright © 1999 – 2020 by the XMPP Standards Foundation (XSF).

Permissions

Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.

Warranty

NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.

Liability

In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.

Conformance

This XMPP Extension Protocol has been contributed in full conformance with the XSF's Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).

Contents

1 Introduction
2 Protocol
3 Use Cases
   3.1 Requesting All Services
   3.2 Requesting Selected Services
   3.3 Requesting Credentials
4 Extended Information
5 Determining Support
6 Internationalization Considerations
7 Security Considerations
8 XMPP Registrar Considerations
   8.1 Protocol Namespaces
   8.2 Protocol Versioning
   8.3 External Service Types Registry
      8.3.1 Process
      8.3.2 Registration
9 XML Schema
10 Acknowledgements
1 Introduction

An XMPP client or other entity might need to discover services external to the XMPP network in order to complete certain XMPP-related use cases. One example is the discovery of STUN servers (see RFC 5389) and TURN relays (see RFC 5766) for the sake of negotiating media exchanges via the Jingle ICE-UDP Transport Method (XEP-0176). An XMPP entity can already discover such external services in several ways, including:

1. The service is specified in the application’s default settings.
2. The service is manually added into the application’s configuration by a human user.
3. The service is discovered via non-XMPP service discovery protocols, such as:
   - DNS SRV records (RFC 2782)
   - Service Location Protocol (SLP; RFC 2608)
   - The Dynamic Delegation Discovery System (DDDS; RFC 3401)
   - The NAPTR profile of DDDS (RFC 3403)
   - The S-NAPTR profile of DDDS (RFC 3958)
   - The U-NAPTR profile of DDDS (RFC 4848)

Unfortunately, some of the foregoing methods are subject to human error and others are either not widely available or cannot be deployed in a wide range of scenarios (e.g., when the administrators of an XMPP service do not have access to DNS SRV records). Therefore, this document defines a way for an XMPP server or discovery service to provide information about external services, which might include extended information such as temporary credentials for authentication at such services. This method SHOULD be used only as a fallback when the relevant service discovery technologies (DNS SRV, DDDS, SLP, S-NAPTR, U-NAPTR, etc.) are not available to the XMPP entities involved (typically a client and server). The protocol specified herein is functionally equivalent to the protocol currently used in the Google Talk service for discovery of STUN servers, as documented at <http://code.google.com/apis/talk/jep_extensions/jingleinfo.html>, but has been broadened in scope to address additional use cases if desired. This method does not use Service Discovery (XEP-0030), since that technology is designed for discovery of XMPP entities, not entities outside an XMPP network.

2 Protocol

In order to learn about external services known to an XMPP server or discovery service, a requesting entity (typically a client) sends an IQ-get containing an empty `<services/>` element qualified by the `urn:xmpp:extdisco:2` namespace (see Protocol Namespaces regarding issuance of one or more permanent namespaces), typically to its own server but perhaps alternatively to a dedicated discovery service. The responding entity (XMPP server or discovery service) SHOULD return the list of external services it is aware of, but MAY instead return an appropriate error, such as `<service-unavailable/>` if the responding entity does not support this protocol or `<forbidden/>` if the requesting entity does not have permission to receive the list of external services. Each service is encapsulated via a `<service/>` element.

Note: The processes by which a responding entity discovers external services for "proxying" to XMPP entities are out of scope for this specification.

The `<service/>` element MAY be empty or MAY include extended information about the service as described in the Extended Information section of this document.
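As an illustrative sketch (not a listing from this specification), an exchange in which the requesting entity lacks permission might look as follows; the JIDs and stanza id are hypothetical, and the error syntax follows RFC 6120:

```xml
<iq from='bard@shakespeare.lit/globe'
    id='qd81b4x2'
    to='shakespeare.lit'
    type='get'>
  <services xmlns='urn:xmpp:extdisco:2'/>
</iq>

<iq from='shakespeare.lit'
    id='qd81b4x2'
    to='bard@shakespeare.lit/globe'
    type='error'>
  <services xmlns='urn:xmpp:extdisco:2'/>
  <error type='auth'>
    <forbidden xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
  </error>
</iq>
```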
The attributes of the `<service/>` element are summarized in the following table.

| Name | Definition | Inclusion |
| --- | --- | --- |
| action | When sending a push update, the action value indicates if the service is being added or deleted from the set of known services (or simply being modified). The defined values are "add", "delete", and "modify", where "add" is the default. | OPTIONAL |
| expires | A timestamp indicating when the provided username and password credentials will expire. The format MUST adhere to the dateTime format specified in XMPP Date and Time Profiles (XEP-0082, <https://xmpp.org/extensions/xep-0082.html>) and MUST be expressed in UTC. | OPTIONAL |
| host | Either a fully qualified domain name (FQDN) or an IP address (IPv4 or IPv6). | REQUIRED |
| name | A friendly (human-readable) name or label for the service. | OPTIONAL |
| password | A service- or server-generated password for use at the service. * | OPTIONAL |
| port | The communications port to be used at the host. | RECOMMENDED |
| restricted | A boolean value indicating that username and password credentials are required and will need to be requested if not already provided (see Requesting Credentials). | OPTIONAL |
| transport | The underlying transport protocol to be used when communicating with the service (typically either TCP or UDP). | RECOMMENDED |
| type | The service type as registered with the XMPP Registrar (see <https://xmpp.org/registrar/>). | REQUIRED |
| username | A service- or server-generated username for use at the service. * | OPTIONAL |

* Note: The processes by which an external service might generate (or an XMPP server might negotiate) the username and password are outside the scope of this specification. One possible approach is for the XMPP server to generate a short-term authentication credential based on a private key shared with the external service.

3 Use Cases

3.1 Requesting All Services

A requesting entity requests all services by sending a `<services/>` element to its server or a discovery service.

Listing 1: Entity Requests All External Services
```xml
<iq from='bard@shakespeare.lit/globe'
    id='ul2bc7y6'
    to='shakespeare.lit'
    type='get'>
  <services xmlns='urn:xmpp:extdisco:2'/>
</iq>
```

Listing 2: XMPP Server Returns List
```xml
<iq from='shakespeare.lit'
    id='ul2bc7y6'
    to='bard@shakespeare.lit/globe'
    type='result'>
  <services xmlns='urn:xmpp:extdisco:2'>
    <service host='stun.shakespeare.lit'
             port='9998'
             transport='udp'
             type='stun'/>
    <service host='relay.shakespeare.lit'
             password='jj929jkj5sadjfj93v3n'
             port='9999'
             transport='udp'
             type='turn'
             username='nb78932lkj1skjfd7b8'/>
    <service host='192.0.2.1'
             port='8888'
             transport='udp'
             type='stun'/>
    <service host='192.0.2.1'
             port='8889'
             password='93jn3bakj9s8321rjbbz'
             transport='udp'
             type='turn'
             username='auu98sjl2wk3e9fjds17'/>
    <service host='ftp.shakespeare.lit'
             name='Shakespearean_File_Server'
             password='guest'
             port='20'
             transport='tcp'
             type='ftp'
             username='guest'/>
  </services>
</iq>
```
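A service entry can also advertise that credentials are required without including them inline, using the 'restricted' attribute described in the table above; the requesting entity would then fetch short-lived credentials as shown in the Requesting Credentials section. The following fragment is a hypothetical sketch (all values are illustrative), not a listing from this specification:

```xml
<service host='turn.shakespeare.lit'
         port='9999'
         restricted='true'
         transport='udp'
         type='turn'/>
```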
3.2 Requesting Selected Services

A requesting entity requests services of a particular type by sending a `<services/>` element including a 'type' attribute specifying the service type of interest.

Listing 3: Entity Requests Selected Services
```xml
<iq from='bard@shakespeare.lit/globe'
    id='yv2c19f7'
    to='shakespeare.lit'
    type='get'>
  <services xmlns='urn:xmpp:extdisco:2' type='turn'/>
</iq>
```

Listing 4: XMPP Server Returns List
```xml
<iq from='shakespeare.lit'
    id='yv2c19f7'
    to='bard@shakespeare.lit/globe'
    type='result'>
  <services xmlns='urn:xmpp:extdisco:2' type='turn'>
    <service host='turn.shakespeare.lit'
             password='jj929jkj5sadjfj93v3n'
             port='9999'
             transport='udp'
             type='turn'
             username='nb78932lkj1skjfd8g8'/>
    <service host='192.0.2.1'
             port='8889'
             password='93jn3bakj9s8321rjbbz'
             transport='udp'
             type='turn'
             username='auu98sjl2wk3e9fjds17'/>
  </services>
</iq>
```

If a requesting entity requests services of a particular type, the responding service MAY as needed send an updated list of the relevant services by “pushing” the list to a requesting entity that has previously requested the list. However, it MUST NOT push updates to the requesting entity unless it has presence information about the requesting entity (e.g., because the requesting entity is connected to the XMPP server or because the requesting entity has shared presence with a remote discovery service). A push is an IQ-set to the requesting entity containing a `<services/>` payload with updated data about services matching the requested type (e.g., new services or updated credentials). Each `<service/>` element SHOULD contain an 'action' attribute indicating if the service is being added, deleted, or modified.

Listing 5: Services Push
```xml
<iq from='shakespeare.lit'
    id='lh3f1vc7'
    to='bard@shakespeare.lit/globe'
    type='set'>
  <services xmlns='urn:xmpp:extdisco:2'
            type='turn'>
    <service action='modify'
             host='turn.shakespeare.lit'
             password='jj929jkj5sadjfj93v3n'
             port='9999'
             transport='udp'
             type='turn'
             username='nb78932lkj1skjfd8g8'/>
  </services>
</iq>
```

Upon receiving a push, the requesting entity would then send an IQ-result to the responding service in accordance with [XMPP Core](http://tools.ietf.org/html/rfc6120).

3.3 Requesting Credentials

An entity might know about an external service via DNS or some other means, but still might need short-term credentials to use the service. The entity can request credentials by sending a special request to the server, composed of a `<credentials/>` element qualified by the `urn:xmpp:extdisco:2` namespace and containing a `<service/>` element which MUST include the 'host' and 'type' attributes to identify the desired service (the 'port' attribute MAY be provided if there are multiple services with the same host and type but different ports).

Listing 6: Entity Requests Credentials at a Service
```xml
<iq from='bard@shakespeare.lit/globe'
    id='xi2cax48'
    to='shakespeare.lit'
    type='get'>
  <credentials xmlns='urn:xmpp:extdisco:2'>
    <service host='turn.shakespeare.lit' type='turn'/>
  </credentials>
</iq>
```

The server then returns credentials if possible.

Listing 7: Server Returns Credentials
```xml
<iq from='shakespeare.lit'
    id='xi2cax48'
    to='bard@shakespeare.lit/globe'
    type='result'>
  <credentials xmlns='urn:xmpp:extdisco:2'>
    <service host='turn.shakespeare.lit'
             type='turn'
             password='jj929jkj5sadjfj93v3n'
             username='nb78932lkj1skjfdb7g8'/>
  </credentials>
</iq>
```

For TURN, the server might construct time-limited credentials as described in A REST API for Access to TURN Services. There MAY be multiple `<service/>` elements in the result if more than one service matched the requested service identity (e.g., the same host provides service on multiple ports). If the server cannot obtain credentials at the service, it returns an appropriate stanza error, such as `<item-not-found/>`, `<remote-server-not-found/>`, `<remote-server-timeout/>`, or `<not-authorized/>`.
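For concreteness, one such error case might look as follows; this exchange is an illustrative sketch rather than a listing from this specification, with the error syntax following RFC 6120:

```xml
<iq from='shakespeare.lit'
    id='xi2cax48'
    to='bard@shakespeare.lit/globe'
    type='error'>
  <credentials xmlns='urn:xmpp:extdisco:2'>
    <service host='turn.shakespeare.lit' type='turn'/>
  </credentials>
  <error type='cancel'>
    <item-not-found xmlns='urn:ietf:params:xml:ns:xmpp-stanzas'/>
  </error>
</iq>
```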
4 Extended Information

If a server or service needs to include extended information, it SHOULD do so by including each bit of information as the XML character data of the `<value/>` child of a distinct `<field/>` element, with the entire set of fields contained within an `<x/>` element of type "result" qualified by the 'jabber:x:data' namespace (see Data Forms (XEP-0004)); this `<x/>` element SHOULD be a child of the `<service/>` element qualified by the 'urn:xmpp:extdisco:2' namespace (see Protocol Namespaces regarding issuance of one or more permanent namespaces). Thus the IQ result SHOULD be of the following form:

```xml
<iq type='result'>
  <services xmlns='urn:xmpp:extdisco:2'>
    <service>
      <x type='result' xmlns='jabber:x:data'>
        <field var='[var-name]' label='[optional]'>
          <value>[var-value]</value>
        </field>
      </x>
    </service>
  </services>
</iq>
```

Note: A `<field/>` element MAY contain more than one `<value/>` child if appropriate.

If the data fields are to be used in the context of a protocol approved by the XMPP Standards Foundation, they SHOULD be registered in accordance with the rules defined in Field Standardization for Data Forms (XEP-0068), resulting in the inclusion of a `<field/>` element whose `var` attribute has a value of "FORM_TYPE" and whose `type` attribute has a value of "hidden".

Note: Although Service Discovery Extensions (XEP-0128) specifies that an XMPP entity MUST NOT supply extended information about associated children communicated via the 'http://jabber.org/protocol/disco#info' namespace, that rule does not apply to External Service Discovery since services external to the XMPP network cannot communicate via XMPP.

5 Determining Support

If an XMPP entity supports this protocol, it MUST report that fact by including a service discovery feature of "urn:xmpp:extdisco:2" (see Protocol Namespaces regarding issuance of one or more permanent namespaces) in response to a Service Discovery (XEP-0030) information request:

Listing 8: Service Discovery Information Request
```xml
<iq from='romeo@montague.lit/orchard'
    id='ix61z3m9'
    to='montague.lit'
    type='get'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```

Listing 9: Service Discovery Information Response
```xml
<iq from='montague.lit'
    id='ix61z3m9'
    to='romeo@montague.lit/orchard'
    type='result'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <feature var='urn:xmpp:extdisco:2'/>
  </query>
</iq>
```

6 Internationalization Considerations

If the requesting entity includes an 'xml:lang' attribute with its request, the responding entity SHOULD include appropriately internationalized text as the value of the 'name' attribute. No other attributes are human-readable.

7 Security Considerations

Because the responding entity (XMPP server or discovery service) functions as a "proxy" from external services to the XMPP network, it could modify the information it receives before passing it on to the requesting entity.

8 XMPP Registrar Considerations

8.1 Protocol Namespaces

This specification defines the following XML namespace:

- urn:xmpp:extdisco:2

Upon advancement of this specification from a status of Experimental to a status of Draft, the XMPP Registrar shall add the foregoing namespace to the registry located at <https://xmpp.org/registrar/namespaces.html>, as described in Section 4 of XMPP Registrar Function (XEP-0053).
8.2 Protocol Versioning

If the protocol defined in this specification undergoes a revision that is not fully backwards-compatible with an older version, the XMPP Registrar shall increment the protocol version number found at the end of the XML namespaces defined herein, as described in Section 4 of XEP-0053. (The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation; for further information, see <https://xmpp.org/registrar/>.)

8.3 External Service Types Registry

The XMPP Registrar shall maintain a registry of external service types and their associated transport protocol(s). Such service types will probably be derived from the IANA Port Numbers Registry (<http://www.iana.org/assignments/port-numbers>), defined DNS SRV record types, defined DDDS records for NAPTR, S-NAPTR, and U-NAPTR, and IANA Service Location Protocol, Version 2 (SLPv2) Templates (<http://www.iana.org/assignments/svrloc-templates.htm>).

8.3.1 Process

In order to submit new values to this registry, the registrant shall define an XML fragment of the following form and either include it in the relevant XMPP Extension Protocol or send it to the email address <registrar@xmpp.org>:

```xml
<service>
  <name>the XML character data of the service type</name>
  <desc>a natural-language description of the service type</desc>
  <doc>the document that best defines the service type</doc>
</service>
```

The registrant can register more than one service type at a time, each contained in a separate `<service/>` element.

8.3.2 Registration

```xml
<service>
  <name>stun</name>
  <desc>a server that provides Session Traversal Utilities for NAT (STUN)</desc>
  <doc>RFC 5389</doc>
</service>
<service>
  <name>turn</name>
  <desc>a server that provides Traversal Using Relays around NAT (TURN)</desc>
  <doc>RFC 5766</doc>
</service>
```
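Note that the 'ftp' service type used in Listing 2 is not among the registrations above; a submission for it, following the process in Section 8.3.1, might hypothetically look like the following sketch (not an official registration):

```xml
<service>
  <name>ftp</name>
  <desc>a server that provides file transfer via the File Transfer Protocol</desc>
  <doc>RFC 959</doc>
</service>
```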
9 XML Schema

```xml
<?xml version='1.0' encoding='UTF-8'?>
<xs:schema
    xmlns:xs='http://www.w3.org/2001/XMLSchema'
    targetNamespace='urn:xmpp:extdisco:2'
    xmlns='urn:xmpp:extdisco:2'
    elementFormDefault='qualified'>

  <xs:import namespace='jabber:x:data'
             schemaLocation='http://www.xmpp.org/schemas/x-data.xsd'/>

  <xs:element name='services'>
    <xs:complexType>
      <xs:sequence minOccurs='0' maxOccurs='unbounded'>
        <xs:element ref='service'/>
      </xs:sequence>
      <xs:attribute name='type' type='xs:NCName' use='optional'/>
    </xs:complexType>
  </xs:element>

  <xs:element name='credentials'>
    <xs:complexType>
      <xs:sequence>
        <xs:element ref='service' minOccurs='0' maxOccurs='unbounded'/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>

  <xs:element name='service'>
    <xs:complexType>
      <xs:sequence xmlns:xdata='jabber:x:data'>
        <xs:element ref='xdata:x' minOccurs='0'/>
      </xs:sequence>
      <xs:attribute name='action' use='optional' default='add'>
        <xs:simpleType>
          <xs:restriction base='xs:NCName'>
            <xs:enumeration value='add'/>
            <xs:enumeration value='delete'/>
            <xs:enumeration value='modify'/>
          </xs:restriction>
        </xs:simpleType>
      </xs:attribute>
      <xs:attribute name='expires' type='xs:dateTime' use='optional'/>
      <xs:attribute name='host' type='xs:string' use='required'/>
      <xs:attribute name='name' type='xs:string' use='optional'/>
      <xs:attribute name='password' type='xs:string' use='optional'/>
      <xs:attribute name='port' type='xs:string' use='optional'/>
      <xs:attribute name='restricted' type='xs:boolean' use='optional'/>
      <xs:attribute name='transport' type='xs:string' use='optional'/>
      <xs:attribute name='type' type='xs:string' use='required'/>
      <xs:attribute name='username' type='xs:string' use='optional'/>
    </xs:complexType>
  </xs:element>

</xs:schema>
```

10 Acknowledgements

Thanks to Philipp Hancke, Justin Karneges, Evgeniy Khramtsov, and Unnikrishnan Vikrama Panicker for their feedback.
{"Source-Url": "https://xmpp.org/extensions/xep-0215.pdf", "len_cl100k_base": 5200, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 38185, "total-output-tokens": 6714, "length": "2e12", "weborganizer": {"__label__adult": 0.0006275177001953125, "__label__art_design": 0.0005340576171875, "__label__crime_law": 0.006755828857421875, "__label__education_jobs": 0.0008702278137207031, "__label__entertainment": 0.0001952648162841797, "__label__fashion_beauty": 0.0002486705780029297, "__label__finance_business": 0.00311279296875, "__label__food_dining": 0.00031828880310058594, "__label__games": 0.00119781494140625, "__label__hardware": 0.0081634521484375, "__label__health": 0.00033211708068847656, "__label__history": 0.0005421638488769531, "__label__home_hobbies": 9.393692016601562e-05, "__label__industrial": 0.000926494598388672, "__label__literature": 0.0005431175231933594, "__label__politics": 0.0009164810180664062, "__label__religion": 0.0005903244018554688, "__label__science_tech": 0.12451171875, "__label__social_life": 9.322166442871094e-05, "__label__software": 0.13671875, "__label__software_dev": 0.7109375, "__label__sports_fitness": 0.0003104209899902344, "__label__transportation": 0.000957012176513672, "__label__travel": 0.0002830028533935547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23574, 0.0368]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23574, 0.15603]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23574, 0.71773]], "google_gemma-3-12b-it_contains_pii": [[0, 106, false], [106, 2641, null], [2641, 3497, null], [3497, 6537, null], [6537, 8921, null], [8921, 10527, null], [10527, 11840, null], [11840, 13678, null], [13678, 14835, null], [14835, 16839, null], [16839, 18727, null], [18727, 20331, null], [20331, 21884, null], [21884, 23438, null], [23438, 23574, null]], "google_gemma-3-12b-it_is_public_document": [[0, 106, true], [106, 2641, null], [2641, 3497, null], [3497, 6537, null], [6537, 8921, null], [8921, 10527, null], [10527, 11840, null], [11840, 13678, null], [13678, 14835, null], [14835, 16839, null], [16839, 18727, null], [18727, 20331, null], [20331, 21884, null], [21884, 23438, null], [23438, 23574, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23574, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23574, null]], "pdf_page_numbers": [[0, 106, 1], [106, 2641, 2], [2641, 3497, 3], [3497, 6537, 4], [6537, 8921, 5], [8921, 10527, 6], [10527, 11840, 7], [11840, 13678, 8], [13678, 14835, 9], [14835, 16839, 10], [16839, 18727, 11], [18727, 20331, 12], [20331, 21884, 13], [21884, 23438, 14], [23438, 23574, 15]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23574, 0.01506]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
f350c0f7fcd828e9668b80029fc1ea311f907ac3
Tutorial on Modeling VAT Rules Using OWL-DL

Morten Ib Nielsen, Jakob Grue Simonsen and Ken Friis Larsen
Department of Computer Science, University of Copenhagen
Email: {mortenib|simonsen|kflarsen}@diku.dk
August 28, 2007

Abstract

This paper reports on work in progress. We present a methodology for constructing an OWL-DL model of a subset of Danish VAT rules. It is our intention that domain experts without training in formal modeling or computer science should be able to create and maintain the model using our methodology. In an ERP setting such a model could reduce the Total Cost of Ownership (TCO) and increase the quality of the system. We have selected OWL-DL because we believe that description logic is suited for modeling VAT rules, due to the decidability of important inference problems that are key to the way we plan to use the model, and because OWL-DL is relatively intuitive to use.

1 Introduction

Imagine an ERP system where domain experts can create and implement changes in e.g. VAT rules without the help of programmers. The benefits would be shorter development time and fewer mistakes due to misinterpretation of specifications, which lead to reduced TCO and increased quality of the software. On a coarse-grained scale such a system consists of three parts: a model of the rules, a tool to edit the model, and the core ERP system using the model. In this paper we focus on the first part, the model. A priori two requirements exist. First, the modeling language must be strong enough to express the rules in question; second, it must be easy to use without training in formal modeling or computer science. In a more general setting the model can be used as a VAT knowledge system which external programs can query through an interface. In the long run we envision that authorities such as SKAT (the Danish tax administration) can provide online access to the model, e.g. using web services, such that applications always use the newest version of the model.

In this paper we describe a methodology we have used to develop a model of a subset of Danish VAT rules using the general purpose Web Ontology Language (OWL) editor Protégé-OWL\(^1\), and we report on our experiences in doing so. We selected a subset of Danish VAT rules consisting of flat VAT (25%) plus a set of exceptions where goods and services are free of VAT, chosen because they seem representative. Further, the rules are accessible to us by way of an official guideline by the Danish tax administration. Our study focuses on the feasibility of using OWL to model VAT rules and not on the usability of the Protégé-OWL tool itself. By feasibility we mean how easy or difficult it is (for a human) to express and understand VAT rules in OWL; in particular this does not cover issues such as modularization. The methodology presented here is inspired by the article [1] together with our own experience. Readers of this guide are assumed to have user experience of Protégé-OWL corresponding to [2], but not of computer science nor of modeling in general.

\(^1\) http://protege.stanford.edu/overview/protege-owl.html

1.1 Motivation

One of the overall goals of the strategic research project 3gERP is to reduce the TCO of Enterprise Resource Planning (ERP) systems.
We believe that a VAT model helps to this end in two ways. First, we envision that domain experts create and update the model, thus eliminating a layer of interpretation (the programmer) where errors can be introduced. Second, a VAT model can change handling of VAT from being a customization task into being a configuration task, meaning that no code needs to be changed when the model is updated. VAT and legal rules in general deal with frequent transactions between legal entities. Transactions are typically triggered when certain conditions are fulfilled, and therefore dynamic checks on these conditions are needed. The idea is to use the model to automatically infer what actions should be taken based on the conditions. In the case of VAT rules we can ask the model whether a delivery is subject to VAT or not based on the information we know about the delivery. The answer from the model will be Yes, No or Maybe\(^2\) and can be used to trigger an appropriate transaction. In a broader perspective the model is supposed to work as a VAT knowledge system that, given a context and a question, can tell other systems what to do, e.g. guide accounting systems and, if required, indicate that authorities should be contacted etc.

\(^2\) In the case where insufficient information is provided in order to answer the question.

1.2 Roadmap

The remainder of this paper is structured as follows. In Section 2 we give a short account of description logic and OWL. In Sections 3, 4 and 5 we present our methodology by giving examples. Finally we outline future work in Section 6 and we conclude in Section 7.

2 Description Logic and OWL

In this section we give a short introduction to description logic (DL) and OWL. This introduction can be skipped if you are already familiar with the concepts. Description logics are knowledge representation languages, formally well-understood, that can be used to structure terminological knowledge in knowledge systems. A knowledge system typically consists of a knowledge base together with a reasoning service. The knowledge base is often split into a set of concept axioms (the TBox), a set of assertions (the ABox) and a role hierarchy. These constitute the explicit knowledge in the knowledge system. The reasoning service is a program that can check the consistency of the knowledge base and make implicit knowledge explicit, e.g. decide equivalence of concepts. Since the reasoning service is a pluggable component, knowledge systems separate the technical task of reasoning from the problem of constructing the knowledge base.

2.1 OWL

OWL, which is short for Web Ontology Language, is an ontology language designed to be compatible with the World Wide Web and the Semantic Web. The most important abstraction in OWL is concept axioms, which are called classes. Each class has a list of necessary conditions and zero or more equivalent lists of necessary and sufficient conditions [2]. A list of necessary conditions is a list of conditions that every member of the class must satisfy. In the same way, a list of necessary and sufficient conditions is a list of conditions that must be satisfied by every member of the class and, if satisfied, guarantees membership in the class. OWL is based on XML, RDF and RDF-S and can be used to represent information in a way that is more accessible to applications than traditional web pages. In addition OWL has a formal semantics, which enables logic reasoning. OWL comes in three variants of increasing expressive power: OWL-Lite ⊆ OWL-DL ⊆ OWL-Full.
The variants OWL-Lite and OWL-DL are based on the description logics $SHIF(D)$ and $SHOIN(D)$ respectively [3], which guarantees that important inference problems such as satisfiability and subsumption are decidable. Since OWL is XML based we need an editor to create OWL ontologies. We have used the general purpose OWL editor Protégé developed by Stanford Medical Informatics at the Stanford University School of Medicine.

3 VAT Exemption 1: Sales outside EU

Our methodology is aimed at modeling VAT rules as described in guidelines instead of the raw law text itself. This choice was made because guidelines are more accessible to us, and because these are the rules that small companies adhere to in practice. Further, the investigation of the feasibility of using OWL to model VAT rules concerns the ease with which rules can be formalized and not so much from where the rules are extracted\(^3\). In what follows we refer to the guideline as the legal source. In order to ease reading we have used the word concept only when we speak about the legal source. The corresponding concept in the model (OWL) is called a class. A concept in the legal source is modeled as one or more classes in the model. Here we present the steps we took in order to make our model of Danish VAT rules.

\(^3\) Since we have used the official guidelines by SKAT (the Danish tax administration) we believe that the content of the guidelines is in accordance with the law.

3.1 Pre-modeling

1. Download Protégé-OWL from http://protege.stanford.edu/download/release/full/ and install it. Make sure you can start Protégé in OWL-mode (logic view). When started, and if you select the Class tab, it should look like Figure 1.
2. Download [2] and read it. This is important because many of the constructions we use are explained herein.

Figure 1: Protégé-OWL class tab, logic view.

3.2 Modeling

First you must decide which legal source(s) you want to model. In our case we used the official guideline *Moms - fakturering, regnskab mv., E nr. 27, Version 5.2 digital, 19. januar 2005*.

3.2.1 Overall framework

Modeling should start with a read-through of the legal source. Based on this, general (to be refined later) classes such as **Location**, **Goods**, **Services** and **FreeOfVAT**, together with attributes such as **hasDeliveryType** and **hasSalesPrice**, can be created as subclasses of the built-in top-level class **owl:Thing**. An attribute can usually take on at most a finite number of values. In that case we use value partitions to model them, as described in [2][p. 73-76]\(^4\). If the domain is not finite we use data type properties instead. Deciding on the overall framework helps to structure the capturing of rules in a homogeneous way and enables working in parallel (which can be needed if the legal source is large). After our read-through of the legal source we arrived at the overall framework in Figure 2.

\(^4\) An exception is the domain of truth values, which is built-in as a data type.

Figure 2: Overall framework.

**Naming Convention.** All classes, properties, individuals etc. should be given names picked from or inspired by the legal source. All names should be in the same language as the legal source (in our case Danish). Using the naming convention supported by Protégé-OWL, class and individual names should be written in Pascal notation, e.g. *InternationalOrganization*, not *internationalOrganization*, while property names are written in camel-hump notation, e.g. *someProperty*.
Typically a property is used to assign an attribute to a class. In this case we prefix the name of the property with a verb describing the kind of relation the class has along that property, e.g. *hasNumberOfSides* or *isFragile*.

3.2.2 Rule modeling - step I

Having modeled the overall framework, it is time to go through the legal source one section at a time, looking for rules that should be modeled. Here we give an elaborate description of how to model a single rule from the legal source, starting from the overall framework in Figure 2. In Sections 4 and 5 we give a brief description of how to model other rules. Together the modeling of these rules covers all the constructions we have used in our VAT model. Since our legal source is in Danish we present the rules in their original Danish phrasing together with a translation into English.

Table 1: Extract from the legal source [4][p. 9] and its translation into English:

> *Sales outside EU (3rd countries). No VAT should be added to goods delivered to destinations outside the European Union, or to the Faroe Islands or Greenland. This fact ordinarily also applies to services, but VAT should be added to certain services.* (Translated from [4][p. 9])

Table 2: Necessary and sufficient conditions for application of the rule in Table 1.

- The rule concerns sales.
- The rule concerns both goods and services.
- The place of delivery must be outside the European Union, or the Faroe Islands or Greenland.

Now let us consider the rule shown in Table 1. Since our model is only a prototype we make a slight simplification and assume that the rule also applies to all services. With this simplification we can identify the necessary and sufficient conditions for application of the rule. These are shown in Table 2.

In order to model the necessary and sufficient conditions in Table 2 we must add some attributes to `VarerOgYdelser`. The first and second condition in Table 2 tell us that we must be able to model that goods and services are sold\(^5\). We do that by adding an attribute to the class `VarerOgYdelser` (translates into `GoodsAndServices`) which already exists in our overall framework. Attributes are modeled using functional properties. In accordance with our naming convention we select the name `harLeveranceType` (translates into `hasDeliveryType`). Since there is a finite number of delivery types we model this attribute as a value partition, i.e. an enumeration. Value partitions can be created using a built-in wizard\(^6\). Just as in [2] we store value partitions as subclasses of the class `ValuePartitions`. The reason plain enumerations are not used is that they cannot be sub-partitioned. Using value partitions we retain the possibility of further refining the concepts the value partitions model.

\(^5\) Instead of being sold, goods can also be used as e.g. a trade sample. See [4][p. 8-9] for other examples.
\(^6\) Menu ▶ Tools ▶ Patterns ▶ Value Partition...

Remark. Technically, enumerations are constructed by defining a class in terms of a finite set of individuals plus a functional property that has this class as its range. Since individuals are atoms they cannot be subdivided. On the other hand, a value partition is defined using a functional property having as its range a class defined as the union of its subclasses, all of which are distinct. These subclasses can (because they are classes) be partitioned into more subclasses if needed.
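As a rough illustration of the Remark, the value partition for delivery types might be declared along the following lines in Manchester syntax. This is a sketch only: the second delivery type, *Udtagning*, is a hypothetical name for the non-sale deliveries mentioned in footnote 5, and the axioms generated by the Protégé wizard may differ in detail.

```
ObjectProperty: harLeveranceType
    Characteristics: Functional
    Range: LeveranceType

Class: LeveranceType
    EquivalentTo: Salg or Udtagning

Class: Salg
    SubClassOf: LeveranceType

Class: Udtagning
    SubClassOf: LeveranceType

DisjointClasses: Salg, Udtagning
```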
Having created the value partition harLeveranceType, which can have Salg (translates into Sale) as a value, we need to add it as an attribute to the class VarerOgYdelser. This is done by adding to the necessary conditions an existential quantification over the corresponding property having the value partition (or data type, in case of a data type attribute) as its range. Thus we add harLeveranceType some LeveranceType to VarerOgYdelser. The third condition tells us that we must be able to model that goods and services have a place of delivery. A read-through of the legal source tells us that only three places are needed, namely Denmark, EU and non-EU. Thus this attribute, which we name harLeveranceSted (translates into hasPlaceOfDelivery), must be modeled as a value partition. Having modeled these attributes the class VarerOgYdelser looks as shown in Figure 3.

Figure 3: Class and property view after adding attributes.

3.2.3 Rule modeling - step II

Now we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget (translates into FreeOfVAT). Following our naming convention we name the class MomsfritagetSalgAfVarerOgYdelserTilIkke-EU (translates into VATFreeSalesOfGoodsAndServicesInNon-EU). Then we add a textual description of the rule, and a reference to where in the legal source the rule stems from, to the rdfs:comment field. Next we must specify necessary and sufficient conditions on membership in MomsfritagetSalgAfVarerOgYdelserTilIkke-EU. It is important to remember that if a class has two sets of necessary and sufficient conditions then they must imply each other, see [2][p. 98]. Based on the necessary and sufficient conditions captured in Table 2 we add the following necessary and sufficient conditions to MomsfritagetSalgAfVarerOgYdelserTilIkke-EU:

- VarerOgYdelser
- harLeveranceSted some Ikke-EU
- harLeveranceType some Salg

The result is shown in Figure 4.

Figure 4: Asserted Conditions of our model of the legal rule in Table 1.

4 VAT Exemption 2: Sales to Embassies

In this section and onwards we will not mention when to add references to the legal source in rdfs:comment fields of classes and properties. The rule of thumb is that this should always be done. Now let us consider the rule in Table 3. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 4.

Table 3: Extract from the legal source and its translation into English:

> Salg til ambassader. Du skal ikke beregne moms af varer og transportydelser, som du leverer til ambassader og internationale organisationer i andre EU-lande. [4][p. 9]

> *Sales to embassies. VAT should not be added to goods and transport services delivered to embassies and international organizations in countries within the European Union.* (Translated from [4][p. 9])

Table 4: Necessary and sufficient conditions for application of the rule in Table 3.

- The rule concerns sales.
- The rule concerns goods and transport services.
- The place of delivery must be in the European Union.
- The buyer must be an embassy or an international organization.

4.1 Rule modeling - step I

We are already able to model that the rule concerns sale and that the place of delivery must be in EU. We cannot model the specific service transportation yet. Therefore we must add it to our model.
Since it is a service, it should be modeled as a subclass of Services. We name the class modeling the service transportation Transport (translates into Transportation). Now we can model that something belongs to the set of goods and transport services by requiring membership of Varer ⊔ Transport. Finally, we must be able to model that the buyer is an embassy or an international organization. Since there are only finitely many different kinds of buyers we model this as a value partition, and because this attribute applies to both Varer and Transport we add it to their most specific common super-class, which is VarerOgYdelser. We name this attribute harKøberType (translates into hasKindOfBuyer). After having done all this the model looks as shown in Figure 5.

Figure 5: The model after adding classes and attributes as described in Section 4.1.

4.2 Rule modeling - step II

Having added all the necessary classes and attributes to the model we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of Momsfritaget. Following our naming convention we name the class MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU (translates into VATFreeSalesToEmbassiesAndInternationalOrganizationsInEU). Based on the necessary and sufficient conditions captured in Table 4 we add the following necessary and sufficient conditions to MomsfritagetSalgTilAmbassaderOgInternationaleOrganisationerIEU:

- harLeveranceType some Salg
- Varer ⊔ Transport
- harLeveranceSted some EU
- harKøberType some AmbassadeOgPersonaleMedDiplomatiskeRettigheder

The result is shown in Figure 6.

5 VAT Exemption 3: Sales in other EU countries

In this section we consider one final rule, the rule in Table 5. We identify the necessary and sufficient conditions for application of the rule. These are shown in Table 6.

Table 5: Extract from the legal source [4][p. 8] and its translation into English:

> *Sales in other EU countries. No VAT should be added to goods delivered to companies in other EU countries, provided that the companies are registered for VAT. In this case you must acquire the VAT registration number of the company.* (Translated from [4][p. 8])

Table 6: Necessary and sufficient conditions for application of the rule in Table 5.

- The rule concerns sales.
- The rule concerns goods.
- The place of delivery must be in the European Union.
- The buyer must be registered for VAT.
- You must acquire the VAT registration number of the company.

5.1 Rule modeling - step I

We are already able to model that the rule concerns sale of goods delivered inside the European Union. The new thing is that we must be able to indicate whether a buyer is registered for VAT and, if so, we must register the buyer's VAT registration number. We use a functional data type property named erKøberMomsregistreret (translates into isTheBuyerRegisteredForVAT) with the data type xsd:boolean as its range to model whether the buyer is registered for VAT. Similarly, we use a functional data type property named erKøbersMomsnummer (translates into isBuyersVATRegistrationNumber) with the data type xsd:string as its range to register the buyer's VAT registration number if he has one. A read-through of [4] will reveal that you must register the VAT registration number of the buyer exactly when the buyer is registered for VAT.
Thus we model this as a property of *VarerOgYdelser* and not of *Varer* (as indicated by the rule). The requirement can be modeled as follows:

\[
(\text{erKøberMomsregistreret has true} \sqcap \text{erKøbersMomsnummer exactly 1}) \;\sqcup\; (\text{erKøberMomsregistreret has false} \sqcap \text{erKøbersMomsnummer exactly 0})
\]

The result is shown in Figure 7.

5.2 Rule modeling - step II

Having added the necessary attributes to the model we are ready to model the rule itself. Since the rule describes a situation where you do not have to pay VAT, we model it as a subclass of *Momsfritaget*. Following our naming convention we name the class *MomsfritagetSalgTilAndreEU-lande* (translates into VATFreeSalesToOtherEUCountries). Based on the necessary and sufficient conditions captured in Table 6 we add the following necessary and sufficient conditions to *MomsfritagetSalgTilAndreEU-lande*:

- harLeveranceType some Salg
- Varer
- harLeveranceSted some EU
- erKøberMomsregistreret has true

We note that the obligation to register the buyer's VAT registration number is modeled indirectly, see Section 5.1. The result is shown in Figure 8.

6 Future work

Since this is work in progress there are a lot of areas we need to address. In the near future we plan to integrate our model in a prototype ERP system as described in the introduction. This opens the possibility of modeling the parts of the Danish VAT legislation concerning depreciation and VAT reporting (since they are intertwined and contain a lot of technical requirements on the financial reports). We also need to model other countries' VAT rules in order to confirm that Danish VAT rules are indeed representative with respect to the constructions that are needed in the modeling language. Based on this we need to refine our overall framework such that it captures the common structure, and we need to identify what kinds of questions a model must be able to answer. The synthesized knowledge from modeling the VAT rules of other countries should also result in a more detailed analysis of what we can and cannot model. Based on all this we should design a minimal description logic extended with the needed functionality identified in the analysis just mentioned, such as predicates like \( x < 100 \) which are needed in some rules. We should also provide a reasoner for the logic together with an editor such that the above process can be repeated. Finally, in order to compare our OWL model with a different approach, we want to model the rules we have already formalized in OWL using Datalog, the de facto standard language for expressing rules in deductive databases. It would also be interesting to try a hybrid solution, e.g. OWL plus a rule language like SWRL. This work is independent of the tasks mentioned above and can be carried out in parallel.

7 Conclusion

We have shown how to model a subset of Danish VAT rules concerning exemption from VAT using Protégé-OWL. First we created an overall framework for the VAT model with the property that legal rules and the concepts they involve can be modeled as subclasses of existing classes in the framework. This helps to ensure that related concepts are modeled in the same way and that a single concept is not modeled twice. The second step was an iterative process consisting of two steps repeated for each rule.
The first step is to extend the model such that the rule in question can be modeled. This is done by modeling concepts from the legal source as classes in the model and by adding attributes to the necessary conditions of such classes. The second step is to model the rule itself. This is done by adding specific requirements for application of the rule to the necessary and sufficient conditions of the class modeling the rule. The step-by-step iterative modeling has worked well in practice, and an extension to cover several different VAT and duty rates does not seem to be problematic as long as they do not require us to model restrictions such as \( x < 100 \), which is not supported directly in OWL\(^7\). Apart from modeling inequalities we have not had modeling problems. One problem, though, is that reasoning about individuals in OWL models is not supported very well. Therefore we have tried to avoid the use of individuals wherever possible (using value partitions).

\(^7\) Whether this is a weakness of OWL, or just us trying to use OWL for something it was not designed to do, is unclear.

References
{"Source-Url": "https://static-curis.ku.dk/portal/files/15432526/nielsen-simonsen-larsen.pdf", "len_cl100k_base": 5767, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 30346, "total-output-tokens": 6875, "length": "2e12", "weborganizer": {"__label__adult": 0.0005974769592285156, "__label__art_design": 0.0008206367492675781, "__label__crime_law": 0.003955841064453125, "__label__education_jobs": 0.00296783447265625, "__label__entertainment": 0.00017464160919189453, "__label__fashion_beauty": 0.00034546852111816406, "__label__finance_business": 0.021728515625, "__label__food_dining": 0.0005736351013183594, "__label__games": 0.00098419189453125, "__label__hardware": 0.0008902549743652344, "__label__health": 0.0010223388671875, "__label__history": 0.0005273818969726562, "__label__home_hobbies": 0.0002655982971191406, "__label__industrial": 0.00150299072265625, "__label__literature": 0.0008058547973632812, "__label__politics": 0.0013294219970703125, "__label__religion": 0.00048828125, "__label__science_tech": 0.1226806640625, "__label__social_life": 0.0002143383026123047, "__label__software": 0.06390380859375, "__label__software_dev": 0.77197265625, "__label__sports_fitness": 0.0002803802490234375, "__label__transportation": 0.0014629364013671875, "__label__travel": 0.0003476142883300781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27021, 0.03721]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27021, 0.49703]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27021, 0.90179]], "google_gemma-3-12b-it_contains_pii": [[0, 441, false], [441, 3080, null], [3080, 6325, null], [6325, 9174, null], [9174, 9270, null], [9270, 11430, null], [11430, 14223, null], [14223, 17024, null], [17024, 17083, null], [17083, 17948, null], [17948, 20690, null], [20690, 20775, null], [20775, 21706, null], [21706, 23544, null], [23544, 26205, null], [26205, 26470, null], [26470, 27021, null]], "google_gemma-3-12b-it_is_public_document": [[0, 441, true], [441, 3080, null], [3080, 6325, null], [6325, 9174, null], [9174, 9270, null], [9270, 11430, null], [11430, 14223, null], [14223, 17024, null], [17024, 17083, null], [17083, 17948, null], [17948, 20690, null], [20690, 20775, null], [20775, 21706, null], [21706, 23544, null], [23544, 26205, null], [26205, 26470, null], [26470, 27021, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27021, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27021, null]], "pdf_page_numbers": [[0, 441, 1], [441, 3080, 2], [3080, 6325, 3], [6325, 9174, 4], [9174, 9270, 5], [9270, 11430, 6], [11430, 14223, 7], [14223, 17024, 8], [17024, 17083, 
9], [17083, 17948, 10], [17948, 20690, 11], [20690, 20775, 12], [20775, 21706, 13], [21706, 23544, 14], [23544, 26205, 15], [26205, 26470, 16], [26470, 27021, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27021, 0.01863]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
85b50f418d9e7ca1694ff7f1acb73911c5d30c8a
[REMOVED]
{"Source-Url": "http://lamda.nju.edu.cn/shist/file/PAKDD19-DeepReview.pdf", "len_cl100k_base": 6508, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 32741, "total-output-tokens": 8271, "length": "2e12", "weborganizer": {"__label__adult": 0.0004267692565917969, "__label__art_design": 0.000301361083984375, "__label__crime_law": 0.00032448768615722656, "__label__education_jobs": 0.0005803108215332031, "__label__entertainment": 5.239248275756836e-05, "__label__fashion_beauty": 0.00016868114471435547, "__label__finance_business": 0.0001798868179321289, "__label__food_dining": 0.0003330707550048828, "__label__games": 0.00051116943359375, "__label__hardware": 0.0007715225219726562, "__label__health": 0.0004346370697021485, "__label__history": 0.0001316070556640625, "__label__home_hobbies": 7.37905502319336e-05, "__label__industrial": 0.0002913475036621094, "__label__literature": 0.0001742839813232422, "__label__politics": 0.0001958608627319336, "__label__religion": 0.00035572052001953125, "__label__science_tech": 0.007640838623046875, "__label__social_life": 7.76052474975586e-05, "__label__software": 0.00479888916015625, "__label__software_dev": 0.9814453125, "__label__sports_fitness": 0.0003142356872558594, "__label__transportation": 0.00039076805114746094, "__label__travel": 0.0001806020736694336}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31088, 0.06562]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31088, 0.31035]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31088, 0.88843]], "google_gemma-3-12b-it_contains_pii": [[0, 2342, false], [2342, 4844, null], [4844, 7853, null], [7853, 10578, null], [10578, 11738, null], [11738, 14057, null], [14057, 16860, null], [16860, 19433, null], [19433, 23083, null], [23083, 24934, null], [24934, 27778, null], [27778, 31088, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2342, true], [2342, 4844, null], [4844, 7853, null], [7853, 10578, null], [10578, 11738, null], [11738, 14057, null], [14057, 16860, null], [16860, 19433, null], [19433, 23083, null], [23083, 24934, null], [24934, 27778, null], [27778, 31088, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31088, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31088, null]], "pdf_page_numbers": [[0, 2342, 1], [2342, 4844, 2], [4844, 7853, 3], [7853, 10578, 4], [10578, 11738, 5], [11738, 14057, 6], [14057, 16860, 7], [16860, 19433, 8], [19433, 23083, 9], [23083, 24934, 10], [24934, 27778, 11], [27778, 31088, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31088, 0.10625]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
3150f93b8a5b15e21cf116a0e2411f56407d5404
Goals as Constraints: Writing miniKanren Constraints in miniKanren EVAN DONAHUE, University of Tokyo, Japan We present an extension to the relational programming language miniKanren that allows arbitrary goals to run efficiently as constraints. With this change, it becomes possible to express a large number of commonly used constraints in pure miniKanren without modifying the underlying implementation. Moreover, it also becomes possible to express a number of new constraints that have proven difficult to realize within existing constraint authoring frameworks. We believe this approach represents a promising avenue for further extending the expressiveness of miniKanren’s constraint handling capabilities. 1 INTRODUCTION Most non-trivial miniKanren programs depend on the use of constraints beyond unification. However, in many current implementations, adding new constraints requires modifying the underlying constraint solver itself, which requires deep knowledge of the implementation. The situation is somewhat improved by past work on constraint authoring frameworks [1, 11], which separate constraint authoring from core language development. However, even with the use of such frameworks, some complex constraints remain difficult to express. In this paper, we propose using miniKanren itself as a language for constraint authoring. As we demonstrate, using only the core operators of miniKanren, it is easy to express a wide range of common constraints, as well as a number of novel constraints that are difficult to express in non-relational host languages. Moreover, we show that such a constraint language interoperates well with host language constraint frameworks that are better suited for expressing constraints that cannot be expressed in pure miniKanren, such as numeric constraints. The key idea of this paper is that constraint solving in miniKanren can be viewed as a natural extension of the normal miniKanren search procedure, and can therefore be implemented as a modified miniKanren interpreter in which constraints are represented simply as normal miniKanren goals. The remainder of the paper is structured as follows: Section 2 describes the interface extensions made to the language to allow the specification of constraints and presents a list of example implementations of several constraints. Section 3 describes the implementation in detail. Section 4 discusses related work. 2 INTERFACE In this section, we introduce three new forms and implement several constraints from previous work to illustrate their use. constraint (2.1) converts miniKanren goals into constraints. pconstraint (2.2) defines new primitive constraints besides ==. Finally, noto (2.3) negates miniKanren goals. 2.1 constraint: miniKanren Goals as Constraints The constraint form wraps arbitrary miniKanren goals and redefines their semantics. Normally, conde generates multiple search branches and conjoins one child disjunct to each branch. Wrapped with constraint, however, it instead generates a disjunction constraint and conjoins it with the state corresponding to the current branch. Should each disjunct fail, the branch fails, as in the following examples. 2.1.1 booleano. The simplest non-trivial constraint we can write using constraint is the booleano constraint [11]. booleano constrains a variable to be either #t or #f. 
Using constraint, booleano could be written as follows:

```scheme
(define (booleano v)
  (constraint
    (conde
      [(== v #t)]
      [(== v #f)])))
```

Assuming \(v\) is free, this constraint will suspend itself in the constraint store and await unification. When \(v\) is unified, the constraint activates and checks that \(v\) is either #t or #f. If it is one of those two values, the constraint is satisfied and it is removed from the store. If it is bound to a different ground term, the constraint fails. Otherwise, if it is bound to a variable, the constraint returns to the constraint store. Likewise, if \(v\) ever becomes disequal to either #t or #f, the disjunction will collapse and the constraint will unify the remaining value in the substitution before removing itself from the store.

### 2.1.2 listo

listo checks that a term unifies with a proper list [11]. This constraint lazily walks the list and confirms that it ends—if its tail is ever fully bound—with a null list.¹

```scheme
(define (listo l)
  (constraint
    (conde
      [(== l '())]
      [(fresh (h t)
         (== l (cons h t))
         (listo t))])))
```

listo in particular among the constraints introduced so far illustrates the duality of goals and constraints in this framework. Without the constraint form, listo would simply be a normal miniKanren goal that generates proper lists. It would be perfectly possible to define listo as a generative miniKanren goal and then wrap it using constraint only at the call site to turn it into a constraint at the programmer's discretion. Any miniKanren program that generates any arbitrary structure can likewise be turned into a constraint that tests for that structure using the constraint form.

Importantly, here and for the rest of the paper, when we write fresh, we in fact refer to a pattern matching form, matcho, that will be described in Section 3.5. matcho has proven easier to work with for the purposes of implementing this constraint system. It is still possible to define fresh appropriately for use in constraints, and so we use it for greater familiarity in the code examples, but we will not cover its implementation in detail in this paper.

### 2.1.3 presento

The final constraint in this section, presento, is to our knowledge novel in this paper. presento can be understood to be the logical negation of absento. Instead of asserting that a given value must not appear anywhere in a term, presento asserts that a given value must appear somewhere in the term.

```scheme
(define (presento present term)
  (constraint
    (conde
      [(== term present)]
      [(fresh (h t)
         (== term (cons h t))
         (conde
           [(presento present h)]
           [(presento present t)]))])))
```

¹One minor limitation is that, unlike the generative version of the relation, the constraint version never grounds the end of the list with null if it is not bound elsewhere in the program. Instead, it reifies as a suspended form of the waiting constraint. We are currently exploring modifications to the reifier that may resolve this issue.
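To illustrate, the following hypothetical query uses presento as just defined; it assumes a standard run interface, and the behavior shown is a sketch of what the description above implies rather than verified output.

```scheme
;; A hypothetical presento query: 1 must appear somewhere in q.
(run 1 (q)
  (fresh (h t)
    (== q (cons h t))   ; q is a pair
    (== h 'a)           ; ...whose head is the symbol a
    (presento 1 q)))    ; so 1 must appear in the tail t,
;; and the answer reifies with a suspended presento constraint on t
```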
presento is much more difficult to implement than absento using existing constraint frameworks due to the way in which it is fundamentally disjunctive. Because the constraint store implicitly conjoins all contained constraints, absento can insert its child constraints independently into the store. presento, by contrast, must guarantee that, for instance, the child constraint on a list's head must not fail—even if it otherwise would—if the constraint on the tail succeeds. This dependency between the child constraints requires additional bookkeeping that complicates the architecture of the constraint and the store. The complexity increases further if the constrained value is a complex list term containing free variables, as the constraint may in that instance need to handle the tree traversal logic within the context of a complex unification that cannot be resolved immediately. In the present framework, however, both presento and absento can be expressed with roughly the same order of implementation complexity.

2.2 pconstraint: Primitive Constraint Constructor

In the previous section, only == was used as a primitive goal. While == allows for a wide range of constraints on structures miniKanren is natively capable of generating, it is insufficient to define the full range of constraints usually present in miniKanren implementations. In particular, defining type constraints such as symbolo or numbero would require a disjunction of unbounded size, which cannot efficiently be represented within a miniKanren program.

To support such constraints, this implementation defines the pconstraint form that acts as a constructor for new primitive constraints. pconstraint accepts a list of variables on which the constraint depends, a function responsible for checking the constraint, and an arbitrary Scheme value to be passed as auxiliary data into the constraint checking function. Whenever one of the constrained variables is updated, the function receives the variable, its updated value, any constraints on the variable, and the auxiliary value. The function must return either a simplified pconstraint, or a trivial succeed or fail constraint. pconstraint was designed specifically to implement type constraints, and it may be necessary to further extend the system to handle other primitive constraints. We leave such considerations to future work.

2.2.1 symbolo & numbero. In this section we define a general typeo relation and specialize it to arrive at versions of the usual symbolo and numbero constraints common to many miniKanren systems.

```scheme
(define (typeo v t?)
  (if (var? v)
      (pconstraint (list v) type-check t?)
      (if (t? v) succeed fail)))

(define (type-check var val constraint t?) ...)

(define (symbolo v) (typeo v symbol?))
(define (numbero v) (typeo v number?))
(define (pairo v)   (typeo v pair?))
```

typeo accepts a value or variable and a function responsible for type checking, such as symbol?. If it receives a value, it simply returns the trivial fail or succeed goal. If instead it receives a variable, it constructs a pconstraint, represented as a tagged vector of its three arguments: the singleton list of the variable v, the auxiliary data, which in this case is the type checking function symbol?, and a function responsible for performing the type check, type-check. The type checking function, type-check, at present requires some knowledge of the internal representations used by the solver to implement. In practice, simpler interfaces can likely be defined to handle common constraint types.

The type-check function is called each time a variable on which the constraint depends is bound, and it accepts as arguments the variable, the value (or variable) to which it has been bound, the auxiliary data (in this case, the type predicate t?), and a constraint goal used to check constraint-constraint interactions. The constraint argument is another primitive constraint bound to var, such as another type constraint. The auxiliary value of primitive constraints can be used to check their interactions, such as failing when two incompatible type constraints are bound to the same variable.
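To make the contract concrete, the following is a minimal sketch of a type-checking function satisfying this description; it ignores constraint-constraint interactions entirely and is illustrative rather than the actual implementation.

```scheme
;; Minimal sketch of a type-check function for pconstraint. It ignores the
;; `constraint` argument (constraint-constraint interactions) for simplicity.
(define (type-check var val constraint t?)
  (cond
    [(var? val) (pconstraint (list val) type-check t?)] ; still free: re-suspend on the new variable
    [(t? val)   succeed]                                ; bound to a well-typed value
    [else       fail]))                                 ; bound to an ill-typed value
```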
2.3 noto: Negating Goals and Constraints

Negation has been explored from a variety of angles in past work on miniKanren [14, 19]. In this implementation, noto generalizes the usual case analysis used to perform disequality checking. It runs its subgoal and negates the result. If the subgoal succeeds, noto fails. If the subgoal fails, it succeeds. If the subgoal returns any other constraint, that constraint is negated and placed into the store. This scheme allows for the expression of a number of constraints that depend on negation, beginning with =/=.

2.3.1 =/=. Because noto generalizes disequality solving, expressing disequality is trivial.

```scheme
(define (=/= aux term) (noto (== aux term)))
```

2.3.2 not-symbolo, not-numbero. noto generalizes in the same fashion to other primitive constraints besides ==.

```scheme
(define (not-symbolo v) (noto (symbolo v)))
(define (not-numbero v) (noto (numbero v)))
(define (not-pairo v)   (noto (pairo v)))
```

2.3.3 not-booleano, not-listo. Complex constraints built with conjunction, disjunction, and fresh also work as expected.

```scheme
(define (not-booleano v) (noto (booleano v)))
(define (not-listo v)    (noto (listo v)))
```

2.3.4 absento. Using disequalities and negated type constraints, it becomes possible to define the familiar absento constraint.

```scheme
(define (absento absent term)
  (constraint
    (=/= term absent)
    (conde
      [(noto (typeo term pair?))]
      [(fresh (h t)
         (== term (cons h t))
         (absento absent h)
         (absento absent t))])))
```

It is also possible to implement absento as a negation of presento, or vice versa.
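As a sketch of how a negated compound goal decomposes (via the De Morgan treatment described later in Section 3.3), consider not-booleano; the answer shown is the expected behavior, not verified output.

```scheme
;; (noto (booleano v)) negates the stored disjunction:
;;   (noto (conde [(== v #t)] [(== v #f)]))
;; which, by De Morgan's laws, is the conjunction of the negated disjuncts:
;;   (conj (=/= v #t) (=/= v #f))
;; so any non-boolean term satisfies it:
(run 1 (v) (not-booleano v) (== v 'sym))  ; => (sym)
```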
3 IMPLEMENTATION

The implementation as a whole is composed of a pair of miniKanren interpreters. The first—the "stream" interpreter—interprets conde and fresh as stream constructors that generate the interleaving search tree. All other goals are viewed as constraints and are passed to the "constraint" interpreter to check for unsatisfiability within a given branch. As the constraint solver is a miniKanren interpreter, the constraints themselves are normal miniKanren goals, implemented here as first order structures. The constraint interpreter defines ==, constraint, pconstraint, noto, succeed, and fail. It also redefines conde and fresh for the constraint solving search context.

The constraint interpreter performs a depth-first miniKanren search bounded by the rule that fresh goals must suspend when the variables on which they depend are free.² Because constraints within this framework may contain conde, a given miniKanren goal, viewed as a constraint, may imply a disjunction between any number of conjunctions of simpler constraints. The goal of the constraint interpreter's search is to find one such subset of mutually satisfiable primitive constraints entailed by a single constraint store, much in the same way the stream interpreter must search for one subset of mutually satisfiable constraints entailed by the program overall. In the next several sections, we review the implementation of the constraint solver. Code examples have been simplified for greater readability. The implementation is open source, and the source code should be consulted for more detail.

### 3.1 Conjunction

The primary interface to the constraint solving interpreter is via the solve-constraint function. Consider the following partial listing:

```scheme
(define (solve-constraint g s ctn resolve delta)
  (cond
    [(succeed? g) (if ... (solve-constraint ctn s succeed resolve delta))]
    [(conj? g) (solve-constraint (conj-lhs g) s (conj (conj-rhs g) ctn) resolve delta)]
    ...))
```

The interpreter accepts the constraint goal to be solved, g, the state, s, and three additional goals, ctn, resolve, and delta. These naming conventions will remain consistent throughout the rest of the paper. g and s are self-explanatory. ctn is so named due to a structural analogy with continuations and continuation-passing style. The interpreter is written in a depth-first manner using a "conjunction-passing style" in which the future of the computation, ctn, represented as the conjunction of all goals to the "right" of the currently evaluated goal, is passed as an argument to the solver.

When the interpreter receives a conjunction for the current goal g, it calls itself recursively on the left-hand side while conjoining the right-hand side to the current ctn. When the solver later finishes solving the current constraint g, it will be called with the trivial succeed goal as the current constraint, which will prompt the interpreter—subject to conditions discussed in more detail in the following sections—to proceed with solving the next conjunct of the current ctn.

Concretely, calling the solver with \(g = x \neq 1 \wedge y \neq 2\) and \(ctn = z \neq 3\) will first trigger the conjunction condition, calling the solver recursively with \(g = x \neq 1\) and \(ctn = y \neq 2 \wedge z \neq 3\), and then subsequently with \(g = \textit{succeed}\), and then \(g = y \neq 2\) and \(ctn = z \neq 3\), provided that none of the constraints fail.

### 3.2 Unification

Consider the following partial listing of the unification solver, which is called from solve-constraint when g is a unification constraint:

```scheme
1. (define (solve-== g s ctn resolve delta)
2.   (let-values ([(bindings recheck s) (unify s (==-lhs g) (==-rhs g))])
3.     (if (fail? bindings) (values fail failure)
4.         (solve-constraint succeed s ctn (conj recheck resolve) (conj delta bindings)))))
```

This definition of unification will look familiar from its standard implementation elsewhere. The unifier is called, and the resulting state is checked for failure. If it has not failed, the solver proceeds to run any constraints that need to be rechecked based on the new bindings.

Line 2 calls out to a unifier that works like most miniKanren unifiers, with the exception that it returns two goals in addition to the state. bindings is a conjunction of unification goals representing the extensions made to the state s.³ recheck represents the conjunction of constraints on all of the newly bound variables.

The next two lines illustrate the remainder of the plumbing of the solver. Line 3 checks whether the unification has failed by checking whether the bindings consist of the trivial fail goal, and if so returns the failure signature—the trivial fail goal and the failure stream.

²Recall that fresh in this case is implemented as a pattern matching form that possesses explicit references to the variables on which it depends.

³This is analogous to the newly extended prefix of the substitution in association list based implementations, but represented using explicit first-order goals rather than a list of bindings.
The failure stream corresponds to the failure mode of the input state s, and the fail goal likewise corresponds to the failure mode of the input parameter delta, which is a first order representation of the constraints that have been added to s during this execution of the constraint solver.

Consider line 4. The unification constraints representing the new bindings are conjoined to delta and passed to further solving. Should the current constraints ultimately prove satisfiable, the constraint solver will return s and delta, both of which contain the information about which bindings were made at this stage in the solver. delta can be viewed as an extension of the representation of the state that tracks changes made during solving. It is primarily useful during negation and disjunction, as the current state representation is difficult to negate or disjoin. A constraint without negation or disjunction will ultimately discard delta and simply return s as the product of solving.

The final architectural element of the solver is the resolve constraint, which is conceptually equivalent to the ctn constraint. Both are conjunctions of goals waiting to be solved. The difference is that ctn contains the constraints remaining to be solved from the initial constraint received from the stream interpreter, whereas resolve contains constraints that started out already in the store and were removed by, for instance, a unification, and must be re-solved later. As such, the constraints relevant to the current unification, recheck, are conjoined with resolve before further solving, and will later be pulled out and solved once ctn has been exhausted.

Intuitively, while constraints received from the goal interpreter and stored in ctn are necessarily not yet reflected in the state, constraints conjoined to resolve were initially in the state when the constraint interpreter began solving the current constraint. As such, delta must contain a record of the changes made to the state, which corresponds to the logical simplification of the ctn constraint, whereas it need not contain re-solved constraints already contained in the state, and so resolve may be discarded from the final output, although it must be checked to ensure consistency. This distinction is important for the correctness of the negation constraint, as discussed in Section 3.3.

### 3.3 Negation

Generalized negation operates analogously to the specialized case of disequalities. The same case analysis by which disequality constraints interpret the results of unification can be applied to general constraints such as type constraints and others defined with pconstraint. Negated constraints simply solve their child constraints and invert the result, converting succeed to fail, fail to succeed, and non-trivial constraints to their negations. Conjunctions and disjunctions are negated using De Morgan's laws in the usual way. Consider the following listing:

```scheme
1. (define (solve-noto g s ctn resolve delta)
2.   (let-values ([(d s2) (solve-constraint g s succeed succeed succeed)])
3.     (solve-constraint succeed (store-constraint s (noto d)) ctn resolve (conj delta (noto d)))))
```

The positive version of g is solved recursively on line 2, and the resulting delta, d, is then negated before being returned to the store and simultaneously to the delta constraint on line 3.

Note that the initial call to solve-constraint is invoked with ctn, resolve, and delta all set to succeed. This creates a distinct, "hypothetical" context in which the solver can evaluate the positive version of the goal in isolation and without reference to future conjuncts of the original negated goal. This results in the returned delta, d, containing only changes made by the positive version of goal g and not by the right-hand conjuncts of the original goal. As a result, d can simply be negated and returned to the store before further solving. Had \(y = 1\) been passed as ctn, for example, it would have returned conjoined to d, and subsequently negated to \(y \neq 1\) before being returned to the store, which is not correct.
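The following hypothetical trace illustrates the effect; the reified answers are sketches of the expected behavior, not verified output.

```scheme
;; Sketch: solving (noto (== x 5)) with x free.
;; 1. The positive goal (== x 5) is solved hypothetically, yielding
;;    d = (== x 5) while leaving the real state untouched.
;; 2. d is negated to (noto (== x 5)), i.e. (=/= x 5), and stored.
(run 1 (x) (noto (== x 5)) (== x 6))  ; => (6)
(run 1 (x) (noto (== x 5)) (== x 5))  ; => ()
```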
Before proceeding with the remaining constraints, two remarks are in order. First, it is now possible to return, briefly, to the solve-constraint function and its handling of succeed:

```scheme
1.  (define (solve-constraint g s ctn resolve delta)
2.    (cond
3.      [(succeed? g)
4.       (if (succeed? ctn)
5.           (if (succeed? resolve)
6.               (values delta s)
7.               (let-values ([(d s) (solve-constraint resolve s succeed succeed delta)])
8.                 (if (fail? d) (values fail failure)
9.                     (values delta s))))
10.          (solve-constraint ctn s succeed resolve delta))]
11.     ...))
```

When g is succeed, constraints are first pulled from ctn on line 10, as described earlier. Once ctn has been exhausted, the constraints removed from the state to be rechecked as a result of the solving process, contained in resolve, are solved on line 7. However, if resolve is solved, only the original delta is returned on line 9, not the subsequently solved d. This change ensures that during negation solving, constraints removed from the store do not pollute the returned delta and become incorrectly negated.⁴ Finally, once all future constraints have been exhausted, the delta values are returned along with the state on line 6.

⁴Note that this procedure necessarily throws away the work done to solve rechecked constraints. We are currently experimenting with alternative designs that retain more of that work.

### 3.4 Disjunction

Consider the following listing:

```scheme
1. (define (solve-disj g s ctn resolve delta)
2.   (let-values ([(d-lhs s-lhs) (solve-constraint (disj-lhs g) s succeed succeed succeed)])
3.     (cond
4.       [(fail? d-lhs) (solve-constraint (disj-rhs g) s ctn resolve delta)]
5.       [(succeed? d-lhs) (solve-constraint succeed s ctn resolve delta)]
6.       [else (let-values ([(d-rhs s-rhs) (solve-constraint (disj-rhs g) s succeed succeed succeed)])
7.               (if (fail? d-rhs)
8.                   (solve-constraint succeed s-lhs ctn resolve (conj delta d-lhs))
9.                   (solve-constraint succeed (store-constraint s (disj d-lhs d-rhs)) ctn resolve (conj delta (disj d-lhs d-rhs)))))])))
```

solve-disj first solves the left-hand disjunct on line 2. As in the negation solver, ctn, resolve, and delta are all succeed, which ensures that the returned constraints reflect only simplifications of constraints contained within the disjunct. If the left-hand disjunct fails, the solver simply solves the right-hand disjunct on line 4. If it succeeds, the rest of the disjuncts can be skipped. Otherwise, the right-hand side is solved on line 6 and it is disjoined with the left-hand side and returned to the store on line 9. If the right-hand side fails, the results of the left-hand side are returned to the store, reusing the state produced by solving the left-hand side as an optimization on line 8.
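The following hypothetical query sketches how a stored disjunction collapses during search; the result shown is the expected behavior rather than verified output.

```scheme
;; The wrapped conde stores (disj (== x 1) (== x 2)) as a constraint.
;; Adding (=/= x 1) fails the left disjunct, so solve-disj commits to the
;; right-hand side and x is unified with 2 in the substitution.
(run 1 (x)
  (constraint (conde [(== x 1)] [(== x 2)]))
  (=/= x 1))
;; => (2)
```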
Stepping back, the disjunction constraint finally makes clear what it means to view constraint solving as search in this instance. Each disjunction must search among its child disjuncts for at least one that does not fail in the current state. When the state moves down the right- or left-hand branches of the disjunction constraint, it accumulates one child disjunct. When it passes through all of the conjoined disjunction constraints contained in ctn or resolve, it will have ensured that there is at least one subset of disjuncts that are mutually satisfiable in the current state. Failure to find such a subset proves the unsatisfiability of the store, and the branch fails.

Satisfiability is relatively easy to check, as it only requires finding one non-failing disjunct in each disjunction. Ensuring that unifications entailed by the constraint store are added directly to the substitution, such as when booleano is conjoined with \(x \neq \#t\) and therefore unifies \(x = \#f\), requires that additional disjuncts be checked. The simplified implementation above naively checks all disjuncts, but work is ongoing to investigate possible benefits of laziness in the disjunction solver.

### 3.5 Matcho

In most cases in the current implementation, the pattern matching form matcho is used in place of fresh. matcho destructures tree terms and binds their elements to variables in a new lexical scope, as in the following example:

```scheme
(matcho ([xxs (x . xs)]) ...)
```

This form destructures xxs and binds its head and tail to x and xs, respectively, before processing child goals. The internal representation of a matcho goal consists of a list of free variables on which it depends, a list of bound values, and a closure for processing the final patterns. Solving simply involves looking up the free variables, adding them to the bound variable list as they become bound, and suspending in the constraint store on encountering a variable that is still free in the current substitution. This procedure guarantees that constraints will only run until they exhaust the bound values in the substitution, preserving the completeness of the search.

fresh can also be used in constraints, although it is more difficult to optimize. For that reason it is usually preferable to write constraints with the pattern matching form. Opaque fresh goals must expand until they yield a disjunction containing at least one non-failing disjunct. Whereas the stream interpreter would create two branches on such a disjunction, the constraint interpreter suspends the computation in the constraint store.
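As an illustration, here is a hypothetical constraint written with matcho as described above; it is a sketch, and the exact matcho semantics may differ in detail.

```scheme
;; Hypothetical constraint: once p is bound to a pair, its head must equal a.
;; matcho suspends in the store while p is free, per the description above.
(define (head-o p a)
  (constraint
    (matcho ([p (h . t)])  ; binds h and t when p's pair structure is known
      (== h a))))
```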
### 3.6 Attributed Variables

Once the constraints have been sufficiently solved, they must be added back to the constraint store so the search can progress. For simple implementations that recheck all constraints at each step, this poses no issue. However, many implementations use a version of attributed variables whereby constraints in the store are indexed by the variables on which they depend. When those variables are modified, either by unification or by the addition of another constraint, the constraints already indexed under that variable can be rechecked without wasting effort on unrelated constraints. The only question, then, is: on which variables does a given constraint depend?

With the exception of disjunction, this question is mostly straightforward. Primitive constraints such as unification depend on all of their variables, while negation and conjunction depend on all of the attributed variables of their children. Because the store itself can be viewed as a conjunction of all the constraints it contains, storing a conjunction directly in the store can be simplified to storing all of its children independently.

Disjunctions are the more difficult case, and the variables on which they depend themselves depend on the level of solving performed. For the simple solver above, it is possible to attribute disjunctions to all of their child goals' variables. However, lazy implementations can get away with fewer. Work on this subject remains ongoing. Once the attributed variables have been determined, the current implementation copies pointers to the constraint into each variable's index in the store. Constraints are stored separately to avoid stale constraints proliferating in the store.
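A minimal functional sketch of this indexing scheme follows; map-update and constraint-vars are hypothetical helpers standing in for the implementation's actual store operations.

```scheme
;; Sketch: copy a pointer to constraint c under each variable it depends on.
;; map-update and constraint-vars are hypothetical helpers, not the real API.
(define (attribute store c)
  (fold-left
    (lambda (st v) (map-update st v (lambda (cs) (cons c cs)) '()))
    store
    (constraint-vars c)))
```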
4 RELATED WORK

Within the domain of miniKanren research, this paper is most closely in conversation with prior work on constraint authoring frameworks [1, 11]. Unlike these approaches, which facilitate the development of domain specific constraints that make heavy use of specialized representations, this paper presents a strategy for leveraging only the core operators of miniKanren to express a wide variety of constraints that have to this point required such specialized implementations. The benefit of the present approach is that it greatly lowers the barrier to authoring constraints that can be expressed within this framework, not only by uniformly handling constraint optimization and interoperation, but also by allowing the expression of constraints in miniKanren, which is particularly well suited to expressing constraints on structures that are themselves expressible in miniKanren. That said, much work remains to be done on bridging the gap and allowing such constraint authoring frameworks to interoperate with the system presented in this paper, to allow for the expression of constraints that lie outside of miniKanren's core representational facilities.

More generally, the solving of simultaneous equations and disequations within the framework of logic programming has developed an extensive literature since its introduction [8]. This early work is surveyed in Comon [9]. The central design of the solver proposed in this paper generalizes the disequality constraint solver originally proposed by Bürckert [5] and further elaborated upon in Buntine and Bürckert [4], which was subsequently adapted for miniKanren by Byrd [6].

The strategy for avoiding unnecessary constraint checking, by assigning constraints to specific variables that may make them unsatisfiable if bound or further constrained, is based on what can be viewed as an implementation of attributed variables, albeit in a functional style [17]. Attributed variables, roughly, offer a general means to associate additional information with specific variables, and have found particular application in extending logic programming languages with constraint systems, as is being done here [12, 13]. The original approach of attributing disequalities to variables, on which this paper builds, originated to the best of our knowledge with Ballantyne et al. [2].

This paper also engages to a lesser extent with previous work in miniKanren concerned with the semantics of negation, universal quantification, and fresh [7, 14, 16, 18, 19]. In particular, it offers a practical implementation of negation for constraint authoring that would be interesting to compare with the more complex forms of negation studied in previous work.

5 CONCLUSION

This paper introduced an extension to miniKanren that allows for the interpretation of goals as constraints, and used this extension to implement a wide variety of useful constraints. Much work remains to be done on the constraint system itself, from further studying the effects of laziness to exploring integrations with solvers that require specialized representations. Given the range of constraints this and future related systems make it possible to express, it is also worth wondering what kinds of applications they may enable, from variations on relational interpretation to as yet unresearched domains. In particular, one of the motivating cases driving this research has been the prospect of running complex relations such as relational interpreters and relational type inferencers as constraints, and studying the effect this might have on the ability to compose such relations efficiently by letting the constraint system decompose and reorder them. Further work on the current implementation is required before such experiments can be undertaken.

Because this constraint solver reuses representations and algorithms that already exist in most miniKanren implementations, particularly those that already use first order representations of goals, and moreover because it replaces much of the code dedicated to implementing individual constraints, the implementation burden on top of an existing miniKanren system is relatively minimal. It is therefore our hope that this work can help facilitate the more rapid exploration and prototyping of new types of constraints and the new applications they enable.

6 ACKNOWLEDGMENTS

We thank Will Byrd for discussions of early versions of this idea, Evgenii Moiseenko for clarifying some points of previous work, and Greg Rosenblatt for identifying an important edge case. We also thank the anonymous reviewers for their suggestions.

REFERENCES
{"Source-Url": "http://www.evandonahue.com/pdf/donahue_goalsasconstraints2023.pdf", "len_cl100k_base": 7118, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 29653, "total-output-tokens": 8795, "length": "2e12", "weborganizer": {"__label__adult": 0.00034356117248535156, "__label__art_design": 0.0002493858337402344, "__label__crime_law": 0.0003504753112792969, "__label__education_jobs": 0.0004107952117919922, "__label__entertainment": 6.22868537902832e-05, "__label__fashion_beauty": 0.00014102458953857422, "__label__finance_business": 0.00016701221466064453, "__label__food_dining": 0.0003712177276611328, "__label__games": 0.0005893707275390625, "__label__hardware": 0.0005931854248046875, "__label__health": 0.0004181861877441406, "__label__history": 0.00018167495727539065, "__label__home_hobbies": 7.545948028564453e-05, "__label__industrial": 0.0004131793975830078, "__label__literature": 0.00024187564849853516, "__label__politics": 0.0002460479736328125, "__label__religion": 0.0004968643188476562, "__label__science_tech": 0.0139923095703125, "__label__social_life": 8.529424667358398e-05, "__label__software": 0.00457000732421875, "__label__software_dev": 0.97509765625, "__label__sports_fitness": 0.00032782554626464844, "__label__transportation": 0.0005202293395996094, "__label__travel": 0.0001779794692993164}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37064, 0.02199]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37064, 0.45697]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37064, 0.90296]], "google_gemma-3-12b-it_contains_pii": [[0, 3433, false], [3433, 6779, null], [6779, 10601, null], [10601, 13162, null], [13162, 16521, null], [16521, 20775, null], [20775, 23961, null], [23961, 27843, null], [27843, 31778, null], [31778, 35936, null], [35936, 37064, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3433, true], [3433, 6779, null], [6779, 10601, null], [10601, 13162, null], [13162, 16521, null], [16521, 20775, null], [20775, 23961, null], [23961, 27843, null], [27843, 31778, null], [31778, 35936, null], [35936, 37064, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37064, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37064, null]], "pdf_page_numbers": [[0, 3433, 1], [3433, 6779, 2], [6779, 10601, 3], [10601, 13162, 4], [13162, 16521, 5], [16521, 20775, 6], [20775, 23961, 7], [23961, 27843, 8], [27843, 31778, 9], [31778, 35936, 10], [35936, 37064, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37064, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
dc9b5343e7a61c658591abe36a86481c7b93ae3d
[REMOVED]
{"Source-Url": "http://repositorio.uportu.pt:8080/bitstream/11328/2701/1/C41.pdf", "len_cl100k_base": 7135, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 39763, "total-output-tokens": 9886, "length": "2e12", "weborganizer": {"__label__adult": 0.0010728836059570312, "__label__art_design": 0.002521514892578125, "__label__crime_law": 0.0009908676147460938, "__label__education_jobs": 0.46044921875, "__label__entertainment": 0.0004701614379882813, "__label__fashion_beauty": 0.0006890296936035156, "__label__finance_business": 0.0011796951293945312, "__label__food_dining": 0.0013866424560546875, "__label__games": 0.00852203369140625, "__label__hardware": 0.00173187255859375, "__label__health": 0.0016031265258789062, "__label__history": 0.0013456344604492188, "__label__home_hobbies": 0.00046706199645996094, "__label__industrial": 0.0009975433349609375, "__label__literature": 0.0017137527465820312, "__label__politics": 0.0009002685546875, "__label__religion": 0.001399993896484375, "__label__science_tech": 0.0309600830078125, "__label__social_life": 0.0006051063537597656, "__label__software": 0.015106201171875, "__label__software_dev": 0.462158203125, "__label__sports_fitness": 0.0015897750854492188, "__label__transportation": 0.001445770263671875, "__label__travel": 0.0006895065307617188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42557, 0.02708]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42557, 0.53472]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42557, 0.89634]], "google_gemma-3-12b-it_contains_pii": [[0, 2194, false], [2194, 3623, null], [3623, 6882, null], [6882, 10677, null], [10677, 12710, null], [12710, 15909, null], [15909, 19272, null], [19272, 22067, null], [22067, 24869, null], [24869, 26349, null], [26349, 29734, null], [29734, 32973, null], [32973, 35592, null], [35592, 39166, null], [39166, 42557, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2194, true], [2194, 3623, null], [3623, 6882, null], [6882, 10677, null], [10677, 12710, null], [12710, 15909, null], [15909, 19272, null], [19272, 22067, null], [22067, 24869, null], [24869, 26349, null], [26349, 29734, null], [29734, 32973, null], [32973, 35592, null], [35592, 39166, null], [39166, 42557, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42557, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42557, null]], "pdf_page_numbers": [[0, 2194, 1], [2194, 3623, 2], [3623, 6882, 3], [6882, 10677, 4], [10677, 12710, 5], [12710, 15909, 6], [15909, 19272, 7], [19272, 22067, 8], [22067, 24869, 9], [24869, 26349, 10], [26349, 29734, 11], [29734, 32973, 12], [32973, 
35592, 13], [35592, 39166, 14], [39166, 42557, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42557, 0.28804]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
3688361076ca2ce5cdf202bfb237843b9cf9f829
Integrating Java and Matlab components into the same parallel and distributed application using JavaPorts

Demetris G. Galatopoullos, Andrew P. Funk, Elias S. Manolakos*
Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115
{demetris, afunk, elias}@ece.neu.edu

Abstract

As clusters of workstations become increasingly popular, the need for component frameworks that facilitate the high level modeling and rapid prototyping of parallel and distributed applications on such systems is becoming pressing. Many scientists and engineers have image and signal processing applications that could benefit from cluster computing. However, these applications often exist as legacy code, such as serial Matlab functions, which are not easily parallelizable. The goal of the JavaPorts project is to provide a framework and a set of tools that make it easy to develop component-based parallel and distributed applications for networks of heterogeneous computing nodes. The latest version of the package supports the integration of Java and Matlab components into the same application and provides a mechanism for incorporating legacy Matlab functions into parallel processing applications. The design and salient features of the framework and associated tools are discussed here, and application examples are presented which highlight how JavaPorts can be used to model, develop, launch and restructure applications with any number of interacting Java and Matlab components.

1 Introduction

Motivation

Clusters of Workstations (COWs) have become increasingly available and provide a cost-effective alternative to traditional supercomputers for coarse grain parallel processing. Scientists and engineers routinely collect large amounts of sensor and image data that could be processed more efficiently in parallel. Unfortunately, they often lack the expertise or time required to parallelize their algorithms. Furthermore, many scientific algorithms currently exist as modular Matlab functions (or other legacy code) that are not readily parallelized. Rather than taking the time to become expert parallel programmers themselves, most researchers would prefer to have at their disposal a development environment that would allow them to easily translate legacy code blocks to interacting modules and then stitch them together to create distributed applications for COWs.

Existing Tools

Although tools for cluster computing have matured considerably over the last decade, they still do not adequately support distributed application modeling at a high level of abstraction. There is also a need for optimized runtime systems that can manage tasks on distinct address spaces and APIs that free the application developer from the intricacies of low-level message passing. Message passing libraries, such as MPI and PVM, may require that all nodes run the same operating system or that all tasks are implemented using the same programming language. Traditionally, MPI implementations were provided only for well established languages, such as C and Fortran, and only recently have there been efforts to extend support to object-oriented languages such as C++ and Java. Furthermore, message passing programs are typically written in the Single Program Multiple Data (SPMD) style, where the behavior of each process and the pattern of interprocess communication are tied to its "rank" (index variable).

What is the JavaPorts framework?
The JavaPorts components framework ([1, 2, 3]), along with its runtime system and development tools, supports the high level visual modeling of multi-component, parallel and distributed applications, as well as their implementation and deployment on a heterogeneous cluster of workstations. JavaPorts is written in Java and it can be utilized under any operating system that can execute a standard edition JVM. The JavaPorts project tries to extend the "write once run everywhere" promise of Java to distributed and multithreaded applications.* Although it is currently used for cluster computing, there are ongoing efforts in our group towards creating a version of JavaPorts that will support grid-scale distributed application development.

JavaPorts supports the integration of Java and Matlab components into the same application, a feature that, to the best of our knowledge, is not currently provided by any other cluster computing framework. The programmer does not need to include named sockets, environment variables, or any application naming inside the application code. Therefore the resulting components become truly location-transparent and reusable across applications. JavaPorts supports all four phases of component development with well designed tools. Furthermore, it provides a "light" task-level distributed runtime that associates a PortManager object with each application task. Therefore each JavaPorts component interacts only with its PortManager and does not rely for run-time support on machine-level daemon processes that need to be started on remote nodes before a distributed application can be launched.

*This work has been supported in part by CenSSIS, the Center for Subsurface Sensing and Imaging Systems, under the ERC Program of the National Science Foundation (Award Number EEC-9986821).

Related Work summary

There are several ongoing efforts aiming at exploiting or extending the Java programming language to facilitate coarse grain distributed application development. We mention here only a representative subset of those that primarily target cluster computing. For example, the Visper framework [4] is an object-oriented environment for building distributed Java applications. It provides a visual parallel programming model based on the Process Communication Graph (PCG) formalism and a comprehensive API for message exchanges between objects. Java Parallel (Javap) [5] is a Java library that uses reification and reflection to support transparent remote objects and seamless distributed computing in both shared and distributed memory multiprocessor systems. The JMPF project [6], a Message Passing Framework for cluster computing, provides portability of user applications over heterogeneous platforms while eliminating the need for complicated socket programming in the application code.

Efforts for parallelizing Matlab

Matlab [7] is a popular environment that integrates the rapid prototyping of numerical computations with high-level programming. However, because Matlab is an interpreted language, its performance does not scale well with the problem size. In order to achieve performance gains, programmers often opt to parallelize their Matlab programs. There has recently been a considerable amount of work on providing parallel processing support to Matlab applications [8]. Efforts in this area attempt to either provide MPI-like message passing to Matlab, build methods that will partition work among multiple sessions of Matlab, or convert Matlab scripts into parallel programs.
For example, MultiMatlab [9] provides Matlab-style commands so that a programmer can launch MATLAB processes on remote nodes through an interactive session. Another package, MATmarks [10], enables programmers to execute MATLAB functions in parallel. It extends the MATLAB instruction set in order to add synchronization mechanisms to the language, and it supports a shared memory programming style. To the best of our knowledge, there is currently no framework other than JavaPorts [11] that can be used to build Matlab-based parallel and distributed applications by exploiting the capability of the latest Matlab versions to incorporate Java code in Matlab functions.

Due to their complementary nature, combining both languages offers the potential for building interesting collaborative environments. Matlab tasks can utilize Java in order to interact in any desired way in a network computing environment or over the Internet. They can also take advantage of the many good features of Java, including the rich set of exceptions, web-based programming, IDE and GUI tools, and threading. On the other hand, by interfacing with Matlab, Java tasks can access a large set of optimized and versatile toolboxes covering a variety of fields, from image processing to statistical analysis and bioinformatics. There exist, therefore, exciting opportunities for developing tools that exploit the rapid prototyping capabilities of the one and the network computing capabilities of the other, in order to achieve a painless transition of available serial legacy code to parallel implementations with acceptable performance.

Paper Organization

The rest of the paper is organized as follows: We provide a description of the basic elements of the JavaPorts framework in Section 2. We first explain how a Task Graph is used to model the multi-component distributed application. Then we describe the AMTP tree data structure used to store the task graph information. Subsequently, we demonstrate how the JavaPorts Application Configuration Language (JPACL) can be utilized to capture the task graph textually in a configuration file. In Section 3 we discuss the four phases of the JavaPorts application development cycle. We first introduce the JavaPorts Visual Application Composer (JPVAC) for graphically creating and editing task graphs, then the JavaPorts Application Compilation Tool (JPACT) for compiling the graphs to create code "templates" and scripts, and end this section with an outline of the runtime system and a unique distributed task termination mechanism it implements. In Section 4, we present in some detail the Port abstraction and the associated Application Programming Interface (API). We describe the analysis and design of the classes that implement port services. Next, in Section 5, we discuss how JavaPorts was used in a rapid prototyping image processing case study. We provide a summary and point to future directions in Section 6.

2 The JavaPorts Framework

The Application Task Graph

The task graph models the application as a collection of interacting components and allows the programmer to establish a high-level view of the overall computational strategy. Once the application is decomposed into interacting tasks, the user may start building the graph using the JavaPorts Visual Application Composer (JPVAC) tool [2, 12]. The JPVAC serves as the graphical front-end of the JavaPorts system and allows incremental task graph construction using standard and advanced graph editing features.
The JPVAC can save the task graph in persistent objects and in a configuration file. The programmer can then re-visit the application and complete its task graph at a later time. A JavaPorts task represents a user process that will be used to execute a module (component) of the application. A newly defined task gets automatically translated by JavaPorts into a "template", i.e. a program skeleton, written in either Java or Matlab. Each task is allocated to a compute node (machine), which the user has chosen for its execution. As can be seen in Figure 1, several tasks (large solid line boxes) may be allocated to the same machine (dashed-line box). Each machine may be identified by its domain name or its Internet Protocol (IP) address.

In the task graph, tasks are connected using edges. The endpoints of these edges, shown as small solid-line squares in Figure 1, are the Ports. Each task is connected to a peer task through a Port, an abstraction responsible for hiding the location (machine) of the destination task (and port) from the source task and for keeping the inter-task communication and coordination details transparent to the application layer. A pair of ports connected via an edge corresponds to a bi-directional logical link (channel) among directly interacting tasks. Logical links may connect tasks residing on the same machine or they may cross machine boundaries.

The AMTP Tree Data Structure

The Application-Machine-Task-Port (AMTP) tree is a data structure used to store the task graph information. The AMTP tree is saved as a binary file in the user's JavaPorts directory space through the standard Java language ObjectOutputStream class. By doing so it becomes possible for JavaPorts to load the object in memory at runtime in order to acquire information about the executing application. As shown in Figure 2, there are four levels in an AMTP tree. The root (Application level node) stores the name of the application. The second level nodes carry information about the Machines on which the application will be deployed. The third level nodes store information about the Tasks allocated to each machine. The fourth level nodes store information about ports and their connections to peer ports.
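For illustration, the four-level structure just described could be modeled with plain serializable Java classes along the following lines; all class and field names here are assumptions, not the actual JavaPorts types.

```java
// Hypothetical sketch of the four-level AMTP tree; names are illustrative.
import java.io.Serializable;
import java.util.List;

class PortNode implements Serializable {        // level 4: a port and its peer
    String portName;
    String peerTaskName;
    String peerPortName;
}

class TaskNode implements Serializable {        // level 3: a task and its ports
    String taskName;
    List<PortNode> ports;
}

class MachineNode implements Serializable {     // level 2: a machine and its tasks
    String hostNameOrIp;
    List<TaskNode> tasks;
}

class AmtpTree implements Serializable {        // level 1: the application root
    String applicationName;
    List<MachineNode> machines;
}
// Serializable mirrors the paper's use of ObjectOutputStream to persist the tree.
```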
The JavaPorts Application Configuration Language

A task graph can be described textually using the JavaPorts Application Configuration Language (JPACL). The JPACL has a simple, self-explanatory syntax and can be used to create an application configuration file using any available text editor. If the JavaPorts Visual Application Composer (JPVAC) tool is used instead, the configuration file is created automatically every time a task graph is saved. The configuration file is the input that the JavaPorts Application Compilation Tool (JPACT) processes to create code templates for the components of a JavaPorts application and a script for launching it in the network. The configuration file corresponding to the Mandelbrot set application of Figure 1 is provided in Figure 3.

3 JavaPorts Application Development Tools

The Application Development Phases

The JavaPorts application development cycle follows a four-phase sequence inspired by sound software engineering principles. As can be seen in Figure 4, the phases are event-ordered in a natural sequence; however, the user may backtrack to a previous phase if it is needed to re-engineer, re-design or correct certain problems. The four phases are described in more detail below:

- Phase 1: Application Modeling and Configuration. During the first phase the programmer examines the application's requirements and creates a task graph model for the application. If the JPVAC tool is used, the application configuration file is updated every time the model is saved.

- Phase 2: Generation of Templates and Scripts. The configuration file is compiled using the JPACT tool. If there are syntax or logical errors, they should be fixed by the user. Code templates and scripts are generated upon successful compilation.

- Phase 3: Application Implementation. The programmer adds code in the modifiable part of each newly created task template as needed to implement its application-specific functionality. Once each component is developed, the programmer can compile all Java code templates using the automatically generated cjtempl.sh script. Any syntax errors that surface should be corrected during this phase.

- Phase 4: Application Launching. The network application may now be launched from a user-designated (master) node using an automatically generated launching script. In order to address any runtime errors, the programmer should return to Phase 3, modify code templates as needed and re-compile the application before relaunching.

The JavaPorts Visual Application Composer (JPVAC)

The JPVAC [12] is a graphical user interface for visual configuration of distributed component-based applications. Applications may be constructed by combining new components with existing components that are imported from other applications. For each application, the JPVAC generates a configuration file that is processed by the JavaPorts Application Compilation Tool (JPACT) to create task templates and scripts.

The JPVAC includes many advanced features that enhance productivity. An Undo option allows the user to retrace a sequence of steps in case an error is made during task graph development. Another powerful feature is the ability to concurrently display and edit multiple applications in a single session. This feature, combined with the ability to hierarchically group tasks, makes cross-application development and evaluation straightforward. Certain patterns of tasks used in one application can be viewed and reused in another, thus enhancing application development productivity. The JPVAC also provides "off-the-shelf" commonly used task topologies that the user can simply import into an application; e.g. the user can specify the number of tasks in a star topology and the JPVAC will automatically create it.

The JavaPorts Application Compilation Tool (JPACT)

The JPACT tool parses and compiles the application configuration file and, if that step is successful, it creates a "blank" template file (code skeleton of a JavaPorts component with no user code) for each newly defined task and a set of scripts that make it easy to compile and launch the application, or to clean up in case of abnormal termination. In addition, the JPACT creates or updates the AMTP tree.
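As a rough illustration of Phase 2's output, a generated Java task template might take a shape like the following; this is a hypothetical sketch based on the descriptions in this paper (PortManager, port handles, release()), and the actual generated code and API signatures may differ.

```java
// Hypothetical shape of a generated JavaPorts task template; all method
// names other than release() are assumptions made for illustration.
public class Worker1 {
    public static void main(String[] args) throws Exception {
        PortManager pm = new PortManager("Worker1"); // assumed constructor
        Port[] port = pm.getPorts();                 // assumed: returns handles to local ports

        /* ---- user-modifiable section: application-specific code ---- */
        Object data = port[0].syncRead(0);           // assumed blocking read, message Key 0
        Object result = process(data);
        port[0].asyncWrite(result, 0);               // assumed asynchronous write, Key 0
        /* ---- end user-modifiable section ---- */

        pm.release(); // blocks until distributed termination is safe (see Figure 6)
    }

    private static Object process(Object data) {
        return data; // placeholder for the component's real work
    }
}
```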
The JavaPorts Distributed Runtime System

The JavaPorts runtime system was designed with the requirement to make the remapping of tasks to the nodes of a potentially heterogeneous cluster easy. Therefore JavaPorts provides the user application with light-weight distributed runtime support that decentralizes the management of application components. Every application task possesses a PortManager object, shown in Figure 5, which is responsible for creating, configuring and serving its ports. The PortManager performs remote lookups and, once it discovers the task's peer ports, it connects them to the task's ports. It returns an array of handles to the local ports that the task's implementation can utilize to exchange messages asynchronously with its peer tasks (local or remote). The PortManager is also responsible for disconnecting and disposing of the task's ports and for performing distributed termination transparently to the application layer.

The distributed task termination mechanism

The problem of distributed termination is to determine whether a computation has reached a state at which it is safe for its distributed components to terminate themselves. Although trivial in sequential computing, termination detection becomes difficult in distributed processing because a computation may not have come to an end when a participating component is locally completed. JavaPorts employs a simple and sufficiently general distributed termination scheme, summarized pictorially in Figure 6, which shows how the release protocol is carried out between two peer tasks.

The termination mechanism is handled by each task's PortManager object. When a task is locally ready to terminate, a call to the release() method of its PortManager blocks the execution of the task's run() method until the PortManager determines that the task can safely terminate. First the PortManager ensures that all asynchronous write-request threads that have been issued on any of the task's ports are successfully completed. Once all such threads have been joined, the PortManager contacts each one of the task's peer ports and sets its Shutdown flag to true. After this is accomplished, the PortManager polls through its owner task's ports and checks if their Shutdown flag has been set to true by their peers. The task is allowed to exit and terminate only after its ports have their Shutdown flag set to true by the PortManagers of their peers. To the best of our knowledge, no other component-based distributed computing framework provides a termination detection mechanism that is transparent to the application developer.
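In code, the release protocol might be organized as in the sketch below; the loop structure follows the three steps described above, while every identifier is an assumption rather than the actual JavaPorts internals.

```java
// Sketch of PortManager.release(); identifiers are illustrative assumptions.
public void release() throws InterruptedException {
    // 1. Wait for all outstanding asynchronous write-request threads.
    for (Thread writer : pendingWriteThreads) {
        writer.join();
    }
    // 2. Tell every peer port that this task is done.
    for (Port p : localPorts) {
        p.getPeerPort().setShutdown(true);
    }
    // 3. Wait until every local port has been marked by its peer in turn.
    for (Port p : localPorts) {
        while (!p.isShutdown()) {
            Thread.sleep(50); // poll; only then may the task's run() return
        }
    }
}
```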
4 The Port Design

The Application Programming Interface (API)

The JavaPorts framework provides a simple Application Programming Interface (API) that allows both synchronous and asynchronous anonymous message-passing operations on port objects. A message may contain primitive data types, complex data structures, or objects. The sending task does not require knowledge of the name or rank of the receiving task or its ports. The JavaPorts design buffers incoming messages in a linked-list data structure (called the Port List) on the receiver's side. If a Port List becomes full, a message FIFO queue is activated to buffer messages temporarily on the sender side until some space is freed in the receiver's Port List. A write operation uses a message Key (integer) value that identifies the element in the Port List where the sent message will be buffered. A read operation issued at a receiving port scans the Port List using the Key value and returns a handle to the message if it has already arrived. A detailed description of the semantics of the supported API methods is provided in [11].

The Port Model

The UML diagrams of the classes involved in the Port design are shown in Figure 8. A port object may contain multiple instances of the PortThread and PortException classes. The PortThread class extends the Thread class, and its instances are used to perform asynchronous writes on local ports. The PortException class extends the Exception class in order to allow the implementation to throw exceptions specific to possible malfunctions. A Port instance contains one PortDataList object for buffering incoming messages. It also contains one FIFOQueue object for temporarily buffering write requests on the sender side when the receiver's Port List is full. The Port List and the FIFO Queue are managed by associated threads, represented by one instance of the PortListManager and one instance of the FIFOQueueManager, respectively. Both inherit from the Thread class and offer services such as expanding lists, dispatching pending elements, and notifying the application when elements become available. The UML class diagram of the PortManager class is shown in Figure 9. The PortInterface is necessary for the Port class to be remote, since it advertises the methods that constitute the port API. Its parent, PortInterfaceSys, includes the methods of the port that should not be visible to the application. The PortManager class uses the InputObjectStream and OutputObjectStream classes to read the AMTP tree data structure from a file and extract information about the owner task and its ports. Every application task instantiates the PortManager class at runtime. The PortManager instance creates and registers the ports of its owner task. It uses the RegistrySetup class to manage the RMI registry on the host machine. If the RMI registry is not running, the RegistrySetup object launches it and returns a handle to it back to the PortManager object. The PortManager object also installs an RMISecurityManager instance in order to allow trusted objects to be transferred across machine boundaries.

5 JavaPorts Application Examples

This section describes the development of a parallel application, using the JPVAC tool, for computing the Mandelbrot set [13] (a fractal image). This application is often used as a benchmark for master-slave computing and load balancing because it is easy to partition the problem into subproblems of varying complexities that are not known at compile time. We first describe the initial composition of the Mandelbrot distributed application. We demonstrate how the JPVAC can be used to build different component configurations effortlessly in order to achieve optimal performance. Then we demonstrate the power of JavaPorts in taking an existing serial Matlab function and turning it into a component of a parallel and distributed implementation. Finally, we demonstrate how a well-engineered JavaPorts application can take full advantage of added network resources without requiring any code changes.

Component-based Application Development

The Mandelbrot application consists of three basic components (see Figure 10). The Display component is responsible for accepting user input and displaying the computed image. The Display component is connected to the Manager component, which may be either of type Static or Dynamic. The StaticManager component receives the image size parameters from the Display, statically partitions the image into as many row stripes as there are available Matlab worker components (MMWorkers), and distributes a row stripe to each worker. The number of repetitions necessary to produce a pixel is indicated by color.
A red pixel indicates that a single repetition caused the algorithm to jump outside the specified range, while at the other extreme a black pixel indicates that the algorithm terminated only after the limit of repetitions was reached. Therefore, due to the nature of the Mandelbrot set calculation, some regions of the image are more compute-intensive than others. Figure 12 shows the result of running the Mandelbrot application with a StaticManager component and four Matlab MMWorker components. Even though the StaticManager assigns the same number of rows to each worker, some MMWorkers may take longer than others. The DynamicManager alleviates this problem by using an adaptive work-allocation strategy (sketched below). It initially sends a small fraction of the total image to each available MMWorker to get it started. Subsequently, the number of rows sent to each requesting MMWorker depends on how fast that component returned its previous results. In this way a form of dynamic load balancing is achieved, where the faster workers are asked to compute more pixels. Figure 10 shows how an existing DynamicManager component is easily imported into the application. This type of high-level application reconfiguration can be accomplished entirely within the JPVAC and requires no code changes. Figure 13 shows the result of running the application again in the new configuration. In this case the DynamicManager makes more efficient use of the available resources and reduces the overall parallel runtime from 17 seconds to 12 seconds on the same cluster. For a real-world application the amount of time that can be saved with load balancing can be significant, and the JPVAC allows the developer to test various component configurations rapidly to discover the best strategy with the least expenditure of time and effort.

Integrating Java and Matlab Components

The Display and Manager components have been implemented in Java, and the MMWorker components have been implemented in Matlab. A Manager component receives the total image dimensions from the Display component and distributes a portion of the problem to each MMWorker component. As each MMWorker returns the results of its computation, the Manager relays this information back to the Display component. Figure 7 shows the implementation of the MMWorker, a Matlab-based component which receives information about the portion of the image to be computed. The MMWorker component passes this information to the mSets function, a legacy serial Matlab code designed to compute the Mandelbrot set on a specified region of the complex plane. By partitioning the image among the available MMWorkers, and calling a separate instance of the mSets function to perform the Mandelbrot computation on each subregion of the image, the original serial Matlab code is transformed into a module of the parallel implementation. Note that no changes are required to the original mSets function; it is encapsulated by the MMWorker component, which handles all communication with other components. The code template for the MMWorker component is generated automatically by JavaPorts. The user only has to add the call to the mSets function and the read-data and write-results operations highlighted in Figure 7. Furthermore, many different configurations of the application can be built using the JPVAC and JPACT tools without requiring any changes to the code.
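The snippet below sketches the adaptive row-allocation strategy described earlier in this section. It is an illustration of the idea only: the class and method names are hypothetical, and the real DynamicManager also handles messaging through its ports, which is omitted here.

```java
// Hypothetical sketch of the DynamicManager's adaptive row allocation.
// Chunks grow with a worker's measured rate (pixels/sec), so faster
// workers compute more pixels, as described in the text.
public class AdaptiveAllocator {
    private final int totalRows;
    private int nextRow = 0;
    private static final double START_FRACTION = 0.05; // small start-up chunk

    public AdaptiveAllocator(int totalRows) {
        this.totalRows = totalRows;
    }

    /** Number of rows to send to a worker that just returned results. */
    public synchronized int nextChunk(double workerRate, double meanRate) {
        if (nextRow >= totalRows) {
            return 0; // all rows have been handed out
        }
        int base = Math.max(1, (int) (totalRows * START_FRACTION));
        // Scale the chunk by the worker's rate relative to the mean rate;
        // assumes meanRate > 0 once initial results have come back.
        int chunk = Math.max(1, (int) (base * (workerRate / meanRate)));
        chunk = Math.min(chunk, totalRows - nextRow);
        nextRow += chunk;
        return chunk;
    }
}
```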
Rapid Prototyping

The JPVAC also makes it easy to port a parallel application to a new computing platform when new hardware becomes available. In this example, the Mandelbrot application is originally developed on a cluster of single-processor workstations, with a single MMWorker component allocated to each computing node. The application is then reconfigured to take advantage of a newer cluster of dual-processor workstations: two MMWorker components are allocated to each computing node to exploit the dual processors (see Figure 11). As with the previous example, this reconfiguration can be performed within the JPVAC and requires no code changes. The performance of the dual-processor configuration is shown in Figure 14. The Mandelbrot application was first run with a fixed problem size of one million pixels (a 1,000 x 1,000 grid). The application achieves some measure of speedup as the number of processors increases from one to four. As the number of processors increases beyond four, however, the speedup decreases. This speedup saturation is expected as processors are added while the problem size remains fixed. In contrast, when the problem size is increased linearly with the number of processors, the speedup continues to increase. This measure of linear "scaled speedup" is often a better indicator of the potential of a parallel algorithm-infrastructure combination to deliver useful computational throughput. Particularly in the signal and image processing field there are many applications that require processing large amounts of data in parallel. These results demonstrate that a combination of Matlab code and JavaPorts infrastructure can provide a flexible environment for rapid prototyping, with the ability to process large data sets efficiently [14].

6 Conclusions and future directions

The recent trend toward computing on networks of heterogeneous nodes has created a need for new development tools that meet the unique requirements of these platforms. Such tools should allow the application to be configured independently of its implementation, so that when some aspect of the platform changes, the application may be reconfigured at a high level, without requiring any implementation changes. Ideally the tool should also provide the productivity features that are commonplace among non-parallel application development environments. JavaPorts provides a high-level task graph abstraction for modeling a distributed application, along with a Ports API that supports anonymous message passing and a runtime system that supports task location transparency and distributed task termination. Built on top of Java's Remote Method Invocation (RMI) technology, it is inherently platform independent. These features allow the developer to test different software and hardware configurations quickly, without having to modify the source code. The JPVAC facilitates the development process by providing an intuitive graphical interface for visually building component-based parallel and distributed applications. This tool provides several productivity features that can help reduce initial development time and enable rapid prototyping via the ability to reconfigure an application at the task graph level. The latest version of JavaPorts provides for the seamless integration of Java and Matlab components in the same distributed application.
To the best of our knowledge, JavaPorts is the only framework that allows the realization of any desired task graph of Matlab and Java components. This enables existing serial Matlab code to be converted easily into components that may interact in any desired way to solve large problems in parallel. JavaPorts also provides transparent support for distributed termination, setting it apart from other tools for distributed component-based applications, including efforts to parallelize Matlab. All of these features combine to make JavaPorts an efficient and valuable tool set for parallel and distributed application development. We are currently developing a Visual Task Modeler (JPVTM) tool that can be used in conjunction with the JPVAC to construct visually a refined two-level structural/behavioral application model. Such models can be analyzed by the JavaPorts QoS system (under construction) to predict whether a desired QoS level can be delivered to the application before any code is even developed. The latest trends in heterogeneous computing involve employing larger numbers of processors and a wider variety of platforms. The goal of Grid computing is to make accessing computational power as easy as accessing electrical power is today. Many mobile handheld devices now support Java and could also become part of this global computational resource. Future research may involve grid-enabling JavaPorts, as well as producing versions that can run on power-constrained handheld devices and embedded systems.

References

Figure 3. The JavaPorts application configuration file corresponding to the task graph of Figure 1. The JPACL reserved keywords are shown in uppercase.

Figure 5. The JavaPorts distributed runtime system. There is one PortManager object for every task, which creates and maintains the Port objects.

Figure 4. The JavaPorts component-based application development phases. The arrows between phases denote allowable transitions.

Figure 6. The JavaPorts distributed termination protocol. PortManager objects cooperate to ensure proper termination transparently to the application layer.

```matlab
function MMWorker(AppName, TaskVarName)
  % register ports
  portmanager = PortManager;
  port = portmanager.configure(AppName, TaskVarName);
  % read first message
  data = port(1).SyncRead(readkey);
  % loop until EXIT message received
  while (data.status ~= EXIT)
    % call legacy Matlab function mSets to compute Mandelbrot set
    iterations = mSets(real_min, real_max, imag_min, imag_max);
    result = Message(data.x1, data.x2, data.y1, data.y2, iterations, ...
                     data.maxiter, time, [TaskVarName ' (Matlab)']);
    % send result
    port(1).SyncWrite(result, writekey);
    % read next message
    data = port(1).SyncRead(readkey);
  end
  % distributed termination
  portmanager.release;
  quit;
end
```

Figure 7. The MMWorker component receives a portion of the overall image and calls the serial Matlab function mSets to perform the Mandelbrot computation. Highlighted are the statements the Matlab programmer has to add.

Figure 8. UML class diagram showing the organization of the classes used to implement Port functionality. The classes arranged in the top row of the figure are standard Java distribution classes.

Figure 9. UML class diagram depicting the static organization of the PortManager class. The classes arranged in the top row of the figure are standard Java distribution classes.

Figure 10. JavaPorts allows the user to build a distributed application graphically by importing reusable software components from other applications or from a components library.
Figure 11. The JPVAC allows the user to allocate a subset of components to any available machine without requiring any code changes.

Figure 12. The StaticManager sends the same number of rows to each Worker regardless of its measured computational performance (pixels/sec). This results in suboptimal processor utilization.

Figure 13. The DynamicManager sends more rows to the faster Workers, and consequently the overall parallel runtime is reduced. Note: in a black-and-white image, red pixels (low iterations) appear grey.

Figure 14. Due to message-passing overhead, the Mandelbrot algorithm does not scale well for a fixed problem size. However, when the problem size increases linearly with the number of processors, the algorithm exhibits close to linear speedup.
{"Source-Url": "https://pdfs.semanticscholar.org/2892/ee02841767c69e0d0d5356c09b61383363c2.pdf", "len_cl100k_base": 6608, "olmocr-version": "0.1.49", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 31180, "total-output-tokens": 7821, "length": "2e12", "weborganizer": {"__label__adult": 0.0002944469451904297, "__label__art_design": 0.0002942085266113281, "__label__crime_law": 0.0002567768096923828, "__label__education_jobs": 0.0006170272827148438, "__label__entertainment": 6.99758529663086e-05, "__label__fashion_beauty": 0.00014340877532958984, "__label__finance_business": 0.0002522468566894531, "__label__food_dining": 0.00029850006103515625, "__label__games": 0.0005559921264648438, "__label__hardware": 0.001720428466796875, "__label__health": 0.0004153251647949219, "__label__history": 0.0002498626708984375, "__label__home_hobbies": 8.988380432128906e-05, "__label__industrial": 0.0005946159362792969, "__label__literature": 0.00015938282012939453, "__label__politics": 0.0002351999282836914, "__label__religion": 0.0004513263702392578, "__label__science_tech": 0.055389404296875, "__label__social_life": 7.963180541992188e-05, "__label__software": 0.0106964111328125, "__label__software_dev": 0.92578125, "__label__sports_fitness": 0.0003142356872558594, "__label__transportation": 0.0006842613220214844, "__label__travel": 0.0002129077911376953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38049, 0.01311]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38049, 0.51752]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38049, 0.89996]], "google_gemma-3-12b-it_contains_pii": [[0, 4123, false], [4123, 9955, null], [9955, 15446, null], [15446, 20978, null], [20978, 26570, null], [26570, 32267, null], [32267, 35112, null], [35112, 35697, null], [35697, 37280, null], [37280, 38049, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4123, true], [4123, 9955, null], [9955, 15446, null], [15446, 20978, null], [20978, 26570, null], [26570, 32267, null], [32267, 35112, null], [35112, 35697, null], [35697, 37280, null], [37280, 38049, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38049, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38049, null]], "pdf_page_numbers": [[0, 4123, 1], [4123, 9955, 2], [9955, 15446, 3], [15446, 20978, 4], [20978, 26570, 5], [26570, 32267, 6], [32267, 35112, 7], [35112, 35697, 8], [35697, 37280, 9], [37280, 38049, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38049, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
fcea417ee204830359fe2e7a50c5695c0dae28a9
Weakest Preconditions Part 2: Calculating wp, wlp; Domain Predicates
CS 536: Science of Programming, Spring 2021

A. Why
- Weakest liberal preconditions (wlp) and weakest preconditions (wp) are the most general requirements that a program must meet to be correct.

B. Objectives
At the end of today you should understand
- How to add error domain predicates to the wlp of a loop-free program to obtain its wp.

C. Calculating wlp for Loop-Free Programs
- It's easy to calculate the wp and wlp of a loop-free, error-free program S, especially since for such programs the wp and wlp are identical.
- The following algorithm takes S and q and syntactically calculates a particular predicate for wlp(S, q), which is why it's described using wlp(S, q) ≡ ... instead of wlp(S, q) ⇔ ....
- wlp(skip, q) = q
- wlp(v := e, Q(v)) = Q(e), where Q is a predicate function over one variable.
  - The operation that takes us from Q(v) to Q(e) is called syntactic substitution; we'll look at it in more detail soon, but in the simple case we simply inspect the definition of Q, search its text for occurrences of the variable v, and replace them with copies of e.
- wlp(S₁; S₂, q) = wlp(S₁, wlp(S₂, q))
- wlp(if B then S₁ else S₂ fi, q) = (B → w₁) ∧ (∼B → w₂), where w₁ = wlp(S₁, q) and w₂ = wlp(S₂, q).
  - Since it's equivalent, you can also use (B ∧ w₁) ∨ (∼B ∧ w₂).
- wlp(if B₁ → S₁ ⋄ B₂ → S₂ fi, q) = (B₁ → w₁) ∧ (B₂ → w₂), where w₁ = wlp(S₁, q) and w₂ = wlp(S₂, q).
  - For the nondeterministic if, you must use (B₁ → w₁) ∧ (B₂ → w₂), not (B₁ ∧ w₁) ∨ (B₂ ∧ w₂), because they're not equivalent (unlike for the deterministic if statement).
  - When B₁ and B₂ are both true, either S₁ or S₂ can run, so we need B₁ ∧ B₂ → w₁ ∧ w₂, and this is implied by (B₁ → w₁) ∧ (B₂ → w₂).
  - Using (B₁ ∧ w₁) ∨ (B₂ ∧ w₂) fails because it allows for the possibility that B₁ and B₂ are both true but only one of w₁ and w₂ is true. This isn't a problem when B₂ ⇔ ∼B₁, which is why we can use (B ∧ w₁) ∨ (∼B ∧ w₂) with deterministic if statements.

D. Some Examples of Calculating wp/wlp
- The programs in these examples never end in "state" \( \bot \), so the \( wp \) and \( wlp \) are equivalent.
- **Example 2**: \( wlp(x := x+1,\ x \geq 0) = x+1 \geq 0 \)
- **Example 3**: \( wlp(y := y+x;\ x := x+1,\ x \geq 0) = wlp(y := y+x,\ wlp(x := x+1,\ x \geq 0)) = wlp(y := y+x,\ x+1 \geq 0) = x+1 \geq 0 \)
- **Example 4**: \( wlp(y := y+x;\ x := x+1,\ x \geq y) = wlp(y := y+x,\ wlp(x := x+1,\ x \geq y)) = wlp(y := y+x,\ x+1 \geq y) = x+1 \geq y+x \)
  - If we were asked only to calculate the \( wlp \), we'd stop here. If we also wanted to logically simplify it, then \( x+1 \geq y+x \Leftrightarrow y \leq 1 \).
- **Example 5**: (Swap the two assignments in Example 4.) \( wlp(x := x+1;\ y := y+x,\ x \geq y) = wlp(x := x+1,\ wlp(y := y+x,\ x \geq y)) = wlp(x := x+1,\ x \geq y+x) = x+1 \geq y+x+1 \) [\( \Leftrightarrow y \leq 0 \) if you want to logically simplify]
- **Example 6**: \( wlp(\text{if } y \geq 0 \text{ then } x := y \text{ fi},\ x \geq 0) = wlp(\text{if } y \geq 0 \text{ then } x := y \text{ else skip fi},\ x \geq 0) = (y \geq 0 \rightarrow wlp(x := y,\ x \geq 0)) \land (y < 0 \rightarrow wlp(\text{skip},\ x \geq 0)) = (y \geq 0 \rightarrow y \geq 0) \land (y < 0 \rightarrow x \geq 0) \)
  - It's also okay to use \( (y \geq 0 \land y \geq 0) \lor (y < 0 \land x \geq 0) \).
- If we want to simplify logically, we can continue with
  \[ \Leftrightarrow y \geq 0 \lor (y < 0 \land x \geq 0) \]
  \[ \Leftrightarrow (y \geq 0 \lor y < 0) \land (y \geq 0 \lor x \geq 0) \]
  \[ \Leftrightarrow y \geq 0 \lor x \geq 0 \text{ (which is also } \Leftrightarrow y < 0 \rightarrow x \geq 0 \text{, if you prefer)} \]

E. Domain Predicates for Avoiding Runtime Errors in Expressions
- To avoid runtime failure of \( \sigma(e) \), we'll take the context in which we're evaluating \( e \) and augment it with a predicate that guarantees non-failure of \( \sigma(e) \). For example, for \( \{P(e)\}\ v := e\ \{P(v)\} \), we'll augment the precondition to guarantee that evaluation of \( e \) won't fail.
- For each expression \( e \), we will define a domain predicate \( D(e) \) such that \( \sigma \models D(e) \) implies \( \sigma(e) \neq \bot \).
- This predicate has to be defined recursively, since we need to handle complex expressions like \( D(b[b[k]]) = 0 \leq k < \text{size}(b) \land 0 \leq b[k] < \text{size}(b) \).
- As with \( wp \) and \( sp \), the domain predicate for an expression is unique only up to logical equivalence. For example, \( D(x/y + u/v) = y \neq 0 \land v \neq 0 \Leftrightarrow v \neq 0 \land y \neq 0 \).
- **Definition** (Domain predicate \( D(e) \) for expression \( e \)): We must define \( D \) for each kind of expression that can cause a runtime error.
  - \( D(c) = D(v) = T \), where \( c \) is a constant and \( v \) is a variable. (Evaluation of a variable or constant doesn't cause failure.)
  - \( D(b[e]) = D(e) \land 0 \leq e < \text{size}(b) \)
  - \( D(e_1 / e_2) = D(e_1 \mathbin{\%} e_2) = D(e_1) \land D(e_2) \land e_2 \neq 0 \)
  - \( D(\text{sqrt}(e)) = D(e) \land e \geq 0 \)
  - And so on, depending on the datatypes and operations being used.
  - The various operations (\(+\), \(-\), etc.) and relations (\(\leq\), \(=\), etc.) don't cause errors, but we still have to check their subexpressions:
    - \( D(e_1 \text{ op } e_2) = D(e_1) \land D(e_2) \), except for \( \text{op} = / \) or \( \% \)
    - \( D(\text{op } e) = D(e) \), unless you add an operator that can cause runtime failure.
  - \( D(\text{if } B \text{ then } e_1 \text{ else } e_2 \text{ fi}) = D(B) \land (B \rightarrow D(e_1)) \land (\neg B \rightarrow D(e_2)) \)
    (For a conditional expression, we only need safety of the one branch we execute.)

**Example 7**:
\( D(b[b[k]]) = D(b[k]) \land 0 \leq b[k] < \text{size}(b) \)
\( = D(k) \land 0 \leq k < \text{size}(b) \land 0 \leq b[k] < \text{size}(b) \)
\( \Leftrightarrow 0 \leq k < \text{size}(b) \land 0 \leq b[k] < \text{size}(b) \)

**Example 8**: \( D((-b + \text{sqrt}(b*b - 4*a*c))/(2*a)) \)
\( = D(e) \land D(2*a) \land 2*a \neq 0 \), where \( e = -b + \text{sqrt}(b*b - 4*a*c) \)
\( = D(-b) \land D(\text{sqrt}(b*b - 4*a*c)) \land D(2*a) \land 2*a \neq 0 \)
\( \Leftrightarrow D(\text{sqrt}(b*b - 4*a*c)) \land 2*a \neq 0 \)  % since \( D(-b) = D(2*a) = T \)
\( = D(b*b - 4*a*c) \land (b*b - 4*a*c \geq 0) \land 2*a \neq 0 \)
\( \Leftrightarrow b*b - 4*a*c \geq 0 \land 2*a \neq 0 \)

**Example 9**: \( D(\text{if } 0 \leq k < \text{size}(b) \text{ then } b[k] \text{ else } 0 \text{ fi}) \)
\( = D(B) \land (B \rightarrow D(b[k])) \land (\neg B \rightarrow D(0)) \), where \( B = 0 \leq k < \text{size}(b) \)
\( = (B \rightarrow D(b[k])) \land (\neg B \rightarrow T) \)  % since \( D(B) = T \) and \( D(0) = T \)
\( \Leftrightarrow B \rightarrow D(b[k]) \)
\( = B \rightarrow D(k) \land 0 \leq k < \text{size}(b) \)  % expanding \( D(b[k]) \)
\( \Leftrightarrow B \rightarrow T \land B \)
\( \Leftrightarrow T \)
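As one more worked instance of these rules (this example is not from the original handout, but follows directly from the definitions above), consider an expression that combines division and array indexing:

\[
\begin{aligned}
D(x / b[i]) &= D(x) \land D(b[i]) \land b[i] \neq 0 \\
            &= T \land \big(D(i) \land 0 \leq i < \text{size}(b)\big) \land b[i] \neq 0 \\
            &\Leftrightarrow 0 \leq i < \text{size}(b) \land b[i] \neq 0
\end{aligned}
\]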
F. Domain Predicates for Avoiding Runtime Errors in Programs
- Recall that we extended our notion of operational semantics to include \( \langle S, \sigma \rangle \rightarrow^* \langle E, \bot_e \rangle \) to indicate that evaluation of \( S \) causes a runtime failure.
- We can avoid runtime failure of statements by adding domain predicates to the preconditions of statements. Though for loops we can't in general calculate the \( wp/wlp \), we can calculate the domain predicate for them.
- **Definition:** For a statement \( S \), the predicate \( D(S) \) gives a sufficient condition to avoid runtime errors.
  - \( D(\text{skip}) = T \)
  - \( D(v := e) = D(e) \)
  - \( D(b[e_1] := e_2) = D(b[e_1]) \land D(e_2) \)
  - \( D(S_1 ; S_2) = D(S_1) \land wp(S_1, D(S_2)) \) [Running \( S_1 \) when \( D(S_1) \) holds tells us \( S_1 \) won't cause an error. Running \( S_1 \) when \( wp(S_1, D(S_2)) \) holds tells us that \( S_1 \) will establish \( D(S_2) \), so running \( S_2 \) won't cause an error.]
    - If \( \sigma \models D(S_1) \), then \( \bot \notin M(S_1, \sigma) \).
    - If \( \sigma \models wp(S_1, D(S_2)) \), then \( M(S_1, \sigma) \models D(S_2) \).
    - If \( M(S_1, \sigma) \models D(S_2) \), then \( \bot \notin M(S_2, M(S_1, \sigma)) \).
  - \( D(\text{if } B \text{ then } S_1 \text{ else } S_2 \text{ fi}) = D(B) \land (B \rightarrow D(S_1)) \land (\neg B \rightarrow D(S_2)) \)
  - \( D(\text{if } B_1 \rightarrow S_1 \mathbin{\square} B_2 \rightarrow S_2 \text{ fi}) = D(B_1) \land D(B_2) \land (B_1 \lor B_2) \land (B_1 \rightarrow D(S_1)) \land (B_2 \rightarrow D(S_2)) \). Note we need \( (B_1 \lor B_2) \) to avoid failure of the nondeterministic if-fi due to none of the guards holding.
    - This definition extends easily to if-fi with more than two guarded commands.
  - \( D(\text{while } B \text{ do } S_1 \text{ od}) = D(B) \land (B \rightarrow D(S_1)) \)
  - \( D(\text{do } B_1 \rightarrow S_1 \mathbin{\square} B_2 \rightarrow S_2 \text{ od}) = D(B_1) \land D(B_2) \land (B_1 \rightarrow D(S_1)) \land (B_2 \rightarrow D(S_2)) \). The domain predicate for nondeterministic do-od is like that for if-fi, except that having none of the guards hold does not cause an error.

**Calculating \( wp \) for loop-free programs**
- With the domain predicates, it's easy to extend \( wlp \) to \( wp \) for loop-free programs, because we don't have to argue for termination of a loop.
- **Definition**: \( wp(S, q) = D(S) \land D(w) \land w \), where \( w = wlp(S, q) \).
  - \( D(S) \) tells us that running \( S \) won't cause an error, \( D(w) \) tells us that \( w \) makes sense, and \( w \) tells us that running \( S \) will establish \( q \) (if \( S \) terminates).

**Example 10**: If a program does a division, then the \( wp \) and \( wlp \) can differ.
- Let \( p_2 = wp(x := y;\ z := v/x,\ z > x+2) = wp(x := y, p_1) \),
- where \( p_1 = wp(z := v/x,\ z > x+2) = D(z := v/x) \land D(w) \land w \), with \( w = wlp(z := v/x,\ z > x+2) = v/x > x+2 \).
  \( p_1 = D(z := v/x) \land D(v/x > x+2) \land (v/x > x+2) = x \neq 0 \land x \neq 0 \land v/x > x+2 \Leftrightarrow x \neq 0 \land v/x > x+2 \)
- So \( p_2 = wp(x := y, p_1) = wp(x := y,\ x \neq 0 \land v/x > x+2) \)
  \( = wlp(x := y,\ x \neq 0 \land v/x > x+2) \), since \( x := y \) causes no errors
  \( = y \neq 0 \land v/y > y+2 \)

**Example 11**: Let's calculate \( p_0 = wp(x := b[k],\ \text{sqrt}(x) \geq 1) \).
With \( S = x := b[k] \) and \( q = \text{sqrt}(x) \geq 1 \), we have
- \( p_0 = wp(S, q) = D(S) \land D(w) \land w \), where \( w = wlp(S, q) = wlp(x := b[k],\ \text{sqrt}(x) \geq 1) = \text{sqrt}(b[k]) \geq 1 \)
- Breaking this down,
  - \( wlp(S, q) = wlp(x := b[k], \sqrt{x} \geq 1) \Leftrightarrow \sqrt{b[k]} \geq 1 \), so
    \[ D(wlp(S, q)) = D(\sqrt{b[k]} \geq 1) = D(b[k]) \land b[k] \geq 0 = 0 \leq k < \text{size}(b) \land b[k] \geq 0 \]
    \[ D(S) = D(x := b[k]) = D(k) \land 0 \leq k < \text{size}(b) \Leftrightarrow 0 \leq k < \text{size}(b) \]
- Combining, we get
  \[ wp(x := b[k], \sqrt{x} \geq 1) \equiv D(wlp(x := b[k], \sqrt{x} \geq 1)) \land wlp(x := b[k], \sqrt{x} \geq 1) \land D(x := b[k]) \]
  \[ \Leftrightarrow (\sqrt{b[k]} \geq 1) \land (0 \leq k < \text{size}(b) \land b[k] \geq 0) \land (0 \leq k < \text{size}(b)) \]
  \[ \Leftrightarrow 0 \leq k < \text{size}(b) \land b[k] \geq 0 \land \sqrt{b[k]} \geq 1 \]
  (which, if we decide to simplify numerically,)
  \[ \Leftrightarrow 0 \leq k < \text{size}(b) \land b[k] \geq 1 \]

Weakest Preconditions Part 2: Calculating wp, wlp; Domain Predicates
CS 536: Science of Programming, Spring 2021

A. Why
- The weakest precondition and weakest liberal precondition are the most general preconditions that a program needs in order to run correctly.

B. Objectives
At the end of this activity you should be able to
- Describe the relationship between \( wp(S, q_1 \lor q_2) \), \( wp(S, q_1) \), and \( wp(S, q_2) \), and how it differs for deterministic and nondeterministic programs.
- Calculate the \( wlp \) of a simple loop-free program.

C. Problems
1. How are \( wp(S, q_1 \lor q_2) \), \( wp(S, q_1) \), and \( wp(S, q_2) \) related if \( S \) is deterministic? If \( S \) is nondeterministic?

For Problems 2–4, just syntactically calculate the \( wlp \); don't also logically simplify the result.

2. Calculate the \( wlp \) in each of the following cases.
   a. \( wlp(k := k - s,\ n = 3 \land k = 4 \land s = -7) \)
   b. \( wlp(n := n*(n-k),\ n = 3 \land k = 4 \land s = -7) \)
   c. \( wlp(n := n*(n-k);\ k := k - s,\ n > k + s) \)

3. Let \( Q(k, s) = 0 \leq k \leq n \land s = \text{sum}(0, k) \), where \( \text{sum}(u, v) \) is the sum of \( u, u+1, \ldots, v \) (when \( u \leq v \)) or 0 (when \( u > v \)).
   a. Calculate \( wp(k := k+1;\ s := s+k,\ Q(k, s)) \).
   b. Calculate \( wp(s := s+k+1;\ k := k+1,\ Q(k, s)) \).
   c. Calculate \( wp(s := s+k;\ k := k+1,\ Q(k, s)) \). (This one isn't compatible with \( s = \text{sum}(0, k) \).)

4. Calculate the \( wp \) below.
   a. \( wp(\text{if } B \text{ then } x := x/2 \text{ fi};\ y := x,\ x = 5 \land y = z) \)
   b. \( wp(\text{if } x \geq 0 \text{ then } x := x^2 \text{ else } x := y \text{ fi};\ x := c^x,\ a \leq x < y) \)

For Problems 5 and 6, don't forget the domain predicates. You can logically simplify as you go.

5. Calculate \( p \) to be the \( wp \) in \( \{p\}\ x := y/b[k]\ \{x > 0\} \).
6. Calculate \( p_1 \) and \( p_2 \) to be the \( wp \)'s in \( \{p_1\}\ y := \text{sqrt}(b[k])\ \{z < y\} \) and \( \{p_2\}\ k := x/k\ \{p_1\} \).
{"Source-Url": "http://cs.iit.edu/~cs536/handout/Class_11_536_q.pdf", "len_cl100k_base": 5020, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 30949, "total-output-tokens": 5578, "length": "2e12", "weborganizer": {"__label__adult": 0.0003995895385742187, "__label__art_design": 0.0004265308380126953, "__label__crime_law": 0.00040435791015625, "__label__education_jobs": 0.005558013916015625, "__label__entertainment": 9.179115295410156e-05, "__label__fashion_beauty": 0.0001933574676513672, "__label__finance_business": 0.00022494792938232425, "__label__food_dining": 0.0005636215209960938, "__label__games": 0.0008769035339355469, "__label__hardware": 0.0009551048278808594, "__label__health": 0.0006113052368164062, "__label__history": 0.000247955322265625, "__label__home_hobbies": 0.0001951456069946289, "__label__industrial": 0.000690460205078125, "__label__literature": 0.000507354736328125, "__label__politics": 0.00032329559326171875, "__label__religion": 0.0007724761962890625, "__label__science_tech": 0.0269927978515625, "__label__social_life": 0.00019168853759765625, "__label__software": 0.0052337646484375, "__label__software_dev": 0.953125, "__label__sports_fitness": 0.0003724098205566406, "__label__transportation": 0.0007176399230957031, "__label__travel": 0.0002233982086181641}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13041, 0.02754]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13041, 0.78286]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13041, 0.66074]], "google_gemma-3-12b-it_contains_pii": [[0, 2037, false], [2037, 4842, null], [4842, 7452, null], [7452, 10208, null], [10208, 10980, null], [10980, 13041, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2037, true], [2037, 4842, null], [4842, 7452, null], [7452, 10208, null], [10208, 10980, null], [10980, 13041, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13041, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13041, null]], "pdf_page_numbers": [[0, 2037, 1], [2037, 4842, 2], [4842, 7452, 3], [7452, 10208, 4], [10208, 10980, 5], [10980, 13041, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13041, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
522a781d5e7f6ddbff6ea4759fb464f6df33d344
Styling R plots with cascading style sheets and Rcssplot

Tomasz Konopka

February 25, 2018

Abstract

Package Rcssplot provides a framework for customizing R plots in a way that separates data-handling code from appearance-determining settings.

## 1 Introduction

The R environment provides numerous ways to fine-tune the appearance of plots and charts. Taking advantage of these features can make complex data visualizations more appealing and meaningful. For example, customization can make some components in a composite visualization stand out from the background. However, such tuning can result in code that is long and complex. A specific problem with code for graphics is that it often mixes operations on data with bookkeeping of visual appearance. The mixture makes such code difficult to maintain and extend. A similar problem in web development is addressed by separating style settings from content using cascading style sheets. The Rcssplot package implements a similar mechanism for the R environment.

This vignette is organized as follows. The next section reviews how to create composite visualizations with base graphics. Later sections describe how to manage visual style using Rcssplot. The vignette ends with a summary and a few pointers to other graphics frameworks and packages.

## 2 Styling plots with base graphics

To start, let's look at styling plots using R's built-in capabilities, called 'base graphics'. For concreteness, let's use an example with a bar chart and a small data vector with made-up numbers.

```r
a <- setNames(c(35, 55, 65, 75, 80, 80), letters[1:6])
a
##  a  b  c  d  e  f
## 35 55 65 75 80 80
```

The function to draw a bar chart in R is `barplot`. We can apply it on our data, `a`, to obtain a chart with R's default visual style (Figure 1A).

```r
barplot(a, main="Base graphics")
```

The output contains many of the elements that we expect from a bar chart (bars, axes, etc.). But there is room for improvement. At a minimum, the chart requires a title and a label for the vertical axis. We might also like to change some colors and spacings. Many of these features can be tuned directly through the `barplot` function (Figure 1B).

```r
barplot(a, main="Manual tuning", ylab="y label",
        col="#000080", border=NA, space=0.35)
```

The function call is now longer, but the output is more complete. It is possible to tune the plot a little further using other arguments to `barplot`. However, some aspects of the chart, for example margins, are not accessible in this manner. Furthermore, we may wish to add other custom elements to the chart area, for example a subtitle. To adjust or to create these elements, it is necessary to issue several function calls. In practice it is convenient to encapsulate such commands in a custom function.

Figure 1: Charts created with base graphics using: (A) R's `barplot` function and default settings; (B) R's `barplot` function and some custom settings; (C) a custom plot function that styles bars, axes, and labels individually.
```r
## helper builds a string describing a range
range.string <- function(x) {
  paste0("Data range is [", min(x), ", ", max(x), "]")
}

## barplot with several custom settings and components
base.barplot.1 <- function(x, main="Custom plot function", ylab="y label") {
  ## start with a plot with bars, but nothing else
  barpos <- barplot(x, col="#000080", axes=FALSE, axisnames=FALSE,
                    border=NA, space=0.35)
  ## add custom components
  axis(1, at=barpos[,1], labels=names(x), lwd=0, col="#111111",
       cex.axis=1.2, line=-0.35)
  axis(2, col.ticks="#444444", col.axis="#444444", cex.axis=1.2,
       lwd=1.2, las=1, tck=-0.03, lwd.ticks=1.2)
  mtext(main, adj=0, line=2.2, cex=1.1)
  mtext(range.string(x), adj=0, line=0.9, cex=0.8, col="#444444")
  mtext(ylab, side=2, cex=0.8, line=3, col="#444444")
}
```

The first block above is a helper function that constructs a subtitle. The second block defines the function `base.barplot.1`. It takes as input a data vector `x` and two strings for the title and y-axis label. The first line of the function body creates a chart without excess decorations. Subsequent lines add axes and labels. Each command carries several custom settings. We can now apply the custom function to our data (Figure 1C).

```r
base.barplot.1(a)
```

The function call is concise, yet its output is a bar chart that looks legible and sophisticated. Coding custom functions like `base.barplot.1` is the usual way of making composite charts with R's base graphics. However, this approach has some disadvantages.

- The custom function is now so specialized that it may only be fit for one-time use. Although we can produce many charts by passing different data vectors and labels, we cannot easily change any visual aspects without updating the function definition.
- Because the function mixes code that manipulates data with code that adjusts visual appearance, there are opportunities to introduce bugs during tuning or maintenance.
- It is rather difficult to create a second function with the same visual style and to keep these styles consistent throughout the lifetime of a long project.

These observations stem from the fact that the custom function performs several distinct roles. First, it combines graphical commands to create a composite visualization. Second, it performs some useful manipulations on the data (here, computing the range). Third, the function styles graphical components. The difficulties in maintenance all arise from the styling role. Thus, it would be useful to separate this role from the others, i.e. to provide styling settings that are independent from the data-handling instructions.

## 3 Styling with cascading style sheets

The Rcssplot package provides a mechanism to style R's graphics that is inspired by cascading style sheets (css) used in web-page design. In this approach, settings for visual representation are stored in a file that is separate from both the raw data and the code that creates visualizations.

### 3.1 Using Rcss styles

Let's adopt a convention where style definitions have Rcss extensions. Let's begin with a style file called vignettes.bar1.Rcss. This file is available in a sub-folder along with the package vignette.

```
barplot {
  border: NA;
  col: #000080;
  space: 0.35;
}
```

The file contains a block with the name `barplot`, which corresponds to R's function for bar charts. Elements within the block are property/value pairs that correspond to the function arguments. We can read the style definition into the R environment using function `Rcss`.
```r
library("Rcssplot")
style1 <- Rcss("Rcss/vignettes.bar1.Rcss")
```

We can look inside the object to check that it loaded correctly.

```r
style1
## Rcssplot:
## Defined selectors: barplot
## Use function printRcss() to view details for individual selectors
printRcss(style1, "barplot")
## Rcssplot: barplot
## | border: NA
## | col: #000080
## | space: 0.35
##
## Defined classes:
```

The first command displays some basic information about the newly loaded style. The second command shows more details for the `barplot` component (called a selector). In this case, we recognize the three property/value pairs from the Rcss file.

Next, let's use the style object in a plot. The Rcssplot package provides wrappers for many of R's graphics functions. These wrappers have the prefix Rcss and accept the same arguments as their base-graphics parents. For example, to create a barplot, we invoke `Rcssbarplot` (Figure 2A).

```r
Rcssbarplot(a, main="Rcssbarplot, unstyled", ylab="y label")
```

When used in plain form as above, the output of the wrapper is exactly the same as from base graphics `barplot`. But we can add styling by passing our style object as an argument (Figure 2B).

```r
Rcssbarplot(a, main="Rcssbarplot, styled", ylab="y label", Rcss=style1)
```

The output is analogous to one of the previous examples (cf. Figure 1B). Previously, we achieved the effect by specifying three arguments within a `barplot` function call (border, col, and space). The Rcssplot alternative requires only one argument: custom settings are extracted automatically from the style object, `style1`.

In some cases it is useful to override settings defined in a style sheet (Figure 2C).

```r
Rcssbarplot(a, main="Rcssbarplot, override", ylab="y label",
            space=1, Rcss=style1)
```

Here, the bar width is determined by `space=1` in the function call, despite this property also being specified in the style object. Thus, values set manually take precedence over cascading style sheets.

Figure 2: Charts created with base graphics and Rcssplot using: (A) the default style; (B) a style determined through a style sheet; (C) a style sheet, but with the bar width over-ridden by a setting within a function call.

### 3.2 Using Rcss classes

Next, let's implement the entire custom bar plot using style sheets, and introduce a new feature, style classes. We need additional css definitions. These are encoded in another file, vignettes.bar2.Rcss.

```
axis {
  cex.axis: 1.2;
}
axis.x {
  line: -0.35;
  lwd: 0;
}
mtext.ylab, mtext.submain, axis.y {
  col: #444444;
}
axis.y {
  col.axis: #444444;
  col.ticks: #444444;
  las: 1;
  lwd: 1.2;
  lwd.ticks: 1.2;
  tck: -0.03;
}
mtext {
  cex: 0.8;
  adj: 0;
}
mtext.main {
  line: 2.2;
  cex: 1.1;
}
mtext.ylab {
  line: 3;
  adj: 0.5;
}
mtext.submain {
  line: 0.9;
}
```

The definitions are again arranged into blocks that correspond to R's base graphics commands. It is worth noting a few features.

- The values in the style sheet match the settings hard-coded into function base.barplot.1. The format of the style sheet makes it easy to identify property/value pairs.
- Some blocks contain names with dots followed by a string, e.g. axis.x. This notation defines property/value pairs that are activated only in particular circumstances. In the case of axis.x, the definitions pertain to function Rcssaxis, but only when accompanied by class label x. This will become clearer below.
- Some blocks contain names for several base graphics components separated by commas, e.g. `mtext.ylab, mtext.submain, axis.y`.
This syntax defines property/value pairs for several components at once. In this case, it is convenient to specify a common color.

We can now write a new function based on the Rcssplot wrappers.

```r
## barplot using Rcssplot, version 1
rcss.barplot.1 <- function(x, main="Custom Rcss plot", ylab="y label",
                           Rcss="default", Rcssclass=c()) {
  ## create an empty barplot
  barpos <- Rcssbarplot(x, axes=FALSE, axisnames=FALSE,
                        Rcss=Rcss, Rcssclass=Rcssclass)
  ## add custom components
  Rcssaxis(1, at=barpos[,1], labels=names(x),
           Rcss=Rcss, Rcssclass=c(Rcssclass, "x"))
  Rcssaxis(2, Rcss=Rcss, Rcssclass=c(Rcssclass, "y"))
  Rcssmtext(main, Rcss=Rcss, Rcssclass=c(Rcssclass, "main"))
  Rcssmtext(range.string(x), Rcss=Rcss, Rcssclass=c(Rcssclass, "submain"))
  Rcssmtext(ylab, side=2, Rcss=Rcss, Rcssclass=c(Rcssclass, "ylab"))
}
```

The structure mirrors `base.barplot.1`, but the function also accepts a style object `Rcss` and a vector of class labels `Rcssclass`. Within the function body, all the custom graphical settings are replaced by these two arguments. When there are multiple calls to one graphics function (e.g. `Rcssaxis` for the x and y axes), the `Rcssclass` vector contains distinguishing labels. These labels match the css subclasses we saw previously. The output from the new function is a complete plot with all our custom settings (Figure 3A).

```r
style2 <- Rcss(c("Rcss/vignettes.bar1.Rcss", "Rcss/vignettes.bar2.Rcss"))
rcss.barplot.1(a, main="Rcss style2", Rcss=style2)
```

The first line creates a new style object, `style2`, using the Rcss definitions from both files displayed above. The call to `rcss.barplot.1` then creates the chart. The advantage of this approach is that we can now change the visual output by replacing the Rcss style object with another one, without re-coding the custom function. One way to change the style is to edit the Rcss files (or use different files), load the definitions into a new style object, and generate a new figure with the new style. Another way, which we discuss next, is to define multiple styles within one Rcss object.

### 3.3 Using multiple styles

Let's look at another Rcss file, vignettes.bar3.Rcss.

```
barplot.typeB {
  col: #449944;
  space: 0.6;
}
mtext.typeB.main {
  cex: 1.0;
  font: 2;
}
```

The two blocks are decorated with a subclass called `typeB`. This class name is not explicitly used within the code of the plot function `rcss.barplot.1`. However, we can prime the plot function to use these definitions by providing the class name during the function call (Figure 3B).

```r
style3 <- Rcss(paste0("Rcss/vignettes.bar", c(1, 2, 3), ".Rcss"))
rcss.barplot.1(a, main="Rcss style3, class typeB", Rcss=style3,
               Rcssclass="typeB")
```

The output now incorporates settings defined in the generic `barplot` and `mtext` css blocks, but also the settings targeted using the `typeB` subclass. As in conventional cascading style sheets, when a parameter is specified in multiple locations within an Rcss object, the definition with the more specific class takes precedence. When the `Rcssclass` argument contains items that are not recognized, these items are simply ignored (Figure 3C).

In summary, we saw in this section how to use cascading style sheets to determine visual appearance. This approach has several advantages over using base graphics alone.

- The new function separates the details of visualization from the R code. This makes it easier to tweak aesthetics (in the Rcss files) without worrying about the code structure.
- The new function is shorter because calls to commands that generate structure (e.g. axis and mtext) are not interspersed with details of graphical parameters. This makes it easier to see the organization of the composite graphic.
- The styles can be reused in several custom functions. Thus, it is straightforward to maintain a uniform style across a family of functions.

In the next section we will look at additional tricks that can simplify the creation of custom graphics.

## 4 Additional features

This section covers some "advanced" features. The first three subsections deal with reducing repetitive code. The last subsection introduces the use of css objects as general data stores.

### 4.1 Overloading base graphics

Although our custom function rcss.barplot.1 provides us with opportunities to tune the chart, its code contains a number of inelegant repetitive elements. One of these is the Rcss prefix before each of the plot commands. It is possible to avoid this prefix by overloading the base graphics functions with their Rcssplot wrappers. Overloading is achieved using the function RcssOverload.

```r
## barplot using Rcssplot, version 2 (using overloading)
rcss.barplot.2 <- function(x, main="Custom Rcss plot", ylab="y label",
                           Rcss="default", Rcssclass=c()) {
  ## overload base graphics functions by Rcssplot wrappers
  RcssOverload()
  ## create a barplot (without Rcss prefixes)
  barpos <- barplot(x, axes=FALSE, axisnames=FALSE,
                    Rcss=Rcss, Rcssclass=Rcssclass)
  axis(1, at=barpos[,1], labels=names(x),
       Rcss=Rcss, Rcssclass=c(Rcssclass, "x"))
  axis(2, Rcss=Rcss, Rcssclass=c(Rcssclass, "y"))
  mtext(main, Rcss=Rcss, Rcssclass=c(Rcssclass, "main"))
  mtext(range.string(x), Rcss=Rcss, Rcssclass=c(Rcssclass, "submain"))
  mtext(ylab, side=2, Rcss=Rcss, Rcssclass=c(Rcssclass, "ylab"))
}
```

Here, the first step signals that subsequent calls to e.g. `axis` should actually invoke the corresponding wrappers, e.g. `Rcssaxis`. The subsequent code thus omits the Rcss prefixes. Note that executing RcssOverload masks several commands from base graphics; the step carries numerous side effects for the working environment. Such behavior is typically undesirable. In this case, however, the net effect is similar to what could be achieved by masking base graphics functions within the package, taking effect automatically when the package is loaded. The implementation with an explicit overload step provides a mechanism to activate the masking only when needed. It provides a means to use both base graphics and Rcssplot wrappers within a single project.

### 4.2 Using a default style and compulsory classes

Other repetitive elements are the constructions Rcss=Rcss and Rcssclass=Rcssclass. They ensure that the style object and the class specified through the function call are passed on to the individual wrappers. We can avoid this repetition by setting a default style and a compulsory class.

Handling of default values is achieved through the objects RcssDefaultStyle and RcssCompulsoryClass. These objects can be defined in any environment, for example in the global environment in the console or inside a function. When present, the package wrappers detect them and use the information therein to influence plot behavior. Consider, for example, the following custom function.
```r
## barplot using Rcssplot, version 3 (using defaults & compulsory classes)
rcss.barplot.3 <- function(x, main="Custom Rcss plot", ylab="y label",
                           Rcss="default", Rcssclass=c()) {
  ## overload base graphics, set defaults and compulsory classes
  RcssOverload()
  RcssDefaultStyle <- RcssGetDefaultStyle(Rcss)
  RcssCompulsoryClass <- RcssGetCompulsoryClass(Rcssclass)
  ## create a barplot (without Rcss arguments)
  barpos <- barplot(x, axes=FALSE, axisnames=FALSE)
  axis(1, at=barpos[,1], labels=names(x), Rcssclass="x")
  axis(2, Rcssclass="y")
  mtext(main, Rcssclass="main")
  mtext(range.string(x), Rcssclass="submain")
  mtext(ylab, side=2, Rcssclass="ylab")
}
```

The preparation steps here perform overloading, and then set a default style and a compulsory class. Subsequent calls to graphics functions do not refer to the object Rcss or the class Rcssclass. Nonetheless, the output of the custom function can exhibit styling.

- Calls to `axis` and `mtext` in the above function still carry Rcssclass arguments. These are necessary to distinguish styling between the x- and y-axis, and between the title and sub-title. However, setting the compulsory class reduces clutter (there is no need to write `Rcssclass=Rcssclass`).
- It is important to note that the preparation steps set RcssDefaultStyle and RcssCompulsoryClass with the help of function calls. Their use will become clearer in the next section. In short, those functions help preserve defaults that may have been set outside of the custom function.

### 4.3 Using Rcssplot globally

In the previous two examples, rcss.barplot.2 and rcss.barplot.3, we used overloading and changes to defaults within those custom functions, i.e. in environments local to those functions. In some cases, it may be reasonable to apply these changes in the global environment instead. This can be achieved by running the preparation steps outside of the custom function.

```r
RcssOverload()
RcssDefaultStyle <- style3
RcssCompulsoryClass <- c()
```

Subsequent to these commands, the custom function can be simplified further.

```r
## barplot using Rcssplot, version 4 (assumes global use of Rcssplot)
rcss.barplot.4 <- function(x, main="Custom Rcss plot", ylab="y label",
                           Rcssclass="typeB") {
  ## adjust compulsory class
  RcssCompulsoryClass <- RcssGetCompulsoryClass(Rcssclass)
  ## create a barplot
  barpos <- barplot(x, axes=FALSE, axisnames=FALSE)
  axis(1, at=barpos[,1], labels=names(x), Rcssclass="x")
  axis(2, Rcssclass="y")
  mtext(main, Rcssclass="main")
  mtext(range.string(x), Rcssclass="submain")
  mtext(ylab, side=2, Rcssclass="ylab")
}
```

There are a couple of points to note.

- The function `rcss.barplot.4` assumes that overloading has taken place. This is evidenced by calls to, for example, `axis`, with `Rcssclass` arguments. Thus, if this function is ever invoked without a prior overloading step, those calls will generate errors.
- The function definition no longer carries an argument `Rcss`. The style is assumed to come entirely from the default style.
- The function still carries an argument `Rcssclass`. Keeping the argument is a mechanism that allows functions within a project to use different sub-classes without the need to repeatedly redefine the compulsory class in the global environment.

Sometimes, we may want to reset the default style and/or the compulsory style class(es). This is simply achieved by setting those objects to `NULL`.
```r
RcssDefaultStyle <- NULL
RcssCompulsoryClass <- NULL
```

Now that we've adjusted default settings within custom functions as well as in the global environment, let's revisit the functions `RcssGetDefaultStyle` and `RcssGetCompulsoryClass`. Consider the following snippet.

```r
RcssCompulsoryClass <- "bar0"
RcssCompulsoryClass
## [1] "bar0"
foo1 <- function() {
  RcssCompulsoryClass <- "bar1"
  RcssCompulsoryClass
}
foo1()
## [1] "bar1"
```

The first result is `bar0`; let's think of this as a css class that we wish to employ at a global level. In the first function, `foo1`, the compulsory class is set with a naive assignment. The return value reveals that within that function, the compulsory class becomes `bar1` and our previous value `bar0` is lost. This is normal behavior, but it does not reflect our intention to keep `bar0` as a global style class. To keep the intended global class, we can use the function `RcssGetCompulsoryClass`.

```r
foo2 <- function() {
  RcssCompulsoryClass <- RcssGetCompulsoryClass("bar2")
  RcssCompulsoryClass
}
foo2()
## [1] "bar0" "bar2"
RcssCompulsoryClass
## [1] "bar0"
```

Here, `foo2` looks up the compulsory class set in parent environments and augments it with the new label. The effective compulsory class within that function thus becomes a combination of the global and local settings. The final command shows that `RcssCompulsoryClass` in the global environment remains unaffected. The use of the labels `bar1` and `bar2` is thus localized to the custom functions.

The function `RcssGetDefaultStyle` fulfills an analogous role for style objects. The function call `RcssGetDefaultStyle("default")` returns an object equivalent to the one set in a parent environment.

### 4.4 Using custom selectors

In this section, let's switch our focus toward using cascading style sheets as general data structures. From an abstract viewpoint, Rcss objects are just stores of property/value pairs. Consider the style file vignettes.bar4.Rcss.

```
baraxis {
  stripe: 1;
}
barplot.dotted {
  col: #9999cc;
}
baraxis.dotted {
  stripe: 1;
  ylim: 0 101;
}
abline.dotted {
  col: #666666;
  lty: 2;
}
```

The first block is named `baraxis`, but this does not correspond to any of R's base graphics commands. Therefore, this block does not affect any of the Rcssplot wrapper functions. But we can write code to exploit the information in `baraxis` by extracting values manually. The package provides two functions for this purpose, `RcssProperty` and `RcssValue`.

```r
style4 <- Rcss(paste0("Rcss/vignettes.bar", c(1, 2, 4), ".Rcss"))
RcssProperty("baraxis", "stripe", Rcss=style4)
## $defined
## [1] TRUE
##
## $value
## [1] 1
```

The output signals that the `stripe` property in a `baraxis` block is indeed defined, and provides its value. A related command automatically substitutes undefined values with a provided default.

```r
RcssValue("baraxis", "stripe", default=0, Rcss=style4)
## [1] 1
RcssValue("baraxis", "stripes", default=0, Rcss=style4)
## [1] 0
```

The first result is 1 because the property `stripe` is defined, so the suggestion `default=0` is ignored. The second result is 0 because the misspelled property `stripes` is not present in the file. We can now exploit this feature to augment our bar chart with an option to draw horizontal rules instead of a y-axis.
We can now exploit this feature to augment our bar chart with an option to draw horizontal rules instead of a y-axis.

```r
## barplot using Rcssplot, version 5 (uses custom css selectors)
rcss.barplot.5 <- function(x, main="", ylab="Proportion (%)",
                           Rcss="default", Rcssclass=c()) {
  ## use overloading, custom style, compulsory class
  RcssOverload()
  RcssDefaultStyle <- RcssGetDefaultStyle(Rcss)
  RcssCompulsoryClass <- RcssGetCompulsoryClass(Rcssclass)
  ## extract custom properties - show axis? force ylim?
  stripes <- RcssValue("baraxis", "stripe", default=0)
  ylim <- RcssValue("baraxis", "ylim", default=NULL)
  ## create background
  barpos <- barplot(x, axes=FALSE, axisnames=FALSE, ylim=ylim,
                    col="#ffffff", border=NA)
  ## draw a bar chart
  axis(1, at=barpos[,1], labels=names(x), Rcssclass="x")
  if (stripes) {
    stripevals <- axis(2, lwd=0, labels=NA)
    labpos <- axis(2, lwd=0, lwd.ticks=0, Rcssclass="y")
    abline(h=labpos)
  } else {
    axis(2, Rcssclass="y")
  }
  barplot(x, axes=FALSE, axisnames=FALSE, add=TRUE)
  mtext(main, Rcssclass="main")
  mtext(range.string(x), Rcssclass="submain")
  mtext(ylab, side=2, Rcssclass="ylab")
}
```

Figure 4: Charts using custom css selectors: (A) horizontal rules instead of a y-axis; (B) styled rules with a fixed vertical scale; (C) again styled rules with a fixed vertical scale, showing new data.

Two commands near the top fetch values for `stripes` and `ylim`. The subsequent code produces output conditional on these new variables (Figure 4A).

```r
rcss.barplot.5(a, main="Stripes", Rcss=style4)
```

The style we loaded also defines a class `dotted` (Figure 4B).

```r
rcss.barplot.5(a, main="Stripes, y-scale 100", Rcss=style4, Rcssclass="dotted")
```

In addition to providing styling for the horizontal rules, the class `dotted` also defines a property `ylim`. Its value is used within `rcss.barplot.5` to force limits on the vertical axis. This behavior can be desirable for several reasons. If the plotted values are proportions in percentages, it may be useful to show the full range from 0% to 100%. A fixed range can also be useful when displaying plots side-by-side (Figure 4C).

```r
a2 <- setNames(c(12, 20, 26, 72, 88, 94), tail(letters, 6))
rcss.barplot.5(a2, main="... new data", Rcss=style4, Rcssclass="dotted")
```

In this example, the new data are easily compared with the old because the vertical scales in the charts are recognizably the same.
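Style classes can also be combined in a single call. (A hypothetical example: it assumes that the `typeB` class used earlier also carries styling in the loaded files; as seen above, compulsory classes are plain character vectors, so several class names can be supplied at once.)

```r
## hypothetical call combining two style classes
rcss.barplot.5(a, main="Stripes, two classes", Rcss=style4,
               Rcssclass=c("dotted", "typeB"))
```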
## 5 Summary

This vignette introduced the Rcssplot package through an extended example based on a bar chart. We started with a visualization implemented using R's base graphics, and then adapted this design using Rcssplot.

At the technical level, the package provides a framework for customizing R graphics through a system akin to cascading style sheets. One part of the framework consists of functions that manage information in style sheets. These functions parse Rcss files, extract property/value pairs relevant in various contexts, and manage default styles and classes. Another part of the framework consists of wrapper functions that mimic base graphics functions (`plot`, `axis`, `text`, etc.), but extract styling details from the cascading style objects.

From a usability perspective, the Rcssplot package breaks the building of composite visualizations down into distinct tasks. Fine-tuning of aesthetics is delegated to cascading style sheets, which are external to the R code. They can thus be adjusted safely without compromising data analysis, and they can be shared between projects. The R code that remains is focused on data analysis and on the structure of the composite visualization. It is thus easier to understand and maintain.

The Rcssplot package is intended to provide a straightforward and familiar means to tune graphics (given background in conventional cascading style sheets). It is important to note, however, that this is not the only graphics framework available for R. Indeed, other approaches have served as inspirations and models. In the space of static graphics, package ggplot2 provides a mature approach to creating complex charts [1]. It supports tuning via themes; package ggthemes provides several examples [2]. In the space of interactive visualizations, packages shiny [3] and plotly [4] create very compelling results.

## Acknowledgements

Many thanks to R's documentation and manuals. A particularly valuable resource is [5].

Rcssplot is developed on github with contributions from (in alphabetical order): cuche27, nfultz.

## References

[4] ..., Marianne Corvellec and Pedro Despouy. plotly: Create Interactive Web Graphics via 'plotly.js'. R package version 4.5.6, 2016.

## A Appendix

### A.1 Grammar

Parsing of cascading style sheets is performed within the Rcssplot package based on the grammar below. This formal definition is a summary and guide, and can serve as a comparison to the full css grammar of web design. However, actual parsing within the package is carried out manually, not using an auto-generated parser.

### A.2 Package versions

**v0.3.0**

- New functions for getting and setting values from Rcss objects: RcssValue, RcssUpdate. These functions are complementary to previously existing functions, but are less verbose, especially for fetching values from a default style.
- Better parsing and handling of special values in css files, e.g. TRUE/FALSE, NA, NULL.

**v0.2.0**

First version submitted to CRAN.

### A.3 Session info

```r
sessionInfo()
```

```
## R version 3.4.1 (2017-06-30)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.4 LTS
##
## Matrix products: default
## BLAS: /software/opt/R/R-3.4.1/lib/libRblas.so
## LAPACK: /software/opt/R/R-3.4.1/lib/libRlapack.so
##
## locale:
##  [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C
##  [3] LC_TIME=en_GB.UTF-8        LC_COLLATE=C
##  [5] LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8
##  [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C
##  [9] LC_ADDRESS=C               LC_TELEPHONE=C
##
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base
##
## other attached packages:
## [1] knitr_1.17     Rcssplot_0.3.0
##
## loaded via a namespace (and not attached):
## [1] compiler_3.4.1  magrittr_1.5    tools_3.4.1     stringi_1.1.5
## [5] highr_0.6       stringr_1.2.0   evaluate_0.10.1
```
{"Source-Url": "https://cran.rstudio.com/web/packages/Rcssplot/vignettes/Rcssplot.pdf", "len_cl100k_base": 7624, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 30075, "total-output-tokens": 8752, "length": "2e12", "weborganizer": {"__label__adult": 0.0002696514129638672, "__label__art_design": 0.0016679763793945312, "__label__crime_law": 0.00019860267639160156, "__label__education_jobs": 0.0003178119659423828, "__label__entertainment": 9.208917617797852e-05, "__label__fashion_beauty": 9.709596633911131e-05, "__label__finance_business": 0.0001329183578491211, "__label__food_dining": 0.00022268295288085935, "__label__games": 0.00044083595275878906, "__label__hardware": 0.0004727840423583984, "__label__health": 0.00016796588897705078, "__label__history": 0.00016832351684570312, "__label__home_hobbies": 7.939338684082031e-05, "__label__industrial": 0.00020265579223632812, "__label__literature": 0.00014507770538330078, "__label__politics": 0.00013172626495361328, "__label__religion": 0.0002942085266113281, "__label__science_tech": 0.00557708740234375, "__label__social_life": 7.283687591552734e-05, "__label__software": 0.0266571044921875, "__label__software_dev": 0.9619140625, "__label__sports_fitness": 0.000152587890625, "__label__transportation": 0.00017189979553222656, "__label__travel": 0.00017070770263671875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30337, 0.03918]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30337, 0.34461]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30337, 0.72437]], "google_gemma-3-12b-it_contains_pii": [[0, 2786, false], [2786, 5700, null], [5700, 8585, null], [8585, 10035, null], [10035, 13345, null], [13345, 15682, null], [15682, 19280, null], [19280, 21891, null], [21891, 24196, null], [24196, 26602, null], [26602, 28824, null], [28824, 30337, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2786, true], [2786, 5700, null], [5700, 8585, null], [8585, 10035, null], [10035, 13345, null], [13345, 15682, null], [15682, 19280, null], [19280, 21891, null], [21891, 24196, null], [24196, 26602, null], [26602, 28824, null], [28824, 30337, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30337, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30337, null]], "pdf_page_numbers": [[0, 2786, 1], [2786, 5700, 2], [5700, 8585, 3], [8585, 10035, 4], [10035, 13345, 5], [13345, 15682, 6], [15682, 19280, 7], [19280, 21891, 8], [21891, 24196, 9], [24196, 26602, 10], [26602, 28824, 11], [28824, 30337, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30337, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
0ef037fb61905d28c7317e20b6f6199cf0a26930
Model Check What You Can, Runtime Verify the Rest

Timothy L. Hinrichs, A. Prasad Sistla and Lenore D. Zuck
University of Illinois at Chicago
{hinrichs,sistla,zuck}@uic.edu

**Abstract**

Model checking and runtime verification are pillars of formal verification but for the most part are used independently. In this position paper we argue that the formal verification community would be well-served by developing theory, algorithms, implementations, and applications that combine model checking and runtime verification into a single, seamless technology. This technology would allow system developers to carefully choose the appropriate balance between offline verification of expressive properties (model checking) and online verification of important parts of the system's state space (runtime verification). We present several realistic examples where such technology appears necessary and a preliminary formalization of the idea.

## 1 Introduction

The virtues and drawbacks of model checking (MC) are well known: model checked systems undeniably satisfy some important properties, but some systems are too unwieldy for model checking in practice. Likewise the virtues and drawbacks of runtime verification (RV) are well known: most systems can be runtime verified, but that verification only ensures that some important properties hold for that fraction of the system that happens to be exercised during verification. In short, model checking guarantees completeness at the cost of applicability, and runtime verification guarantees applicability at the cost of completeness. But if there were techniques for integrating model checking and runtime verification (MC+RV), system developers would have fine-grained control over the tradeoff between completeness and applicability during the verification process.

Besides the intellectual gratification of hybridizing two well-known techniques for verification, MC+RV would allow the verification community to present a unified front to the outside world: to provide consumers of verification technology with a single conceptual product to understand and apply. Improving the accessibility of verification technology to the outside world will help both society as a whole, since software will contain fewer bugs, and the research community in particular, because of the increased source of real-world verification problems encountered in the wild.

The technical problems arising when combining model checking and runtime verification are mainly due to their substantially different underlying assumptions. Model checking runs offline (i.e., before the system is deployed), and runtime verification runs online (i.e., after the system is deployed). Model checking only works properly when the entire system can be examined, and runtime verification only requires a single run of the system. These differences are of course what makes the hybridization of the two techniques so attractive.

These basic observations about the potential benefits of MC+RV are not novel in and of themselves. [7] employed static program analysis (of which MC is a special case) to simplify runtime verification checks. [20] utilized the combination of model checking and runtime verification to identify security vulnerabilities in web applications. [8] utilizes runtime analysis to
perform an anytime variant of static analysis. While such examples of combining static and dynamic analysis exist, they are not based on a single, unified theory of combining these forms of analysis, and they require intimate knowledge of the underlying analysis (or verification) technology.

In this position paper, we advocate investigating the underlying theory of combining model checking and runtime verification. That theory will enable tools where a developer chooses the extent of verification appropriate to the application: the developer can prioritize how mission-critical each portion of a system is and verify that system accordingly. In short, we argue that verification technology will become commonplace only when that technology accounts for the limited resources (time, compute cycles, etc.) organizations have to devote to verification.

In the remainder of this paper, we begin by comparing the strengths and weaknesses of model checking and runtime verification in more detail (Section 2). We then state the position this paper puts forth (Section 3) and identify several domains where the combination of model checking and runtime verification appears to offer substantial benefit (Section 4). Then we briefly discuss a formalization of the central problem of MC+RV (Section 5) and finish with related work (Section 6) and a conclusion (Section 7).

## 2 Model Checking and Runtime Verification

**Model Checking.** Model checking is a technique where one takes a model of a system and a model of a property and algorithmically checks whether the system satisfies the property [10, 11, 27]. Typically, the systems one has in mind are hardware or software systems, and the specifications are temporal. While model checking is commonly applied to finite-state systems, at times it is possible to apply it to infinite-state systems using, for example, abstractions, symmetry properties, and small-model properties (e.g., [1, 14, 21, 25]).

Model checking can be explicit-state, where both the model and the properties (or their complements) are represented as automata-like structures, and model checking reduces to finding whether any path in the system violates the property. Of course, when liveness properties are concerned, one must consider only fair paths. An alternative approach to model checking is to use symbolic techniques (most often BDD- or SMT-based). Each approach has its strengths. For example, all systems we are aware of that model checked timed properties employ an explicit-state approach.

The attraction of model checking is that it is algorithmic, the alternative being the use of deductive methods and theorem provers (assuming that one wishes the verification to be at least automatically checked). However, whichever model checking approach one employs, state explosion remains the main hurdle to using model checking for large systems. Much work has been devoted to pushing the envelope, to the extent that some infinite-state systems have been model checked. Yet it is still often the case, especially with software, that the state space is too large and no known reduction in the state space removes this problem. This is the reason that model checking is mainly used as a debugging mechanism in the early design stages, and methods such as bounded model checking [6] have gained popularity.

Another drawback of model checking techniques is that they are not easily amenable to compositional reasoning. While much work has been done on compositional model checking [21], the application of such methods in "full-fledged" software systems is still impractical.
This is important since a single system often includes several components that can be swapped out for similarly behaving components, and one may wish to verify the system without the swappable components independently from the components themselves. To complicate matters further, some of the swappable components may be black boxes. Lack of model checking compositionality is also problematic for large (e.g., infinite-state) systems where there is good reason to believe that during most of its executions the system visits only a small number of states, and this portion of the system can be model checked. For these and other reasons (some of them will be elaborated on in the sequel), one may want to use model checking to verify only parts of a system and to use other methodologies to guarantee the correctness of other parts. We propose to use runtime verification for this purpose.

**Runtime Verification.** In contrast to model checking, runtime verification monitors a system and extracts information from its executions to detect (and, sometimes, react to) flaws. While the term runtime verification is only a decade old, the ideas behind runtime verification had been used long before. Recently, runtime verification has become an active research area because of its power to ensure correctness of systems that are not amenable to model checking. In particular, while model checking is mostly useful at the design stage, runtime verification is useful at the deployment stage [3, 4, 12, 19].

The main technical problems in runtime verification are (i) automatically adding runtime verification code to an existing system and (ii) minimizing the overheads of that code. A popular approach for the former is to use aspect-oriented programming techniques, which allow such code to be written independently of the original system and, through compiler techniques, automatically injected into the code [2]. For the second problem, some work has employed static analysis to eliminate any runtime checks that can easily be identified as unnecessary [7].

In contrast to model checking, runtime verification inherently performs well when exploring only a portion of the state space; it was designed to do so. But its overall guarantees are much weaker than those offered by model checking. The properties that are checked need only hold over the portion of the state space that is executed once the system is deployed, and the class of properties amenable to runtime verification is a strict subset of the properties amenable to model checking. Thus, while runtime verification is better behaved on partial state-space explorations, it offers weaker guarantees of completeness (both in terms of expressiveness and state-space coverage).

## 3 Position

Both runtime verification and model checking are powerful techniques for checking correctness of systems. Table 1 presents a rough comparison of the methods.

<table>
<thead>
<tr>
<th></th>
<th>Model Checking</th>
<th>Runtime Verification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Completeness</td>
<td>Entire State Space</td>
<td>Pertinent State Space</td>
</tr>
<tr>
<td>Applicable Properties</td>
<td>Arbitrary (temporal)</td>
<td>Safety+</td>
</tr>
<tr>
<td>Performance Penalty</td>
<td>Offline</td>
<td>Online</td>
</tr>
</tbody>
</table>

**Table 1: Model Checking vs. Runtime Verification**
Our position is that a theory for combining model checking and runtime verification offers numerous advantages and advances the community's long-term goal of making both hardware and software system verification ubiquitous. Our premise is that it is often relatively easy to identify a set of states where model checking techniques can be employed. This set of states can then be model checked. Once a system enters a state outside of this set, a runtime monitor can be invoked. Of course, one has to monitor whether or not the system's state is in or out of this set. Thus, the test of whether the system is in a state belonging to the set has to be simple; otherwise, the advantage of model checking the states in the set is compromised. In the sequel we give examples of several systems/properties where MC+RV is beneficial. Below we describe MC+RV in terms of the three evaluation criteria from Table 1.

**Completeness:** Model checking is concerned with the entire state space of the system, whereas runtime verification is only concerned with the portion of the state space that is relevant to a particular deployment. By combining model checking and runtime verification, the state space explored is somewhere in between the entire space and the deployment-relevant space.

**Applicable Properties:** Model checking allows us to verify temporal properties of a system, whereas runtime verification is usually limited to safety properties. A hybrid of the approaches would allow us to verify all that runtime verification can outside of the model checked state space, and all that model checking can in it.

**Performance Penalty:** The performance penalty for model checking occurs offline (i.e., before deployment), whereas runtime verification pays an online performance penalty. The performance penalty of the hybrid approach is a developer-controlled combination of offline and online.

We envision the results of this theory embodied within a tool that allows a developer to choose the extent to which model checking is performed, allowing runtime verification to handle the remaining system. Our message to developers is simple: model check what you can, and runtime verify the rest.

## 4 Examples

Here we present a number of examples illustrating why combining model checking and runtime verification (MC+RV) is useful for verifying real-world systems. For each example, we ask the questions:

- What portion of the space should we model check, and which portion should we runtime verify?
- What formula are we model checking, and what formula are we runtime verifying?

### 4.1 Contracts

In the context of programming languages, a software contract is a logical specification dictating how a piece of software is supposed to behave [22]. For example, a Heap is a data structure that stores an ordered set of elements. The contract for the Heap might require that inserting an element produces a heap that represents the same set as before the insert but with the new element added; merging two heaps results in a heap with the union of the original heaps' elements.

In theoretical computer science, researchers will often introduce a new algorithm and then prove its correctness. Those proofs of correctness usually assume the data structures they utilize behave as they are supposed to, e.g., the Heap obeys its contract. The benefit of this style of proof is that any implementation of those data structures that satisfies their contracts can be used without violating the correctness of the overall algorithm.
For example, [15] describes an algorithm for curve finding useful for computer vision that utilizes a Heap, and proves correctness as well as runtime complexity assuming the Heap satisfies its contract and complexity bounds.

Partitioning the proof of an algorithm's correctness into two phases (algorithm and data structures) points to an application of MC+RV. For systems relying on data structures, we can model check the correctness of those systems assuming the data structures behave according to their contracts. We can then independently verify the contracts of the data structures. For those situations where offline contract verification is impractical, we can runtime verify the contracts, which is functionality being built into modern programming languages. [16] describes an implementation of runtime contract verification that, for certain properties, preserves the (amortized) asymptotic behavior of the data structures.

In this example, it is clear what portion of the system ought to be model checked and what portion should be runtime verified. It is also clear what formula we intend to model check and what formula we should runtime verify: if the data structure's contract is \( \alpha \) and the overall property of the system we want to verify is \( \beta \), then we model check \( \alpha \rightarrow \beta \) and we runtime verify \( \alpha \). At runtime, the verifier is turned on only during the data structure operations.
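As a small illustration (a sketch, not from the paper; `heap_insert` and `heap_elements` are hypothetical helpers standing in for a real Heap implementation), a runtime verifier for the insert contract can simply wrap the operation:

```r
## runtime-verify the Heap insert contract (alpha) around one operation;
## heap_insert and heap_elements are hypothetical implementations
checked_insert <- function(h, x) {
  before <- heap_elements(h)
  h2 <- heap_insert(h, x)
  ## alpha: the new heap represents the old set plus the inserted element
  stopifnot(setequal(heap_elements(h2), union(before, x)))
  h2
}
```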
### 4.2 Race Conditions

Race conditions occur in multi-threaded software when the integrity of data depends on the order in which different threads access that data. For example, if Alice and Bob both try to withdraw the last $100 from their joint bank account, and there is some way for them both to succeed, then there is a race condition in the bank's software.

The authors of [13] outfitted Java with a runtime race condition detector and found that the overheads could be an order of magnitude or more slowdown of the software. This performance penalty is likely too high to warrant automatic race condition detection in production systems, especially when the race conditions are rare. However, if we could verify that race conditions do not occur "most of the time" via model checking, we might be willing to pay such overheads for runtime race condition detection in those rare situations when they might occur. For example, suppose that the bank software correctly handled the situation where Alice and Bob were both simultaneously trying to withdraw their last $100, but not the case where Alice, Bob, and their son Charlie were all simultaneously trying to withdraw the last of their money. If we could verify offline that the software performed correctly in the case of two competitors, we might pay for an order of magnitude slowdown in that exceedingly rare situation when three or more competitors accessed the system simultaneously.

Since the state space of the system grows exponentially with the number of competitors, one would wish to model check on as few competitors as possible. Yet, if there are good reasons to believe that the number of simultaneous competitors is usually small, one can employ MC+RV by model checking for this small number of competitors and runtime verifying when the number of competitors is higher. To apply MC+RV, the developer must decide what to model check and what to runtime verify, since she is the only one who knows the number of competitors that is rare enough not to warrant model checking.

The formula that we must model check is the obvious one: if \( \beta \) represents a lack of race conditions and \( \alpha \) represents that the number of competitors is not exceedingly rare, then we want to model check \( \alpha \rightarrow \beta \) and we runtime verify \( \alpha \).

To investigate this idea, we designed a variant of Peterson's 2-process mutual exclusion algorithm [24] that prevents two processes from accessing a critical section simultaneously. In our version, there are \( N \) threads (for any \( N > 1 \)), and the goal is to guarantee mutual exclusion when the number of competitors for access to the critical section is \( \leq 2 \). The protocol is presented in Figure 1. There, \( N \) processes run asynchronously. There is a global variable last that can take on values between 1 and \( N \) and denotes the id of the last process to wish to enter the critical section. Each process can be in one of four locations: location 0 (the initial state), where it may idle forever or wish to enter the critical section; location 1, where it declares that it wishes to enter the critical section by setting last to its own id; and location 2, where it waits until either no other process is competing or in the critical section (i.e., all others are in location 0), or last has changed since the process last modified it (hence it is not the last one to enter the competition). In either case, the process can then enter the critical section (location 3), after which it returns to its idle state.

We formally proved, using the method of invisible invariants [25] with the BDD-based model checker TLV, that

$$\square \left( \sum_{i=1}^{N} at_{\ell_{1..3}}[i] \leq 2 \;\rightarrow\; \forall i,j.\ (i \neq j \rightarrow \neg (at_{\ell_3}[i] \land at_{\ell_3}[j])) \right)$$

where \( at_{\ell_k}[i] \) denotes that the process whose id is \( i \) is at location \( k \). The summation term counts the processes whose location is in the range 1..3, i.e., the number of competitors. Thus, as long as the number of competing threads is no greater than 2, mutual exclusion is guaranteed (via model checking). To identify all violations of mutual exclusion, we need only outfit the system with a runtime verifier that performs mutual exclusion detection when the number of threads is greater than 2. (It is interesting to note that the protocol of Figure 1 also satisfies liveness, i.e., \( \forall i.\ \square (at_{\ell_1}[i] \rightarrow \Diamond at_{\ell_3}[i]) \).)

Following the terminology of the previous subsections, here \( \alpha \) is the property that the number of competitors is no greater than 2, and \( \beta \) is the mutual exclusion property. We model check \( \alpha \rightarrow \beta \) and we runtime verify \( \alpha \).
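In the notation of the displayed invariant (a restatement of the formula above, not additional material from the paper), the two halves of the MC+RV split can be written explicitly as

$$\alpha \;\equiv\; \square\left(\sum_{i=1}^{N} at_{\ell_{1..3}}[i] \leq 2\right), \qquad \beta \;\equiv\; \square\ \forall i,j.\ \left(i \neq j \rightarrow \neg(at_{\ell_3}[i] \land at_{\ell_3}[j])\right),$$

and the model checked implication, combined with a runtime check of \( \alpha \), yields \( \beta \) on every monitored run.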
### 4.3 Hardware Verification

[26] describes IDV, a system developed at Intel that enables simultaneous hardware logic design and verification. One of the keys to efficiency in this work is giving the developer the ability to provide the model checker with hints about the behavior of certain aspects of the design. For example, the developer can tell the model checker that two intermediate results are never simultaneously 1. This additional information helps prune the state space that must be verified; however, if the developer makes a mistake when providing the hints, the property that has been verified by the model checker may not hold on the actual system.

Developer-provided hints implicitly partition the state space of the system. Some of the state space can be used to model check high-level properties of the design, and other portions of the state space are used to verify lower-level properties. For those situations where the hints are beliefs about how the environment will interact with the system, it is natural to use runtime verification to verify the lower-level properties, i.e., to analyze whether or not the developer-provided hints are satisfied by the environment. Sufficient testing will uncover any flaws in the developer-provided hints.

In this example, the partitioning of the space into that which is model checked and that which is runtime verified is a direct result of the hints the developer provides. If the developer asserts hints \( \alpha_1, \ldots, \alpha_n \), and we want to model check the property \( \beta \), then the model checking query we actually analyze is \( \alpha_1 \land \cdots \land \alpha_n \rightarrow \beta \). The properties we verify at runtime are just \( \alpha_1, \ldots, \alpha_n \).

### 4.4 Web Services

Many web applications today rely on external web services. For example, an e-commerce web site will validate a credit card number by consulting the appropriate company's web service. Verifying that an e-commerce web site behaves appropriately is difficult because the code for external web services is unknown (and may change from the time the application is verified to the time it is deployed). Nevertheless, it is reasonable for a web developer to hope that verification technology can help give her confidence that her code behaves appropriately.

Here MC+RV is useful because model checking helps the developer verify that the web application is well-behaved assuming that the credit card company's web service is well-behaved. For example, if the web service is supposed to return a response in 15 milliseconds, and the web application is a real-time system that relies on that fact, then it can be model checked to ensure its real-time properties hold so long as the credit card company's service behaves as advertised. By runtime monitoring the credit card company's web service, we can detect when it takes longer than 15 milliseconds and identify situations in which the overall application may miss its real-time constraints through no fault of its own.

In this example, the partitioning of the system's state space is based on the code available at the time of model checking. The code that exists can be used for model checking; the remainder must be runtime verified. This kind of model checking requires additional information from the developer: a spec that dictates how the external web service is intended to behave. The model checking property of interest is again \( \alpha \rightarrow \beta \), where \( \alpha \) is the spec for the web service and \( \beta \) is the high-level behavior of the web application. At runtime we simply verify \( \alpha \) holds over each web service execution.
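To make the runtime half concrete, here is a toy sketch (hypothetical, not from the paper; `service` stands in for the actual web-service call, and the 15 ms budget comes from the example above):

```r
## runtime-verify alpha: each service call must respond within budget_ms
monitored_call <- function(service, ..., budget_ms = 15) {
  t0 <- Sys.time()
  result <- service(...)
  elapsed_ms <- as.numeric(difftime(Sys.time(), t0, units = "secs")) * 1000
  if (elapsed_ms > budget_ms) {
    ## alpha falsified: the real-time guarantee beta may no longer hold
    warning(sprintf("alpha violated: %.1f ms exceeds %d ms budget",
                    elapsed_ms, budget_ms))
  }
  result
}
```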
### 4.5 Repurposed Component

Many software applications delegate some low-level functionality to the operating system in which they are deployed. In Linux, for example, it is common to read the contents of a directory by executing the `ls` command. If the application interacts with the operating system by simply executing a system command written as a string, it may seem as though any possible OS command could be used. This is problematic from the perspective of model checking because now the entire state space of the application plus the entire state space of the operating system would need to be checked. Such problems can sometimes be addressed with MC+RV.

If through simple analysis or by developer intervention we learn that the application only uses one or two OS commands, we can model check the application and the implementations of those commands, without needing to model check the entire state space of the OS. We can then runtime verify that the application only invokes the OS commands that we model checked. In this example, we model check \( \alpha \rightarrow \beta \) where \( \alpha \) says the only OS commands are those on the developer's list, and \( \beta \) is the application property of interest. At runtime we monitor \( \alpha \): that the only OS commands executed are those from the list.

### 4.6 Exceptions

Computer systems can conceptually be broken into two kinds of code: exception-handling code and everything else. We often expect exception-handling code to be run infrequently; we also expect that many properties that we want to verify do not depend on exception-handling code. It is therefore reasonable to model check the system assuming no exceptions are triggered and to relegate the verification of the exception-handling code to runtime.

In this example, it is clear how to partition the state space into the parts that we model check and runtime verify. For model checking, consider all paths through the code that produce no exceptions; for runtime verification, place runtime verifiers only inside exception-handling code. It is also clear that if we want property \( \beta \) to hold, then we model check \( \beta \). What is less clear is how we runtime verify \( \beta \) in the exception-handling code. If \( \beta \) is a liveness property, runtime verification will not be possible. But even if \( \beta \) is a safety property, we may need to outfit the application to store the pertinent history so that the exception-handling code has access to the right information when performing runtime verification.

## 5 Discussion

From a formal perspective, we can view MC+RV as an instantiation of assume-guarantee reasoning. Given a system \( S \) and a property of interest \( \beta \), an instance of MC+RV constructs a (safety) formula \( \alpha \), model checks that \( S \) guarantees \( \beta \) assuming \( \alpha \), and then runtime verifies \( \alpha \). We know that any execution satisfying \( \alpha \) is guaranteed to satisfy \( \beta \) because of the model checker, but we do not know whether the remaining executions satisfy or falsify \( \beta \). The runtime verifier identifies executions for which \( \alpha \) is falsified and hence additional action must be taken, e.g., issue a warning, runtime verify \( \beta \), or even verify some other property \( \gamma \) related to \( \beta \). Note that runtime verifying \( \beta \) (or \( \gamma \)) may require having stored some history information in the state even before \( \alpha \) is falsified.

A key observation about this formulation is that \( \alpha \) must be a safety property, since it needs to be runtime verified; furthermore, runtime verifying \( \alpha \) must be less expensive than runtime verifying \( \beta \) itself, since otherwise we could forego model checking entirely and simply verify \( \beta \). In fact, \( \alpha \) should be a formula that is the least expensive to verify out of all the candidate \( \alpha \)s.
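Schematically (a restatement in standard notation, not a formula taken from the paper), the division of labor and its guarantee can be written as

$$\underbrace{S \models (\alpha \rightarrow \beta)}_{\text{model checked offline}} \;\;\text{and}\;\; \underbrace{\sigma \models \alpha}_{\text{runtime verified on each run } \sigma} \;\;\Longrightarrow\;\; \sigma \models \beta .$$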
Each of the examples discussed earlier fits within this conceptual framework, with each example distinguished in terms of what constitutes \( \alpha \), what constitutes \( \beta \), and what the application does if the runtime verifier discovers that \( \alpha \) is falsified.

For software contracts (and web services), \( \alpha \) dictates that the implementations of data structures (e.g., Heap) used within the application (e.g., curve-finding) are correct, and \( \beta \) is just the property of interest. Thus for this example we model check that if the data structure implementation is correct (\( \alpha \) is satisfied) then the application is correct (\( \beta \) is satisfied), and we runtime verify that the data structure implementation is correct. If the runtime verifier discovers that \( \alpha \) is falsified, we have found a bug in the data structure implementation, which is irrelevant for the purpose of curve-finding correctness.

For the race conditions example, \( \alpha \) is the condition that the number of threads throughout the execution is 2 or fewer, and \( \beta \) says there are no race conditions. Thus the model checker ensures that when the number of threads is 2 or fewer (\( \alpha \)), no race conditions hold (\( \beta \)), and we runtime verify that there are 2 or fewer threads. When the runtime verifier discovers \( \alpha \) is falsified, in this example we runtime verify \( \beta \): that no race conditions occur.

For hardware verification, \( \alpha \) is the conjunction of the developer-provided hints dictating invariants on parts of the system. \( \beta \) is the property of interest. We model check that when the developer hints hold, the property of interest also holds, and we runtime verify that the hints are correct. If the runtime verifier discovers that \( \alpha \) is falsified, we simply inform the developer that some of her hints were wrong, thereby suggesting that the hints be altered and the model checking rerun.

For the repurposed component, \( \alpha \) dictates that only a handful of system calls are ever made by the program. \( \beta \) is the property of interest. We therefore model check the system assuming that the only system calls are the ones we have identified, and we runtime verify that those are the only system calls the application makes. When \( \alpha \) is violated at runtime, we know that \( \beta \) may (or may not) be violated, and the appropriate action depends on the application.

For exceptions, \( \alpha \) dictates that no exceptions are thrown (i.e., it is the negation of the exception conditions), and \( \beta \) is the property of interest. We therefore model check the system assuming that no exceptions are thrown, and we runtime verify that no exceptions are thrown. When \( \alpha \) is violated (exceptions are thrown), a potentially different formula \( \gamma \) is likely checked, e.g., that all the appropriate files are closed and the program terminates.

In the future, we plan to investigate automated techniques for generating \( \alpha \), as well as optimizing the choice of \( \alpha \) so as to minimize the cost of its runtime monitor while simultaneously minimizing the cost of model checking \( \alpha \rightarrow \beta \). We can then utilize existing runtime verification techniques to automatically instrument the system \( S \) to monitor \( \alpha \), and existing techniques to model check \( \alpha \rightarrow \beta \).

## 6 Related Work

Below we detail work that combines static and dynamic analysis of systems. One line of work attempting to understand when and how we can verify properties of an entire state space while only exploring a portion of that space is the work on 3-valued model checking [17].
The key insight of this work is that if we know what we don't know, we can sometimes enumerate the possibilities for the unknown parts and conclude that none of those possibilities could violate the specification. Of course, sometimes we simply fail to explore enough of the space for even 3-valued model checking to help us verify the system.

One class of related work utilizes static analysis to simplify runtime verification code, e.g., [7]. After statically analyzing code to extract invariants, those invariants can demonstrate that particular runtime verification checks are unnecessary because the code enforces them directly. Similarly, [23] describes CCured, a C program transformation tool that attempts to impose a statically-checked type system on C programs. Those elements of a program that cannot be statically type-checked are outfitted with runtime monitors to catch memory errors. This work could be construed as using static techniques to simplify runtime checks as in [7]; however, the focus here is just the reverse: the runtime checks are used only when the static analysis is insufficient.

Another line of work, e.g., [5, 9, 15, 20, 28], combines model generation, model checking, and testing (which can be construed as a kind of runtime verification) to achieve better coverage of the system's state space and find bugs or provide proofs of verification.

## 7 Conclusion

We propose investigating the theory of integrating model checking and runtime verification, with the eventual goal of providing a single framework for exporting verification technology to the outside world. From the perspective of model checking, MC+RV broadens the applicability of model checking to include systems that are too large or too dynamic to be completely explored. The downside is that the entire system cannot be model checked before it is deployed, and hence verification failures may not be found until after deployment. From the perspective of runtime verification, MC+RV expands the class of properties that can be verified and reduces the overhead of runtime verification. The downside is that the MC step may find bugs that are irrelevant to the deployments of interest and incurs offline costs.

MC+RV therefore has drawbacks as well as benefits; hence, we envision the theory of MC+RV resulting in controls for the developer to choose the appropriate mix of model checking and runtime verification. For those situations where offline costs must be small, the developer can dictate that the model checking component only consider a small portion of the state space, and for those situations where verification before deployment is important, the developer can dictate that only those components of the system not known at the time of analysis should be runtime verified. MC+RV will allow developers to choose the verification approach best-suited for their application by properly balancing the tradeoffs of model checking and runtime verification.

## References

... time temporal logic. In Proc. IBM Workshop on Logics of Programs, volume 131 of Lect. Notes in ...

[14] E. A. Emerson and A. P. Sistla. Utilizing symmetry when model checking under fairness assumptions. ...

[15] Pedro Felzenszwalb and David McAllester. A min-cover approach for finding salient curves. In ...

[16] Robert Bruce Findler, Shu yu Guo, and Anne Rogers. Lazy contract checking for immutable data structures. In Olaf Chitil, Zoltán Horváth, and Viktória Zsók, editors, IFL, volume 5083 of Lecture ...

[17] Patrice Godefroid and Radha Jagadeesan. Automatic abstraction using generalized model checking. In Proceedings of the 14th International Conference on Computer Aided Verification, CAV ...
... Transactions on Software Engineering and Methodology, 2006.

[20] Monica S. Lam and Michael Martin. Securing web applications with static and dynamic information flow tracking. In ACM Symposium on Partial Evaluation and Semantics-based Program ...

[23] George C. Necula, Jeremy Condit, Matthew Harren, Scott McPeak, and Westley Weimer. CCured: Type-safe retrofitting of legacy software. ACM Transactions on Programming Languages and ...

... Proc. 7th Int. Conference on Tools and Algorithms for the Construction and Analysis of Systems ...

[26] Carl Seger. Treating constraints as components: An experiment in user control. Invited talk at the Seventh International Workshop on Constraints in Formal Verification, 2011.

[27] J. Sifakis and J.P. Queille. Fairness and related properties in transition systems – a temporal logic ...
{"Source-Url": "http://www.easychair.org/publications/?page=1307682134", "len_cl100k_base": 7143, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 30284, "total-output-tokens": 9080, "length": "2e12", "weborganizer": {"__label__adult": 0.0003736019134521485, "__label__art_design": 0.0003209114074707031, "__label__crime_law": 0.0004062652587890625, "__label__education_jobs": 0.00047087669372558594, "__label__entertainment": 6.157159805297852e-05, "__label__fashion_beauty": 0.00014984607696533203, "__label__finance_business": 0.00018227100372314453, "__label__food_dining": 0.0003285408020019531, "__label__games": 0.0005311965942382812, "__label__hardware": 0.000732421875, "__label__health": 0.0005002021789550781, "__label__history": 0.0002161264419555664, "__label__home_hobbies": 7.796287536621094e-05, "__label__industrial": 0.0003230571746826172, "__label__literature": 0.00028586387634277344, "__label__politics": 0.00026679039001464844, "__label__religion": 0.0004608631134033203, "__label__science_tech": 0.0187530517578125, "__label__social_life": 8.434057235717773e-05, "__label__software": 0.005084991455078125, "__label__software_dev": 0.96923828125, "__label__sports_fitness": 0.0003161430358886719, "__label__transportation": 0.0005807876586914062, "__label__travel": 0.00019919872283935547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39592, 0.03764]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39592, 0.29042]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39592, 0.90924]], "google_gemma-3-12b-it_contains_pii": [[0, 3310, false], [3310, 7550, null], [7550, 11000, null], [11000, 13988, null], [13988, 18054, null], [18054, 20812, null], [20812, 24645, null], [24645, 28693, null], [28693, 32499, null], [32499, 36121, null], [36121, 39592, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3310, true], [3310, 7550, null], [7550, 11000, null], [11000, 13988, null], [13988, 18054, null], [18054, 20812, null], [20812, 24645, null], [24645, 28693, null], [28693, 32499, null], [32499, 36121, null], [36121, 39592, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39592, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39592, null]], "pdf_page_numbers": [[0, 3310, 1], [3310, 7550, 2], [7550, 11000, 3], [11000, 13988, 4], [13988, 18054, 5], [18054, 20812, 6], [20812, 24645, 7], [24645, 28693, 8], [28693, 32499, 9], [32499, 36121, 10], [36121, 39592, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39592, 0.02618]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
6d94c6cfd12847287ee1bb8faef0cc3ee153d107
CHAPTER 4

EXTENDED ENTITY HIERARCHICAL RELATIONAL AXIOM REPRESENTATION OF CONTEXT

## 4.1 INTRODUCTION

Humans have always used their understanding of circumstances, or context, to navigate the world around them, to organize information, and to adapt to conditions. The behaviour of a context-aware system depends not only on its internal state and user interactions, but also on the context sensed during its execution, as noted by Ejigu et al (2008). The existing models of context representation do not fully address the basic issues related to context data, due to variations in the types of context information they can represent. Some models capture the user's current situation, while others consider the physical environment. The challenge is to adopt a more generic approach to context modeling that can capture and represent the various features of context information, including the variety of context types and their dependencies.

Context representation needs a model that represents both the data and its meaning. Semantic representation, support for processing, and conversion of data into the necessary knowledge and action are important for context-aware reasoning and decision support. In this chapter, a new approach to context representation is proposed. The proposed context meta-model uses context entities, their hierarchies, relationships, axioms and metadata as the basic building blocks for context representation.

## 4.2 CONTEXT DEFINITION

Based on Dey's (2000) generalized definition and Winograd's (2001) specification of the definition of context, context was redefined by Ejigu et al (2008). Here, context is defined as an operational term: it depends on the intention for which it is collected, and it describes the operations involved in an entity at a particular time and space, rather than the inherent characteristics of the entities and the operations themselves.

Consider a smart medical ward in a hospital where patients, nurses, physicians, etc. are involved. Assume that the ward is equipped with context sensor technologies (hardware and software) in its rooms, corridors and garden, at the disposal of the individuals involved. Patients admitted to the hospital may need intensive follow-up, which may create a staff shortage and may result in inappropriate care for those in need due to overloading. Human interventions are needed only when alerted by the system. Live multimedia recording and transmission of an event are used for monitoring purposes. Personalized attendance is given to those who are concerned, as cited in the OpenGALEN project (2007).

Classes of generic context entities for the above scenario and their sub-entities are shown in Figure 4.1.

Figure 4.1 Smart Hospital Scenarios

Context data is collected for each participating entity using hardware or software tools. The figure shows the relations between entities, including the subEntityOf relation. At the root of the hierarchy is a global entity named ContextEntity. Figure 4.1 also shows the major classes of context entity descriptors, such as personal, device, physical environment, network, activity, service, and location contexts. The Personal entity provides contexts like a person's identity, address, service preference, device ownership, activity, location, etc. The Device entity provides contexts like hardware properties, software properties, display properties, device capabilities, etc. Network entity contexts are expressed in terms of properties like delay and error characteristics, data rate, transport protocols, etc.
The Physical environment entity provides contexts like illumination, noise level, humidity, temperature, etc. The Activity entity context shows whether an activity is scheduled or not, whether it needs a special location, the type of activity, etc. The Location entity provides contexts about its containment and its situation with respect to other entities. The Service entity provides contexts about where the service is located, the type of service (data service, audio service, video service, and application service), service availability, etc.

Context can be defined at the generic and domain levels, as shown in Table 4.1. The left column of Table 4.1 shows a generic-level definition of the relationship between entities. This is equivalent to defining the domain and range of a relation. The right column of Table 4.1 shows the use of the same relationship in a specific domain of application, as discussed above in the smart hospital scenario.

Table 4.1 Generic and Domain based definition of relationships

<table>
<thead>
<tr>
<th>Generic Level Definition</th>
<th>Domain Level Equivalent</th>
</tr>
</thead>
<tbody>
<tr>
<td>Person isEngagedIn Activity</td>
<td>Physician isEngagedIn Patient-treatment</td>
</tr>
<tr>
<td>Location isLocatedIn Location</td>
<td>Canteen isLocatedIn Campus</td>
</tr>
<tr>
<td>Person isLocatedIn Location</td>
<td>Physician isLocatedIn Canteen</td>
</tr>
<tr>
<td>Network hasDataRate xxx</td>
<td>ConnectionX hasDataRate low</td>
</tr>
</tbody>
</table>

The statements in Table 4.1 may require higher-level statements about them. These can be expressed in terms of meta-statements or axioms. Information like the time of occurrence, precision, source of data, etc., can be part of this meta-information. For example, "physician isLocatedIn canteen" holds at a given time $t$, and the precision of the statement "a network connection has low speed" is 90%. The time $t$ associated with "physician isLocatedIn canteen" and the precision of 90% associated with "connection hasSpeed low" refer not to the individual components of the statements, but to the entire statements.

Regarding the axioms, if the relation isLocatedIn is transitive, then isLocatedIn obeys the transitivity axiom: for example, "canteen isLocatedIn campus" and "physician isLocatedIn canteen" together imply that "physician isLocatedIn campus". Similarly, if ownedBy is an inverse of owns, then "device ownedBy person" means "person owns device".
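Written formally (a direct restatement of the two examples just given), these axioms instantiate as:

$$\mathrm{isLocatedIn}(\mathrm{physician}, \mathrm{canteen}) \wedge \mathrm{isLocatedIn}(\mathrm{canteen}, \mathrm{campus}) \;\Rightarrow\; \mathrm{isLocatedIn}(\mathrm{physician}, \mathrm{campus})$$

$$\mathrm{ownedBy}(\mathrm{device}, \mathrm{person}) \;\Rightarrow\; \mathrm{owns}(\mathrm{person}, \mathrm{device})$$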
## 4.3 PROPOSED EEHRAM MODEL FOR CONTEXT REPRESENTATION

The proposed EEHRAM is a layered context representation meta-model that uses a set of extended entities (EE), a set of hierarchies (H), a set of relations (R), a set of axiomatic relations (A), and a set of metadata (M) to represent context data and its semantics. The proposed EEHRAM model is composed of the following components:

- EE is a set of extended entities for which context is to be captured.
- H is a set of hierarchical relations that forms an inversely directed acyclic graph (inverted DAG) on entities. These relations are binary in nature. The nodes of the graph represent entities, and the arcs of the graph represent hierarchical relations. The root entity at the top of the hierarchy graph is a global entity known as ContextEntity.
- R is the union of the sets of binary relations $R_e$ and $R_a$. $R_e$ is the set of binary relations having both their domain and range in the set $EE$. $R_a$ is the set of binary relations defined from the set of entities $EE$ to the set of literals, representing the entity attributes: the domain of a relation in $R_a$ is the set $EE$, while its range is a set of literal values.
- A is a set of axiomatic relations. An axiomatic relation is a relation about relations. Every $r_i$ in the set of relations $R$ has zero or more axioms $a_j$ in the set $A$ that $r_i$ obeys. For example, if a relation $r_1$ is defined as a transitive relation, then $r_1$ obeys the transitivity property (axiom):
  $$(e_1, r_1, e_2) \text{ and } (e_2, r_1, e_3) \Rightarrow (e_1, r_1, e_3).$$
  Similarly, if a relation $r_2$ is defined as a symmetric relation, then $r_2$ obeys the symmetry property (axiom):
  $$(e_1, r_2, e_2) \Rightarrow (e_2, r_2, e_1).$$
- M is a set of metadata about defined relation instances. The set of metadata, together with the set of axioms, equips the EEHRAM model to handle the semantics of the context data.

Hierarchy is an important structure for organizing and classifying context entities and relations into two layers. The layered organization helps to classify and tag context data as domain dependent or domain independent (generic). Figure 4.2 shows a layered graphical representation of the proposed EEHRAM structure, showing hierarchies, entities, entity relations, attribute relations, axiomatic relations, metadata and the layers (layer (a) is the generic layer and layer (b) is the domain layer).

Figure 4.2 Layered representation of the EEHRAM Components

### 4.3.1 EEHRAM Model using a Real Life Example

Consider an application in a medical domain, where the context data come from medical entities, like patients, doctors, activities and events in a hospital, devices, locations, etc. The representation of the components of the EEHRAM model, using a few example data from the application in a smart medical domain, is given in Figure 4.3.

**Entity and Hierarchy**

Activity, Person and Device are examples of generic entity classes, and they are presented in the generic layer. They are high-level context entities from which specific entities can be derived, and they are common to all domains of application. All entities in this category have a hierarchical relation named isa with the root entity known as ContextEntity. Meeting, Patient, Doctor and Phone are examples of domain entity classes in a medical application, and they are presented in the domain layer. Bob, Pascal and SPhone0095 are examples of entity instances in the medical application. Examples of hierarchical relations from Figure 4.3 are (Device, isa, ContextEntity), (Person, isa, ContextEntity), (Activity, isa, ContextEntity), (Doctor, isa, Person), (Pascal, instanceOf, Doctor), etc.

**Relation**

Relations like (Activity, hasStartTime, time), (Person, isEngagedIn, Activity), (Person, locatedWith, Person), (Person, locatedWith, Device) and (Device, hasMemory, memory) are defined in the generic layer. Such generic relations can be inherited down the hierarchy by the sub-entities and instances in the specific domain of application. Similarly, relations like (Meeting, hasEndTime, time) and (Patient, hasDoctor, Doctor) from the medical application are presented in the domain layer. They restrict the domain and the range of the relations that are inheritable down the hierarchy by entity instances.
Finally, relations such as \((\text{Bob, hasBodyTemp, 39.5})\), \((\text{Pascal, owns, SPhone0095})\), \((\text{Bob, hasDoctor, Pascal})\) and \((\text{SPhone0095, hasMemory, 400})\), defined on entity instances, represent the basic context definition formalism. Relations such as \(\text{hasEndTime, hasMemory, hasStartTime and hasBodyTemp}\) are defined as attribute relations \(\left( R_a \right)\), because their range is a set of literal values. Relations such as \(\text{hasDoctor, locatedWith and owns}\) are defined as entity relations \(\left( R_e \right)\), because they are defined from entity to entity, i.e., both their domain and range are drawn from the set of entities.

Some relations in Figure 4.3 are defined to have associated axioms, and some have metadata. Examples of relations with associated axioms are \((\text{Person, locatedWith, Device})\) and \((\text{Pascal, owns, SPhone0095})\). In Figure 4.3, \(\text{locatedWith}\) is defined to be symmetric and therefore obeys the symmetry axiom, which means the relation \((\text{Device, locatedWith, Person})\) automatically holds. Similarly, because \(\text{owns}\) is defined to be the inverse of \(\text{hasOwner}\), it obeys the inverse axiom, which means the relation \((\text{SPhone0095, hasOwner, Pascal})\) automatically holds. The relation \((\text{Person, isEngagedIn, Activity})\) has metadata that states its precision, represented by \(\text{hasPrecision}\).

4.3.2 Layers, Axioms and Metadata

The generic layer of the EEHRAM model consists of classes representing basic entities. Such classes have a *generalization* relation with the base class *ContextEntity*, which represents the EEHRAM root entity. All association relations and attributes defined on these entities apply to all sub-entities lower down in the hierarchy, and they are defined independently of the application domain. For example, if a relation \(\text{hasAddress}\) applies to the entity class Person, i.e. \(\text{hasAddress}(\text{Person, Address})\), then this relation applies to all sub-entities and instances of Person. The domain layer represents entities that define specific application domains. In the hierarchy graph, the domain layer consists of all entities that do not have a direct generalization relation with the root entity. In addition to their own relations, they inherit relations from their parent entities.

An axiom is a sentence, proposition or rule that is taken as valid and serves as a necessary starting point, in the form of a formal logical expression, for deducing and inferring logically consistent statements. Axiomatic relations can be defined at both the generic and the domain layer of the EEHRAM model.
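To make the axiom mechanics concrete, the following minimal Python sketch (the names `AXIOMS` and `saturate` are ours for illustration and not part of EEHRAM) stores context triples and expands the store until the symmetric, transitive and inverse axioms attached to each relation are satisfied:

```python
# Minimal illustration of axiomatic relations over context triples.
# Relation names and axiom tags mirror the EEHRAM examples above.

AXIOMS = {
    "locatedWith": {"symmetric"},
    "isLocatedIn": {"transitive"},
    "owns": {("inverse", "hasOwner")},
}

def saturate(triples):
    """Repeatedly apply the axioms until no new triples can be derived."""
    store = set(triples)
    changed = True
    while changed:
        changed = False
        for (s, r, o) in list(store):
            for axiom in AXIOMS.get(r, ()):
                if axiom == "symmetric":
                    derived = {(o, r, s)}
                elif axiom == "transitive":
                    derived = {(s, r, o2) for (s2, r2, o2) in store
                               if s2 == o and r2 == r}
                else:  # ("inverse", inverse_relation_name)
                    derived = {(o, axiom[1], s)}
                new = derived - store
                if new:
                    store |= new
                    changed = True
    return store

facts = {("canteen", "isLocatedIn", "campus"),
         ("physician", "isLocatedIn", "canteen"),
         ("Pascal", "owns", "SPhone0095")}
print(saturate(facts))
# derives, among the input facts, ("physician", "isLocatedIn", "campus")
# and ("SPhone0095", "hasOwner", "Pascal")
```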
Descriptions of some of the generic-level axiomatic relations, \textit{sameAs}, \textit{inverse}, \textit{symmetric} and \textit{transitive}, are given as follows:

\begin{align*}
\forall r \in R & \quad \text{symmetric}(r) \iff (\forall e_1, e_2 \in E,\; r(e_1, e_2) \Rightarrow r(e_2, e_1)) \\
\forall r \in R & \quad \text{transitive}(r) \iff (\forall e_1, e_2, e_3 \in E,\; r(e_1, e_2) \land r(e_2, e_3) \Rightarrow r(e_1, e_3)) \\
\forall r_1, r_2 \in R & \quad \text{inverse}(r_1, r_2) \iff (\forall e_1, e_2 \in E,\; r_1(e_1, e_2) \Rightarrow r_2(e_2, e_1)) \\
\forall r_1, r_2 \in R & \quad \text{sameAs}(r_1, r_2) \iff (\forall e_1, e_2 \in E,\; r_1(e_1, e_2) \Rightarrow r_2(e_1, e_2))
\end{align*}

Similarly, domain-based axiomatic relations are used to state axioms and rules from which further knowledge can be deduced during reasoning. The statement "under normal conditions, a patient is always treated by the same doctor" can be considered an axiom (assumption) in the medical domain. Using this assumption, a domain-based deduction rule can be created as follows:

\[ \forall d \text{ instanceOf Doctor},\; p \text{ instanceOf Patient}: \text{hasDoctor}(p,d) \land \text{engagedInActivity}(p, \text{takeTreatment}) \Rightarrow \text{engagedInActivity}(d, \text{giveTreatment}) \]

A degree of accuracy for such axioms or rules can be associated as metadata. *Metadata* is data about data. Metadata in context modeling is important for associating quality, precision, source, time stamps and other information with the context data; such information is needed to prepare the context data for reasoning and decisions. In the EEHRAM model, metadata is a relation that describes another relation instance. For example, given the context information "patient is located in the garden", it is possible to make further statements about this statement to answer questions such as: *Who* reported this information? *Which* service was used to report it? *When* did it happen? *How* accurate is the information? *Why* is the subject in this situation? *What* will happen next?

The different data representations, the Unified Modeling Language (UML), binary relations, and the Resource Description Framework (RDF) model, that transform the proposed EEHRAM conceptual model into concrete data representations are discussed below.

4.3.3 Mapping the Hierarchical Conceptual Model to the UML

The Unified Modeling Language (UML) is used to formalize the hierarchical conceptual context representation model. The UML is a standard specification language for object modeling and is used here as a tool for ORDBMS design. An incremental mapping of the concepts in EEHRAM to the UML is given below:

- An *extended entity* in EEHRAM can be represented as a UML class.
- The *hierarchical* relation in EEHRAM can be represented as a *generalization* relationship in the UML.
- *Entity relations* in EEHRAM can be represented as *association relationships* in the UML, and attribute relations in EEHRAM can be represented using *attributes* of a UML class.
- *Axiomatic relations* in EEHRAM can be represented as *association classes* in the UML. The concept of a metaclass can also be used to represent axiomatic properties such as the *symmetric property*, the *inverse property*, etc.
- *Metadata* in EEHRAM can be represented using *association classes* in the UML.
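As a rough illustration of how this mapping plays out in code, the following hedged Python sketch mirrors the UML structure (generalization as inheritance, attribute relations as class attributes, entity relations as object references); the class names follow Figure 4.3, while the attribute names are our own rendering and not prescribed by EEHRAM:

```python
# Sketch of the EEHRAM-to-UML mapping rendered as Python classes.

class ContextEntity:              # root entity of the hierarchy (H)
    pass

class Person(ContextEntity):      # generic-layer extended entity (EE)
    def __init__(self, name):
        self.name = name
        self.engaged_in = []      # entity relation R_e: isEngagedIn

class Doctor(Person):             # domain-layer entity via generalization
    pass

class Activity(ContextEntity):
    def __init__(self, start_time=None):
        self.start_time = start_time  # attribute relation R_a: hasStartTime

pascal = Doctor("Pascal")
meeting = Activity(start_time="2012-11-08T14:00")
pascal.engaged_in.append(meeting)     # (Pascal, isEngagedIn, Meeting005)
```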
This mapping between EEHRAM and the UML is summarized in Table 4.2.

Table 4.2 Mapping the proposed EEHRAM to the UML

<table>
<thead>
<tr>
<th>EEHRAM Component</th>
<th>Symbol</th>
<th>UML Representation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Extended Entity</td>
<td>(EE)</td>
<td>UML class</td>
</tr>
<tr>
<td>Hierarchy Relation</td>
<td>(H)</td>
<td>Generalization relationship</td>
</tr>
<tr>
<td>Entity Relation</td>
<td>(R_e)</td>
<td>Association relationship</td>
</tr>
<tr>
<td>Attribute Relation</td>
<td>(R_a)</td>
<td>Attributes of a UML class</td>
</tr>
<tr>
<td>Axioms</td>
<td>(A)</td>
<td>Association classes</td>
</tr>
<tr>
<td>Metadata</td>
<td>(M)</td>
<td>Association classes</td>
</tr>
</tbody>
</table>

The limitation of using the UML as a modeling tool for the EEHRAM components is that attribute relationships, context metadata and axiomatic relations cannot be adequately represented. In particular, representing axiomatic relations of the EEHRAM model as association classes in the UML has the drawback of repeating the same set of axiomatic relations for every occurrence of an instance of the entity. For example, the inverse axiomatic relation hasCustomer, defined on hasDriver using an association class, should be independent of every instance of the association and refer to the association itself. These limitations in mapping the UML to the proposed EEHRAM meta-model can be overcome by using extended UML features such as metaclasses and association classes, although support for representing the semantic aspects of context data remains limited. The UML is used here because it supports the proposed ontology-based object-relational database management system, and UML modeling tools can be used to formalize the representation of the EEHRAM model.

4.3.4 Mapping the Proposed EEHRAM and Relational Models

Context and context metadata can be represented using the notation and concepts of binary relations. A binary relation \( R \) is a subset of a Cartesian product \( X \times Y \), where \( X \) and \( Y \) are arbitrary sets called the domain and range of the relation, respectively. The statement \( (x, y) \in R \) is read "\( x \) is \( R \)-related to \( y \)" and is denoted by \( xRy \) or \( R(x,y) \). The order of the elements in each pair is important: if \( a \neq b \), then \( R(a, b) \) and \( R(b, a) \) can be true or false independently of each other.
Based on this definition of relations, given the set of context entities \( E \) and the set of values \( V \) drawn from the set of context entities and literal values, a relation \( R \) is a subset of the Cartesian product of \( E \) and \( V \), as shown in Equation (4.1):

\[ R \subseteq \{(e_i,v_j): e_i \in E, v_j \in V\} \quad (4.1) \]

Every meaningful statement \((e_i,v_j) \in R\) can also be written as \(\{R(e_i,v_j): (e_i,v_j) \in R\}\), or in the more linear form \(\{(e_i,R,v_j): (e_i,v_j) \in R\}\). This triple can be used to define a context (\( C \)), as shown in Equation (4.2):

\[ C \equiv \{(e_{i,k}, r_k, v_{j,k}) : e_{i,k} \in E_k, r_k \in R, v_{j,k} \in V_k\} \quad (4.2) \]

This can be extended to define context with context metadata (\( CM \)), using the basic context \( C \), a meta-relation (\( RM \)) and a meta-value (\( VM \)), as shown in Equations (4.3) and (4.4):

\[ CM \equiv \{(c_i, rm_k, vm_j) : c_i \in C, rm_k \in RM, vm_j \in VM\} \quad (4.3) \]

\[ CM \equiv \{((e_i, r_k, v_j), rm_l, vm_p): e_i \in E, r_k \in R, v_j \in V, rm_l \in RM, vm_p \in VM\} \quad (4.4) \]

Basic context data (medical application):
(Schedule, isA, Service), (TimeTable, instanceOf, Schedule), (Meeting, isA, Activity), (Meeting005, instanceOf, Meeting), (Meeting005, hasStartTime, #201211081400), (Doctor, isA, Person), (Pascal, instanceOf, Doctor), (Pascal, isEngagedIn, Meeting005)

Context with metadata:
((Pascal, isEngagedIn, Meeting005), hasSource, Agenda)
((Pascal, isEngagedIn, Meeting005), hasPrecision, yy%)

The n-ary relation (relation of degree n), which forms the basis of the relational database model, inherits its properties from binary relations. Relational models can be used to represent the entity, hierarchy, relation and metadata components of the EEHRAM model. They can also be extended to represent axioms in the EEHRAM model using definitions such as (locatedWith, is, symmetric), (locatedWith, is, transitive) and (owns, inverseOf, hasOwner). However, this is not sufficient to fully represent the semantic aspects of the context data captured in EEHRAM.

4.3.5 Mapping the EEHRAM and RDF Data Model

RDF models have been used to represent semantic metadata in different application domains. The work by Bouzeghoub (2004) uses the RDF semantic description model to allow the reuse and assembly of learning objects that represent pedagogical materials available on the web; its core element is a representation of semantic metadata that allows the description of the domain model, the user model and the learning-object model. In this section, the RDF and its extensions towards a generic context meta-model are discussed.

The primary characteristic of context data is that it possesses an actor or subject (an entity). The context value defined on the subject is expressed in terms of multiple properties, and the terms predicate and object are used to represent the situation of the subject with respect to a specific property. This convention matches the RDF triple representation formalism, $<\text{subject}, \text{predicate}, \text{object}>$, which in turn maps all types of relations in the proposed EEHRAM model. Additional context metadata can also be included as part of the context data: beyond the subject, predicate and object triples, context modeling requires context metadata to extend the context model to historic, probabilistic or confidence-carrying models.
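As a minimal sketch of Equations (4.2) and (4.4), assuming a plain Python tuple encoding (the helper name `metadata_for` is ours), context triples and their metadata can be nested so that a whole triple becomes the subject of a meta-statement:

```python
# Context per Equation (4.2): (entity, relation, value) triples.
c1 = ("Pascal", "isEngagedIn", "Meeting005")

# Context metadata per Equation (4.4): ((e, r, v), rm, vm) -- the whole
# context triple is the subject of each meta-triple.
cm = [
    (c1, "hasSource", "Agenda"),
    (c1, "hasPrecision", 0.90),
]

def metadata_for(context_triple, meta_store):
    """Return all (meta-relation, meta-value) pairs attached to one triple."""
    return [(rm, vm) for (c, rm, vm) in meta_store if c == context_triple]

print(metadata_for(c1, cm))  # [('hasSource', 'Agenda'), ('hasPrecision', 0.9)]
```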
Such attributes are meaningful only when they refer to a particular instance of a triple, not to its individual elements. To describe this situation, the RDF and its extension called RDF reification (W3C, 2007) are used. Reification is used to represent facts that must then be manipulated in some way, for example to compare logical assertions from different witnesses in order to determine their credibility. The message "John is six feet tall" is an assertion of truth that commits the sender to the fact, whereas the reified statement "Mary reports that John is six feet tall" defers this commitment to Mary. In the same way, reified RDF data contains each original statement as a resource, together with the additional statements made about it. The four properties used to model the original statement as an RDF resource are subject, predicate, object and type. A new resource with these four properties represents the original statement and can itself be used as the subject or object of further statements.

Figure 4.4 demonstrates a context metadata representation using statement reification, based on the example triple statement "Bob is located in the Library". This statement can be reified with additional meta-information such as "is reported by sensor#5", "has accuracy of 88%" and "has occurred at 11:40 today".

Figure 4.4 Context metadata represented using reification

The RDF reification principle can thus be used to attach additional context attributes to the basic context triples. The RDF is also one of the major building blocks of formalisms for representing ontologies, which provide features for defining and representing axioms; therefore, axioms can be represented using RDF/OWL formalisms. Details on mapping the EEHRAM to an ontology are discussed in Chapter 5.

4.4 CONCLUSION

In this chapter, the context representation model EEHRAM was proposed. The components of the EEHRAM model (entities, hierarchy, relations, axioms and metadata) are derived directly from the definition of context. A generic EEHRAM graph is used to represent the abstract conceptual model. This keeps the model simple, as it follows the abstraction and conceptualization of context data and its semantics in the form of axioms and metadata. Different data representation formalisms, the UML, binary relations with their extended meta-form, and the RDF model, can be used to convert the EEHRAM conceptual model into concrete data representations. A more comprehensive approach, which maps the proposed EEHRAM model onto standard data management structures that support the storage and processing of both the context data and the context semantics, is presented in the next chapter.
{"Source-Url": "http://shodhganga.inflibnet.ac.in/bitstream/10603/24766/9/09_chapter4.pdf", "len_cl100k_base": 6141, "olmocr-version": "0.1.53", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 35967, "total-output-tokens": 6798, "length": "2e12", "weborganizer": {"__label__adult": 0.0003750324249267578, "__label__art_design": 0.0006871223449707031, "__label__crime_law": 0.0007200241088867188, "__label__education_jobs": 0.0033397674560546875, "__label__entertainment": 9.47117805480957e-05, "__label__fashion_beauty": 0.0002290010452270508, "__label__finance_business": 0.0007219314575195312, "__label__food_dining": 0.0004363059997558594, "__label__games": 0.0006289482116699219, "__label__hardware": 0.0011377334594726562, "__label__health": 0.001809120178222656, "__label__history": 0.0004968643188476562, "__label__home_hobbies": 0.00016891956329345703, "__label__industrial": 0.00069427490234375, "__label__literature": 0.0008916854858398438, "__label__politics": 0.00033354759216308594, "__label__religion": 0.0005993843078613281, "__label__science_tech": 0.314453125, "__label__social_life": 0.00017940998077392578, "__label__software": 0.03570556640625, "__label__software_dev": 0.63525390625, "__label__sports_fitness": 0.00028204917907714844, "__label__transportation": 0.0006871223449707031, "__label__travel": 0.0002484321594238281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25866, 0.01913]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25866, 0.64972]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25866, 0.88438]], "google_gemma-3-12b-it_contains_pii": [[0, 1178, false], [1178, 2676, null], [2676, 3525, null], [3525, 5325, null], [5325, 6792, null], [6792, 8252, null], [8252, 8900, null], [8900, 9718, null], [9718, 11786, null], [11786, 13893, null], [13893, 15556, null], [15556, 16643, null], [16643, 18808, null], [18808, 20652, null], [20652, 22048, null], [22048, 23833, null], [23833, 24683, null], [24683, 25334, null], [25334, 25866, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1178, true], [1178, 2676, null], [2676, 3525, null], [3525, 5325, null], [5325, 6792, null], [6792, 8252, null], [8252, 8900, null], [8900, 9718, null], [9718, 11786, null], [11786, 13893, null], [13893, 15556, null], [15556, 16643, null], [16643, 18808, null], [18808, 20652, null], [20652, 22048, null], [22048, 23833, null], [23833, 24683, null], [24683, 25334, null], [25334, 25866, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25866, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25866, null]], "pdf_page_numbers": [[0, 1178, 1], [1178, 2676, 2], [2676, 3525, 3], [3525, 
5325, 4], [5325, 6792, 5], [6792, 8252, 6], [8252, 8900, 7], [8900, 9718, 8], [9718, 11786, 9], [11786, 13893, 10], [13893, 15556, 11], [15556, 16643, 12], [16643, 18808, 13], [18808, 20652, 14], [20652, 22048, 15], [22048, 23833, 16], [23833, 24683, 17], [24683, 25334, 18], [25334, 25866, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25866, 0.16058]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
9e5fcb39104f87c279cb4bc42d3c2c6fd74bfb30
CONTROL PATTERNS – BRIDGING THE GAP BETWEEN IS CONTROLS AND BPM

Schäfer, Thomas, Institute for Information Systems (IWi) at the German Research Center for Artificial Intelligence (DFKI) and Saarland University, Campus, Building D32, 66123 Saarbrücken, Germany, thomas.schaefer@iwi.dfki.de

Fettke, Peter, Institute for Information Systems (IWi) at the German Research Center for Artificial Intelligence (DFKI) and Saarland University, Campus, Building D32, 66123 Saarbrücken, Germany, peter.fettke@iwi.dfki.de

Loos, Peter, Institute for Information Systems (IWi) at the German Research Center for Artificial Intelligence (DFKI) and Saarland University, Campus, Building D32, 66123 Saarbrücken, Germany, peter.loos@iwi.dfki.de

Abstract: While playing an increasingly important role across various industries, the efficient management of legal and regulatory compliance requirements remains a challenge in modern organizations. Commonly, compliance is handled by separate organizational units and is not well integrated with core business processes. Building on the area of information systems (IS) controls as mechanisms to fulfil given requirements, this paper proposes a concept to bridge this gap between regulatory compliance and business process management. The presented concept enables linking process models with internal control systems and provides a common language for all parties involved. An approach is developed to identify and extract reusable control elements for regulatory compliance from real-world process specifications. These artefacts are formalized as "Control Patterns" (CPs), and inductive analysis methods are employed to identify reusable patterns. As a proof of concept, based on a sample of proven implementations of the software change management process, the paper analyses possibilities to extract, generalize and explicitly model IS controls.

Keywords: Business Process Management, Process Modelling, Internal Controls, Compliance, Control Patterns, IS Audit and Control

1 Introduction

Process complexity is continuously increasing in modern organizations. At the same time, organizations must adhere to a rapidly growing plethora of internal as well as external laws, policies, directives and regulations (Caldwell, 2012). Professional media identify this as a top driver for additional information security spending in 2012 (Schwartz, 2011). A common way to tackle compliance requirements is the setup of an internal control system (ICS, see 2.2).
For this, the requirements – which are usually presented in unstructured text form – are first transposed and grouped into "control objectives" (COs). For each objective, a set of measures is established that supports the achievement of the CO; these measures are denoted "controls". Organizations nowadays face the problem that the link between their ICS and their business processes is not well established. This starts with a gap at the design stage: on the one hand, business processes are graphically modelled by BPM (business process management) experts in cooperation with the responsible business owners, using dedicated methods and tools such as EPC (Event-driven Process Chain) or BPMN (Business Process Model and Notation). On the other hand, compliance and control aspects, which form the ICS, are managed by organizationally separate compliance and risk teams. The ICS is formalized separately in the shape of textual representations, often based on common frameworks such as COSO (COSO, 1992), COBIT (ISACA, 1993) and ISO 27001 (ISO, 2005), or on control programs (e.g. SAS 70, ISAE 3402, SOx 404). An integrated perspective is not available in this case.

Business process compliance (BPC, see 2.3) as a research domain aims at providing such integration mechanisms (Elgammal et al., 2011). In addition to traditional process models, compliance requirements are expressed in the shape of rules and formalized in declarative languages such as LTL, FCL or CTL (Muehlen et al., 2007). Depending on the chosen method, these rules then have to be followed during the process modelling stage (design-time BPC) or the process execution stage (runtime BPC) (Governatori et al., 2008; Lu et al., 2008). Though research has been performed in this area for several years (Pesic, 2007) and promising approaches are available, such declarative formalization is still rarely used for managing regulatory compliance in practice. A limitation of current BPC techniques is the significant complexity and effort needed to formalize rules and maintain the rule base (El Kharbili et al., 2008). The transfer of compliance requirements, e.g. law texts, into suitable formal rules is not trivial, and in practice organizations often lack the resources to accomplish this. Legal and compliance departments have the skills to interpret law texts but lack knowledge about process and rule modelling; the reverse holds for BPM departments. This indicates that it is necessary to develop further concepts which incorporate compliance and control requirements directly into process management in an efficient and accessible way.

The Control Patterns (CPs) presented here can be seen as a complementary approach to existing BPC research. Traditional BPC approaches often declare automated compliance validation of business processes as their primary objective and thus build upon strict formal models, which aim at facilitating later IS implementation. This comes at the price of the usability and maintenance issues mentioned above, especially for complex business processes and compliance topologies. CPs intend to bridge the gap between BPM and compliance from another angle: as a starting point, the CP idea uses the currently most common business practices for process modelling (graphical activity sequences) as well as for compliance management (the internal controls approach). By combining these two worlds, CPs provide a common ground for all stakeholders involved in BPM and compliance topics.
As such, the CP approach provides a benefit of its own, but it might as well serve as an intermediary step that puts organizations into a position where they are better able to leverage the potential of existing BPC approaches, ultimately heading towards extended compliance automation.

The intention of this paper is two-fold: first, the idea of Control Patterns shall be introduced and discussed; second, to prove the practicability of the CP idea, an approach shall be presented for deriving a CP from a set of process models in a proof-of-concept exercise. Hence, the following two questions define the research goal:

a) Is it possible to transfer the "design pattern" idea to the cross-section of BPM and compliance in order to achieve better integration between current practices in both domains?

b) If Control Patterns can contribute here, how can they be created based on existing process implementations?

Subsequent to this introduction, chapter 2 recapitulates relevant terminology and related research. Chapter 3 describes the chosen research approach based on inductive analysis. Building upon this, chapter 4 examines possibilities to extract and generalize internal controls based on real-life process specifications. The control aspects are formalized in the shape of reusable, modular process elements for compliance-aware process design and improvement. As a proof of concept, the Software Change Management process, a palpable example from the IS controls domain, is investigated. Three implementations of this process, which are deployed and proven in practice at three different organizations, are examined, and based on this a generalized Control Pattern "Production Deployment" is derived in chapter 5. The results are summed up in chapter 6 and an outlook is given.

2 Terminology and related research

2.1 Business Process Management

Houy et al. (2010) performed a literature review on business process management. According to this, a business process can be understood as a chronological sequence of activities to fulfil a business task, during which value is delivered by the transformation of materials or information. Business process management denotes a set of methods, techniques and software tools to support the design, implementation, monitoring and analysis of operational business processes in order to facilitate optimized value creation (van der Aalst, 2013). Current research activities support an evolutionary view, where BPM itself is conducted as an iterative process following a lifecycle model to facilitate continuous improvement of business processes (Scheer and Brabänder, 2010).

2.2 Compliance and Internal Control Systems

Compliance is defined as "ensuring that business processes, operations and practice are in accordance with a prescribed and/or agreed set of norms" (Sadiq and Governatori, 2010). This is to be clearly distinguished from another understanding of the term "compliance" common in BPM research, where "process compliance" denotes the alignment of process instances to their respective model, or of a model to its meta-model (Chesani et al., 2008). In the given sense of regulatory compliance, it encompasses an iterative compliance and risk management process, including the implementation of detective, preventative and compensating measures, so-called "controls", to fulfil compliance requirements. The totality of such controls constitutes an organization's internal control system (ICS).
The Committee of Sponsoring Organizations of the Treadway Commission established the de-facto ICS standard in 1992 with its COSO framework (COSO, 1992). An ICS according to COSO strives to establish a process which allows a valid assessment of the effectiveness and efficiency of business operations, the reliability of financial statements, and compliance with given regulation.

2.3 Business Process Compliance

Business process compliance constitutes an important element at the junction of BPM and compliance. Conceptually, BPC denotes the execution of business processes in adherence to applicable internal and external regulations and as such represents an integrated view on business processes and compliance. El Kharbili et al. (2008) reviewed the state of the art of business process compliance checking. They distinguish three general validation mechanisms for BPC: while the "design-time" approach validates process models during the modelling phase to identify compliance conflicts, the "runtime" approach inspects individual process instances during execution via process monitoring in order to highlight potential discrepancies against a predefined set of rules. "Backward" validation, as the third concept, follows a retrospective approach and uses data and process analysis methods to detect potential compliance violations ex post. Compliance requirements are often expressed in the shape of rules and formalized in declarative languages such as Event Calculus, LTL, FCL or CTL (Governatori et al., 2008; Lu et al., 2008; Sadiq and Governatori, 2010; Muehlen et al., 2007; Maggi, 2011; Pesic, 2007). Although they acknowledge the relevance of formal modelling, El Kharbili et al. (2008) view the complexity of current solutions and the prior knowledge required from users as a significant adoption barrier.

2.4 Patterns

Patterns, in the sense of observable regularities in a certain environment, are discussed in various research disciplines. A specific class of patterns are "design patterns", which are intended to serve as modular templates for future real-world applications in a "good practice" style. The current understanding of design patterns originated from the field of architecture and construction (Alexander et al., 1978). The concept gained increased attention in computer science and IS research through the well-known software engineering design patterns (Gamma et al., 1995), and with "Workflow Patterns" (van der Aalst et al., 2003) it was transferred to the BPM domain. As stated before, this paper transfers the basic design pattern idea, in the shape of "Control Patterns", to the domain of internal controls and regulatory compliance of business processes. The term "Control Pattern" is used in a similar context by Namiri and Stojanovic (2007), yet they follow a different approach, as they primarily focus on so-called "application controls", which can be implemented hard-coded into application systems to automatically support selected compliance requirements. Similarities also exist between CPs and the concept of Compliance Fragments (Schumm et al., 2010), although the approach to pattern generation and the area of application differ, with the latter aiming at dynamically hiding process parts, e.g. in an outsourcing scenario.
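Before turning to the research approach, a deliberately simplified illustration (our own, not taken from the cited works) may help to ground the rule formalization discussed in section 2.3: the LTL-style rule "no deployment until an approval has occurred", checked ex post against an event trace in the "backward" validation sense.

```python
# Backward-validation sketch: check ex post that every "deploy" event in a
# process instance's trace was preceded by an "approve" event (roughly the
# LTL rule: no "deploy" until "approve").

def deployments_approved(trace):
    approved = False
    for event in trace:
        if event == "approve":
            approved = True
        elif event == "deploy" and not approved:
            return False  # compliance violation: unapproved deployment
    return True

print(deployments_approved(["request", "test", "approve", "deploy"]))  # True
print(deployments_approved(["request", "deploy", "approve"]))          # False
```

Even this toy rule hints at the formalization effort a realistic rule base demands, which motivates the complementary CP approach developed in the following.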
3 Research approach

As stated before, this paper aims at the creation of generic, reusable process snippets, denoted Control Patterns, which may afterwards be drawn upon to support regulatory compliance in business processes. To achieve this, a technique from the domain of pattern research is applied, building upon the "three-occurrences rule" (Winter, 2009): a solution that has been deployed in three different situations can be considered a pattern. Following an inductive research approach, CPs are extracted through the analysis of a set of real-world process implementations. Figure 1 provides an overview of the approach, including references to where the respective steps are discussed for the given case study.

Figure 1: Research Approach

As a first step, the target process to be examined has to be chosen. Then (at least) three exemplary real-world implementations of this process have to be acquired. The sample implementations have a critical influence on the quality of the patterns potentially derived from them; hence, characteristics of the sample organizations (e.g. industry, size, culture) are to be taken into account, and mature, proven implementations should be preferred. As a sub-step, the representation of the sample processes has to be unified: they are (re-)modelled by a process expert in a common modelling notation such as EPC or BPMN and reviewed by a second person for consistency.

As a third step, from an internal controls perspective, a control objective against which the process implementations are evaluated needs to be formulated. This is done based on the analysis of law texts or compliance requirements for the domain of the sample process. If standards or generalized control frameworks (e.g. COBIT, COSO) are available for the sample process domain, the control objectives defined within them can be drawn upon. As shown in chapter 4.2, such a control objective will usually be multi-faceted, i.e. it will state more than one elementary requirement to be fulfilled.

In a fourth step, all sample process implementations are reviewed by a skilled person for elements supporting the defined control objective, the so-called "control elements". These are highlighted in the processes, referenced, and collected in a tabular overview, including information on which aspects of the given, typically multi-faceted control objective they support. This step is largely similar to what auditors usually do in practice during process assessments. The outcome of this step can be improved by a second, independent review of the sample processes and subsequent matching of the results.

When all sample processes have been reviewed, in step five a summarized set of all identified control elements (indicating the control objective aspects they support) is created. Where similar control elements exist that originate from different sample processes but represent the same subject (e.g. "user acceptance test" vs. "perform end user test"), these are merged into a single generalized control element, and it is documented in which sample processes they occur (cp. section 5.1).

In the final step six, generalized Control Patterns are derived from the prepared data. A threshold t is defined, with t being the minimum number of sample processes in which a control element must occur in order to qualify for inclusion in a Control Pattern.
The reasoning behind this is that a control element which has been found in many of the sample processes is likely to be relevant for other, similar process implementations in the future. Once the relevant control elements are identified, a process expert again reviews those areas of the sample processes where the relevant control elements occur. Control elements which are closely related in the process model (from a graph perspective) are grouped together. Model areas with a high density of relevant control elements are primary candidates for the deduction of Control Patterns. For such cases, the process expert attempts to derive a generalized process part from the given sample implementations which still reflects the respective group of control elements – a Control Pattern. This pattern is then validated by a second skilled person, both against the source process implementations and for its support of the defined control objective. With this model, there is an n:m relationship between identified control elements and potentially derived Control Patterns.

Bearing the aim of practical applicability of the Control Patterns in mind, the following rules have been set up:

1. Do not create trivial patterns which represent only one control element. This may lead to an abundance of patterns which are difficult to manage and to use (trivial patterns might nevertheless be interesting from an academic perspective, e.g. as "base" patterns).

2. Do not create overly complex patterns, e.g. patterns which represent a whole process (even if many control elements are sequentially linked, split the models in such cases). Such patterns are not easy to understand, to reuse and to integrate as fragments into existing processes. In addition, generalization from sample processes becomes more complex.

From the rules above follows, as a suggested rule of thumb, that a Control Pattern should contain 3-5 process steps with 4-10 model elements.

4 Software Change Management – control elements

4.1 Process Selection and Sample Acquisition

In order to discuss common IS control elements inside processes, which can later be generalized into CPs, an exemplary evaluation is performed based on the Software Change Management (SWCM) process (also referred to as the Release Management or Software Deployment process). According to the first step of our research method, this process was chosen because it can be found in many organizations and features several control elements. In short, it describes how requested software or program changes to application systems are migrated to production. Note that, from an IS control perspective, this is to be distinguished from the general software development process – the two are closely linked, but not the same.

As defined by step 2 of the research method (section 3), three real-world SWCM process implementations were collected (two IT service providers and one financial institution; two organizations with 100-500 employees, one with more than 1000 employees). At all companies, the reviewed SWCM process had been in operation for at least three years at the time of the review. Given this, together with the size of the source organizations and the level of regulation in their domains, the maturity of the sample processes is considered sufficient for the chosen research approach. The sample processes were initially represented as textual descriptions or (semi-formal) process models. In a first step, these representations were transposed by experienced modellers into EPCs to create a common basis for the subsequent analysis.
In the EPCs, trivial events are omitted, as is common practice to avoid model pollution.

4.2 SWCM Control Objective

Subsequently, step 3 of the research method (section 3) requires identifying a relevant control objective. From a controls perspective, the SWCM process belongs to the domain of information security management. Thus it is reasonable to identify those elements in the sample processes which support the goals of information security\(^1\), e.g. the well-known "C-I-A" triad of confidentiality, integrity and availability (Solomon and Chapple, 2004). "Confidentiality" means that only authorized people or systems have access to information. "Integrity" stands for the correctness of information and processing, including that information cannot be altered undetected. "Availability" describes the aim of having information available when and where it is needed. Internal control systems generally define a set of subsidiary control objectives to achieve such overall goals. As one such example, the SWCM control objective used here is derived from a common framework in the domain, COBIT (ISACA, 1993). It shall be as follows:

*Controls provide reasonable assurance that changes to the production environment are approved, prioritized, tested and documented.*

In practice, this would be the starting point for an auditor reviewing an organization's SWCM process. According to step 4 of the research method (section 3), the SWCM control objective is used as a reference for the analysis of the three sample processes. In the following chapter, the evaluation procedure is described in detail based on one of the sample process implementations; corresponding evaluations for the two other samples can be provided on request. The process description is accompanied by a figure showing a process overview, in which the control elements discussed after each description are marked with circles containing counters as identifiers, referenced in the format [Process Number].C[Control Number], e.g. "I.C4".

\(^1\) It is not feasible to discuss the topic of information security, related control objectives and the C-I-A triad on a technical level in detail within the given scope. Please refer e.g. to Solomon and Chapple (2004) for further insights. For the purpose of this paper, information security serves as one example control domain, having confidentiality, integrity and availability of information systems as its major goals.

4.3 SWCM Process Analysis

4.3.1 SWCM I. – Process Description

In the presented sample SWCM process (Figure 2), a business department starts the process by requesting a software change. This leads to the initiation of a linked sub-process for software development, with a new or changed component developed as its result. The Business Analyst (BA), as the person responsible for the change, then triggers the deployment by filling in a dedicated change management paper form. Based on this, the IT Operations department installs the changed software in a test environment. When this is done, the BA performs a unit test. If the test fails, this is documented, the current change is closed and a new change is requested. Otherwise, if the unit test is successful, a User Acceptance Test is to be performed by the affected business users and a sign-off is requested. If this test is successful, the process proceeds to the next phase: in coordination with the involved entities, the BA schedules the go-live of the given change.
A distinction is made between changes that are put into production individually (high urgency) and changes that are accumulated and bundled into periodic software releases for go-live. After all relevant information has been collected, the change form is given to the supervising Change Manager for final approval of the change. The current process then either links to the release management process, or IT Operations carries out the actual implementation. Subsequently, the Change Manager verifies the change and, depending on the change impact, business departments perform backtesting (validation) activities. Finally, the BA closes the change instance.

4.3.2 SWCM I. – Control Elements

Based on the previous process description, a selection of control elements can be identified which support the control objective (CO) defined in section 4.2. As a reminder, it states that changes shall be appropriately approved, prioritized, tested and documented.

The control denoted I.C1 in Figure 2 refers to the fact that the process model explicitly includes a paper form to document software changes. This supports a structured, repeatable process and helps to collect all relevant information, which obviously supports the documentation requirement of the CO. Furthermore, this document constitutes the reference point for all approvals during the process. The next control, I.C2, highlights the fact that with the BA a defined individual takes responsibility for a change. This ensures follow-up and timely processing of the change, both goals within the given CO. The usage of a test system constitutes control I.C3. The next control, I.C4, relates to the user acceptance test required during the given sample process. This again supports the CO requirement to test changes; more importantly, it is a crucial step for appropriate change approval before go-live, another core aspect of the CO. The relevant business departments perform the tests here. The following control, I.C5, is concerned with the scheduling of changes and as such corresponds to the change prioritization required by the SWCM CO. It could be argued that this point is late for prioritization, as considerable resources have already been invested in a change by this stage; still, earlier prioritization may already have taken place in the hidden sub-process for software development. Regardless of this, the prioritization of change implementation is an important element at this process stage, as it decides whether changes go live directly or as part of a future release, potentially several months later. After all preparation steps have been performed, the Change Manager, as a supervising entity for all changes, is asked for final approval before go-live in I.C6. This clearly supports the CO objective of proper change approval, and the completeness of the documentation is checked. The next control in Figure 2, I.C7, points out that the actual implementation of a change is performed by dedicated IT Operations personnel. Though not very obvious at first glance, this is a very valuable control: the capability to implement changes in a production system can thus be limited to a small group of people. This limits the risk of transporting unauthorized software changes or even malware to production systems, which could severely impact system confidentiality, integrity and availability. Without such proper segregation of duties, developers might, for example, unintentionally move unapproved test software to production.
The last control aspect highlighted in the process, I.C8, covers ex-post controls for changes: the change verification by the Change Manager as well as a backtesting procedure on the business side support the integrity of the system.

5 Control Patterns

5.1 Control Elements Consolidation

After all sample processes have been analysed, the identified control elements are collected in a common table, as described in step 5 of the research method (section 3). As a reminder, elements appearing in multiple processes are combined into a generalized control element. Table 1 presents an overview of the identified control elements and the process samples in which they occur. It lists both the link to the high-level "C-I-A" information security goals and the specific SWCM control objective aspects as they were identified during the assessment (see chapter 4.3.2). The overview visualizes the common ground between the three process representations. It also shows where the position and order of certain control elements in relation to the process sequence vary between the sample processes.

The reviewed sample SWCM processes differ in perspective: while SWCM I., which was presented in detail in section 4.3, puts an emphasis on the testing and rollout of a change after development, the second implementation, SWCM II., rather takes a developer's perspective and aims at ensuring that all relevant approvals have been given and proper prioritization has been performed before a change is developed. A specific feature of SWCM III. is that it is the only process with an explicitly modelled backout procedure; furthermore, it goes beyond the perspective of a single change and indicates surrounding support processes. None of the sample processes explicitly covers all of the extracted control elements, and some of these differences might indicate potential areas for improvement in the individual processes. The previously extracted control elements can be used as a foundation to develop generalized process fragments, as will be shown in the subsequent chapter.
<table>
<thead>
<tr>
<th>Control Element¹</th>
<th>IS goal²</th>
<th>SWCM CO³</th>
<th>SWCM I.⁴</th>
<th>SWCM II.⁵</th>
<th>SWCM III.⁶</th>
</tr>
</thead>
<tbody>
<tr><td>a) Standardized, structured case documentation</td><td>I, A</td><td>d</td><td>I.C1</td><td>II.C3</td><td>III.C1</td></tr>
<tr><td>b) Timely case processing through establishment of case ownership</td><td>A</td><td>p</td><td>I.C2</td><td>II.C1</td><td>III.C2, III.C6</td></tr>
<tr><td>c) Business prioritization for change</td><td>I, A</td><td>p</td><td>-</td><td>II.C4</td><td>-</td></tr>
<tr><td>d) Business approval for change before development</td><td>I</td><td>a</td><td>-</td><td>II.C2, II.C4</td><td>III.C3, III.C4</td></tr>
<tr><td>e) Staging concept – usage of dedicated test infrastructure</td><td>C, I, A</td><td>t</td><td>I.C3</td><td>-</td><td>-</td></tr>
<tr><td>f) Testing (general)</td><td>I, A</td><td>t</td><td>I.C3</td><td>II.C9</td><td>III.C5</td></tr>
<tr><td>g) User acceptance testing and business sign-off</td><td>I</td><td>t, a</td><td>I.C4</td><td>-</td><td>-</td></tr>
<tr><td>h) Technical scheduling of implementation in production</td><td>I, A</td><td>p</td><td>I.C5</td><td>II.C6, II.C8</td><td>III.C4</td></tr>
<tr><td>i) Final go-live approval by supervision entity</td><td>I, A</td><td>a</td><td>I.C6</td><td>II.C7</td><td>-</td></tr>
<tr><td>j) Communication of change schedule to relevant entities</td><td>I, A</td><td>a, p</td><td>-</td><td>II.C8</td><td>-</td></tr>
<tr><td>k) Change migration to production system performed by designated personnel (distinct from development)</td><td>C, I, A</td><td>t, d, a</td><td>I.C7</td><td>-</td><td>-</td></tr>
<tr><td>l) Ex-post change verification</td><td>C, I</td><td>a, d</td><td>I.C8</td><td>II.C10</td><td>III.C6</td></tr>
<tr><td>m) Defined recovery procedures for unsuccessful changes</td><td>I, A</td><td>d</td><td>-</td><td>-</td><td>III.C7</td></tr>
<tr><td>n) Validation of documentation completeness</td><td>I, A</td><td>d</td><td>I.C8</td><td>II.C5</td><td>III.C6</td></tr>
</tbody>
</table>

Table 1: Overview of control elements

¹ short text description of the control; ² overall information security goals supported by the control: (C)onfidentiality, (I)ntegrity, (A)vailability; ³ supported aspects of the Software Change Management control objective: changes shall be (a)pproved, (p)rioritized, (t)ested and (d)ocumented; ⁴⁻⁶ reference to the matching control in each sample process, if existent, e.g. I.C1, III.C6

5.2 Control Patterns – Generalization

As already stated, it is the intention of this work to leverage design patterns – in the sense of "formalized best practices" (Winter, 2009) for given problems – to better integrate BPM and compliance requirements. Following this, a Control Pattern is an abstract process building block introducing an "internal control system" perspective into process modelling. CPs are supposed to be used as partial templates at the process design stage or as guidelines during process review and improvement exercises. A pattern may contain various process elements as defined in the modelling language used, e.g. functions, events, systems, organizational units and flow control elements. Beyond this, it is always extended with information relevant from an ICS perspective. This includes the supported overall domain goals (here the information security "C-I-A" triad) as well as the supported aspects of the concrete control objective (here the SWCM CO with its "approved, prioritized, tested and documented" aspects). This makes it possible to create reference catalogues of such Control Patterns, structured by domain, control objectives and supported aspects. Based on given compliance requirements, appropriate patterns can be identified and thereupon employed for process design and improvement. A CP is designed to be reusable and thus remains abstract to a certain degree.

This shall be illustrated with an example CP "Production Deployment", which is derived according to step 6 of the research method (section 3): from the consolidated list of control elements (Table 1), those elements are considered for CP generalization which occur in at least two of the three sample processes (threshold t=2).
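This selection step can be sketched in a few lines of Python over the occurrence counts from Table 1 (the dictionary encoding is our own):

```python
# Number of sample processes (out of three) in which each control
# element a)-n) from Table 1 occurs.
occurrences = {
    "a": 3, "b": 3, "c": 1, "d": 2, "e": 1, "f": 3, "g": 1,
    "h": 3, "i": 2, "j": 1, "k": 1, "l": 3, "m": 1, "n": 3,
}

t = 2  # threshold: minimum number of sample processes
relevant = sorted(e for e, count in occurrences.items() if count >= t)
print(relevant)  # ['a', 'b', 'd', 'f', 'h', 'i', 'l', 'n']
```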
When locating those "relevant" control elements in the sample processes, a grouping can be identified towards the end of the process models, representing control elements i), l) and n) – while j), k) and m) are not considered due to the threshold (see Table 1). As described by the method, a generalized process part is derived from the given samples which still reflects the respective group of control elements. The resulting CP is shown in Figure 3.

Figure 3: Control Pattern "Production Deployment"

The CP expresses the "distilled" requirements i), l) and n), which shall be fulfilled in this context:

- before a change is moved to production, the go-live has to be approved to avoid conflicts
- structured case documentation is required for all steps in the pattern, e.g. the approval should be given based on sound change documentation
- an entity independent of the change implementer should verify ex post that the change has been deployed correctly in the production system, as defined in the change documentation

The visualization of internal control requirements as a process building block makes the requirements easier to understand and facilitates the consideration of control requirements during process design. Depending on an organization's individual needs, various CPs might be chosen from available reference catalogues and combined as required. As the patterns serve as generic templates, they may be adapted according to individual demand. In case of significant changes to a pattern, it is advisable to reflect on whether the pattern still fulfils its designated control objective as expected.

Conceptually, CPs offer high flexibility concerning the abstraction level. It may vary depending on the intended purpose, i.e. be domain specific (as in the information security example above) or more general, like the following: a generic Control Pattern "Testing", for example, could be defined in a way that allows its use in control objectives for software development, hardware change management, project management, product design and so on. An even more abstract CP could define a pattern like "Authorization for subsequent process step", based on the completeness and correctness of input information and the appropriate role for authorization. More general patterns require higher expertise when they are integrated into a process; in return, they allow more flexibility regarding application and can be transferred to new domains or control objectives where specific patterns may not be available.

6 Conclusion and Outlook

This paper proposes an approach to extract and generalize internal controls based on real-life process specifications, make them explicit, and harness the results in the shape of reusable patterns for process improvement. It is shown how the idea of "design patterns" can be transferred to the domain of BPC with Control Patterns. A method for pattern creation is developed and applied in a proof-of-concept case. Though the conducted case study extracted a broad set of control elements from the sample processes, which could be used for generalization, it showed at the same time that, from a controls perspective, all three real-world processes had deficiencies and lacked some controls modelled in the other cases. This makes clear that concepts which consider control aspects during process design offer potential for process improvement. Explicit modelling of control elements in process representations can help to support transparency between business processes and compliance requirements.
Research in this area is still evolving. BPC offers well-grounded concepts for the formalization of compliance rules and for linking these to processes. However, the formalization of requirements as formal rules involves a significant initial evaluation and modelling effort, combined with ongoing maintenance; in addition, it relies on intensive cooperation between organizational units with special skill sets. Thus, many companies are reluctant to invest in such an integrated approach and continue to manage business processes and their internal control system separately from one another. CPs may contribute to mending this issue by taking real-world control elements and making them reusable as structures defined with well-proven BPM modelling techniques. As a result, the usage of CPs facilitates the design of compliance-aware processes.

As with all pattern approaches, CPs are not finished designs which can be implemented 1:1 in processes. They are to be considered blueprints for how to solve a certain problem, i.e. how to support a given control objective. This allows a high degree of freedom concerning their implementation. CPs may be used one at a time for a "soft", less intrusive, step-by-step improvement of existing processes, or in combination at the design stage for new processes. The reuse of CPs helps to avoid common control design mistakes, since CPs are "formalized best practices" derived from proven real-world processes. They provide a common language for the parties involved in the transfer of control requirements to operational business processes. However, CPs are to be distinguished from prevalent best-practice process templates such as those provided by ITIL: instead of only giving hints on "how" a certain task should be performed, CPs are always closely linked to control objectives and thus offer reasoning on "why" a certain control element is established. Building on this, they may significantly increase the efficiency of audits, because by providing an explicit control perspective on processes, auditors may be able to understand these processes faster and thus assess audit-relevant aspects more easily.

This paper serves as a proof of concept for the deduction of CPs from real processes. Current limitations include the extent of the case study, which is linked to the availability and quality of suitable sample processes, as well as the dependency on the expertise of the process reviewers for steps like the identification of control elements and their generalization. Additional research will be required regarding the formalization and application of CPs. Among other things, this includes the visualization and maintenance of a (customized) CP once it has been integrated into an organization's process, considering the aim of making control elements explicit for audits. Furthermore, it will be necessary to extend the available set of CPs beyond the presented examples before a real benefit, e.g. for process design, can be expected. It is self-evident that (as for all pattern-based approaches) the added value of the concept increases with the number of supported domains, control objectives and patterns. Therefore, if the current work and feedback from the research community indicate further potential, it is planned to set up an open web platform, structured by adequate characteristics (e.g.
by business domains and common ICS control objectives), where a library of reusable Control Patterns will be created through collaborative process review and modelling efforts, thus supporting enhanced compliance in business processes.
Adaptation of Cooking Instructions Following the Workflow Paradigm

Mirjam Minor, Ralph Bergmann, Sebastian Görg, Kirstin Walter
Business Information Systems II, University of Trier, 54286 Trier, Germany
{minor|bergmann|goer4105|walt4701}@uni-trier.de

Abstract. Former CCC systems have mainly considered the ingredients of cooking recipes. This paper contributes to the open challenge with a novel approach that targets the preparation instructions. Our demo system CookingCakeWf employs the workflow paradigm to represent and adapt cooking instructions in the form of cookery workflows. Adaptation cases based on former modification episodes of cookery workflows are reused for current change requests. A small experimental evaluation with cookery workflows created from pasta recipes of the CCC 2010 recipe base provides first insights into case-based adaptation of cookery workflows.

1 Introduction

The International Computer Cooking Contest (CCC) is going into its third year. Cooking recipes are given in a case base to be retrieved and reused with the assistance of a computer system. The participating teams compete with systems that provide recipes in answer to cooking wishes. The main task has remained nearly the same over the years, while additional challenges change to address recent developments in Case-Based Reasoning. This year, the contest includes an open challenge for the first time.

This paper takes up the open challenge by focusing on the cooking instructions and on how to adapt them to cooking wishes. For this, the textual cooking instructions are formally represented. We employ the workflow paradigm to represent cooking instructions as cookery workflows. A case-based adaptation method [1] developed for agile workflow technology is extended to adapt cookery workflows. Both the control flow (of cooking steps) and the data flow (of ingredients and their products) are considered. The latter has not yet been addressed by previous work on automated workflow adaptation [1]. The output of our CCC open challenge system CookingCakeWf is an adapted workflow, not yet a textual cooking instruction. The automated generation of text from the workflow representation is a topic of future work, as is the automated transformation of the original textual description of a cooking instruction into a formal workflow. Additionally, we participate in the CCC main challenge with our last year's system CookingCake (see also http://proj2.wi2.uni-trier.de/), which is described in the literature [2].

The paper is organized as follows: Section 2 introduces a formal representation of the cookery workflows. Section 3 presents a novel case format for adaptation knowledge. Section 4 describes the methods for applying the adaptation knowledge to the workflows. Section 5 demonstrates the feasibility of the approach by means of a first experimental evaluation.

    <RECIPE>
      <TI>Baked Spaghetti</TI>
      <IN>1 c Chopped onion</IN>
      <IN>1 c Chopped green pepper</IN>
      <IN>1 tb Butter/margarine</IN>
      <IN>1 cn (28 oz.) Tomatoes with liquid; cut up</IN>
      <IN>1 cn (4 oz.) Mushroom stems and pieces - drained</IN>
      <IN>1 cn (2-1/4 oz.) Ripe olives, sliced and drained</IN>
      <IN>2 ts Dried oregano</IN>
      <IN>1 lb Ground beef, browned and drained (optional)</IN>
      <IN>12 oz Spaghetti, cooked & drained</IN>
      <IN>2 c (8 oz.) shredded cheddar cheese</IN>
      <IN>1 cn (10-3/4 oz) Condensed cream</IN>
      <IN>Mushroom soup; undiluted</IN>
      <IN>1/4 c Water</IN>
      <IN>1/4 c Parmesan cheese, grated</IN>
      <PR>In a large skillet, saute onion and green pepper in butter until
      tender. Add tomatoes, mushrooms, olives and oregano. Add ground beef
      if desired. Simmer, uncovered, for 10 minutes. Place half of the
      spaghetti in a greased 13-inch x 9-inch x 2-inch baking dish. Top with
      half of the vegetable mixture. Sprinkle with 1 cup of cheddar cheese.
      Repeat layers. Mix the soup and water until smooth; pour over
      casserole. Sprinkle with Parmesan cheese. Bake, uncovered, at 350
      degrees for 30-35 minutes, or until heated throughout.</PR>
    </RECIPE>

Fig. 1: Sample cooking recipe on Baked Spaghetti from the CCC recipe base.

## 2 Cookery workflows

Cooking recipes are usually described by a list of ingredients and a cooking instruction. Fig. 1 shows a sample recipe description of a pasta recipe from the CCC recipe base in XML. The title of the recipe (<TI>…</TI>) is followed by a list of ingredients (<IN>…</IN>) and a textual cooking instruction (<PR>…</PR>). Many internet platforms on cooking use a comparable XML representation of the cookery data. This semi-structured representation has been transformed into a more formal workflow representation for our CCC system by a human expert.

Workflows describe processes by means of tasks (activities) that are organized within a control flow. The CAKE workflow modeling language (compare [3]) consists of several types of workflow elements whose instances form block-oriented control flow structures. A workflow element can be a task, a start symbol, an end symbol, or a control flow element like an AND, XOR, loop etc. Control flow elements always define related blocks (AND-block, XOR-block etc.). Blocks cannot be interleaved, but they can be nested. In addition to the control flow, a workflow can have a data flow, which is the flow of data objects from one workflow element to a successor workflow element. The data flow is specified by means of data links that connect the data objects with workflow elements.

In a cookery workflow, every ingredient is considered a data object, while the preparation steps form the workflow tasks within a control flow. Preparation steps and control flow are extracted from the textual cooking instruction, as are the data links that connect the data objects (ingredients) with the tasks (preparation steps). Fig. 2 a) illustrates this by showing the workflow representation for the sample recipe from Fig. 1.
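To make this representation concrete, here is a minimal Python sketch of how tasks, blocks, data objects and data links might be modeled. The class and field names are illustrative assumptions, not the actual CAKE implementation.

```python
# Minimal sketch of a block-oriented cookery workflow: tasks, nested
# control flow blocks, data objects (ingredients) and data links.
# All names are assumptions made for illustration.
from dataclasses import dataclass, field

@dataclass
class DataObject:
    """An ingredient, e.g. 'Chopped onion'."""
    name: str

@dataclass
class Task:
    """A preparation step; 'inputs' holds its data links to ingredients."""
    name: str
    inputs: list = field(default_factory=list)

@dataclass
class Block:
    """A control flow block (e.g. 'SEQ', 'AND', 'XOR', 'LOOP');
    elements may be Tasks or nested Blocks, but blocks never interleave."""
    kind: str
    elements: list = field(default_factory=list)

# Fragment of the Baked Spaghetti workflow from Fig. 1 / Fig. 2 a):
onion = DataObject("Chopped onion")
butter = DataObject("Butter/margarine")
saute = Task("saute", inputs=[onion, butter])
workflow = Block("SEQ", elements=[saute])
```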
Some challenges concerning the control flow (1), the data objects (2), and the data links (3) had to be faced during the transformation:

(1) The cooking instructions suffer from both a granularity and a paraphrase problem. Granularity means that some authors describe instruction steps in great detail (e.g. peel and chop onions, melt butter in a large skillet, add onions and beef, wait until brown) while others aggregate minor steps into higher-level activities (e.g. brown beef and onions). Although granularity models have been discussed in the workflow literature [4], we have decided to avoid the granularity problem in this work by harmonizing the granularity of workflow tasks by hand. However, the granularity problem has to be solved in future work on the automated transformation of recipes into the workflow representation. The second challenge when defining the control flow of cookery tasks is the vocabulary. The paraphrase problem occurs here as well; it refers to the non-uniform wording used in texts from different authors (e.g. sauté vs. brown). Using a task ontology would solve this problem to a large extent (compare the discussion at the end of Section 4).

(2) The representation of data objects (ingredients) raises some issues concerning the preparation states, amounts, and double (or multiple) occurrences of ingredients. Although these issues have already been discussed for former CCC systems, they deserve special attention in the context of workflow adaptation. The preparation states of the ingredients (one piece, chopped, sautéed, cooked etc.) could be represented by multiple data objects (whole onions, sliced onions, chopped onions, etc.) or by the same data object with the preparation state represented as a variable property. The amounts of the ingredients can be represented as a data object property as well. Double (or multiple) occurrences of ingredients in different roles (e.g. butter to brown onions and flakes of butter for a topping) can be represented either as different objects (e.g. butter and flakes of butter, or butter one and butter two) or as a single object. For simplicity reasons, we have chosen the latter and abstain from representing any properties (amounts, preparation states).

[Fig. 2: Adaptation case on the sample cooking recipe from Fig. 1. a) original workflow; b) change request: replace ground beef by spelt-grain (problem part); c) adaptation steps: an ADD-list (the task "cook", the data objects "Spelt-grain" and "Vegetable stock", and their data links) and a DELETE-list ("Ground beef" and its data links), each step recorded with (pre)/(post) anchors; d) adapted workflow (solution part).]

(3) Preparation tasks consume ingredients and produce transformed ingredients or aggregates of ingredients. Data links connect data objects with tasks. In general workflows, the data objects can play the role of input or output data for the tasks. A complete model would include an input connection between a data object and every task using it as an input, and an output connection for every product resulting from a task. Data objects for aggregated objects (e.g. a document collection made from two different forms and a copy of a certificate, or a sauce made from onions and tomatoes) would be created by the task that aggregates the objects. However, textual cooking instructions tend to be underspecified due to the human principle of economy in language. Our analysis of the CCC recipe base has shown that the data links for the input of ingredients are described more frequently than those for the outputs. Transformed ingredients and aggregates occur very seldom in the instructing texts. Hence, we consider only inputs at their first occurrence, except for those ingredients that occur in multiple roles as described above.

## 3 Adaptation cases for cookery workflows

An adaptation case represents the knowledge from a previous adaptation episode of a workflow. Our case representation consists of a problem and a solution part, as follows: The problem part contains a semantic description of the change request (which specifies a desire to modify the recipe, e.g. to replace ground beef by spelt-grain to reduce the fat content) and the original workflow prior to the adaptation. The solution part contains the adapted workflow and the description of the adaptation steps that have been executed to transform the original workflow into the adapted workflow (added and deleted workflow elements, e.g. delete the ground beef with its data links, and add new data objects for spelt-grain and stock plus the preparation steps for preparing them and the required data links).
Fig. 2 shows a sample adaptation case by which baked spaghetti have been made according to the original recipe given in Fig. 2 a), but with spelt-grain instead of ground beef. Fig. 2 d) shows the adapted workflow: the spelt-grain has to be cooked in stock before it can be added. Please note that aggregates and preparation states are omitted in the current representation. A complete model would include an aggregated object for "cooked spelt-grain" with an output data link from the task "cook" and an input data link to the task "add".

As in our previous work [1], the adaptation steps are organized within an add and a delete list. In extension of the previous work, an add or delete step may refer to a workflow element, a data object or a data link. The add and delete lists organize the add and delete steps in the form of chains. A chain encapsulates a set of adaptation steps on connected workflow elements, data objects, and data links. A chain is intended to be either fully applied or not applied at all while reusing the adaptation case. Further, each chain records a pair of anchors. A pre anchor is the workflow element or data object (in the original workflow) after which the add or delete steps from the chain have been applied. A special 'null' element is used as anchor in case an appropriate data or workflow element is not available, for instance if a data object is newly created or deleted. A post anchor is the workflow element from the original workflow following the last element of the chain. We assume that data objects do not play the role of a post anchor at the moment. This might change in the future if output data links are included. Hence, the pre anchor describes the position after which the adaptation starts, and the post anchor describes the first position in the original workflow that is no longer affected by the adaptation described in the chain. Fig. 2 c) illustrates this with sample add and delete lists.

We have decided to model very fine-granular chains in order to apply as many of the adaptation steps as possible. Another modeling strategy would be to maximize the chains, for instance to combine the add chains 1, 2, 3, 4, and 6 into one chain with one anchor pair. This would reduce the effort for the anchor matching at the cost of a higher modeling effort to revise the workflow.
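As a compact illustration of the case format and the chain/anchor bookkeeping just described, consider the following minimal sketch. All class and field names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of an adaptation case: a problem part (change
# request + original workflow) and a solution part (adapted workflow +
# chains of add/delete steps, each chain with a pre and a post anchor).
from dataclasses import dataclass, field

@dataclass
class Chain:
    steps: list          # connected workflow elements, data objects, data links
    pre_anchor: object   # element of the original workflow, or None ('null')
    post_anchor: object  # first unaffected element, or None ('null')

@dataclass
class AdaptationCase:
    # Problem part:
    change_request: str        # e.g. "Replace ground beef by spelt-grain"
    original_workflow: object  # cf. Fig. 2 a)
    # Solution part:
    adapted_workflow: object   # cf. Fig. 2 d)
    add_list: list = field(default_factory=list)     # Chains to add
    delete_list: list = field(default_factory=list)  # Chains to delete
```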
## 4 Applying adaptation cases to new situations

We now focus on the retrieve and reuse phases of the CBR cycle [5]. The revise and retain phases are not yet considered for automation. The retrieval of workflow adaptation cases that may be reused for the adaptation of new cookery workflows is also not within the scope of this paper. We refer to the literature [1] for a discussion of similarity measures that come into consideration for the retrieve phase. Fig. 3 depicts a sample query for which the adaptation case shown in Fig. 2 would be applicable. The reuse phase has to solve two main issues: first, to determine the locations where the adaptation steps from the retrieved adaptation case should be applied to the target workflow and, second, to execute these adaptation steps. The change locations in the target workflow are determined by mapping the anchors from the retrieved case. The composite anchor mapping method described in our previous work [1] is extended in order to also consider the data flow.

The mapping method consists of two steps, which we briefly sketch below:

(1) Valid candidate positions for anchors are chosen within the set of workflow elements and data objects of the target workflow. A data object anchor can be mapped to the position of a data object in the target workflow only if the data objects are sufficiently similar (above a validity threshold for data objects). Workflow element anchors can analogously be mapped only to sufficiently similar workflow element positions (above a validity threshold for workflow elements). Similarity functions for both data objects and workflow elements will be discussed below. The valid candidate positions are described by a set of triples \([wfl\_el_{retrieved},\ wfl\_el_{target\_wfl},\ sim_{wfl\_el}(wfl\_el_{retrieved}, wfl\_el_{target\_wfl})]\) and \([data\_obj_{retrieved},\ data\_obj_{target\_wfl},\ sim_{data\_obj}(data\_obj_{retrieved}, data\_obj_{target\_wfl})]\) with similarity values higher than the specified validity thresholds.
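Step (1) amounts to a simple filter over all cross pairs of elements, keeping every pair whose similarity reaches the validity threshold. A sketch (function and parameter names are illustrative):

```python
# Sketch of step (1): collect valid candidate anchor positions as triples
# (retrieved element, target element, similarity), keeping only pairs at
# or above the validity threshold.
def valid_candidates(retrieved_elements, target_elements, sim, threshold):
    triples = []
    for r in retrieved_elements:
        for t in target_elements:
            s = sim(r, t)
            if s >= threshold:
                triples.append((r, t, s))
    return triples
```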
[Fig. 3: Sample query to adapt a recipe on a beef noodle casserole. a) original workflow; b) change request: replace ground beef by spelt-grain.]

[Fig. 4: Adaptation result (reference solution).]

[Fig. 5: Adaptation result (actually generated solution).]

(2) The best matching positions from the set of valid candidate positions are selected to construct the anchor mapping. Pairs of workflow element anchors (pre and post anchor) have to preserve their order in the target workflow according to a precedence relation of workflow elements that is induced by the control flow. Additionally, the mapped positions of a pair of workflow element anchors must be direct neighbors according to the same precedence relation in case of a chain from the add list, while they must not be direct neighbors in case of a chain from the delete list. The mapping algorithm employs a hill climbing search to find mapping positions with an optimal overall anchor similarity.

The application of the adaptation chains is different for add and delete operations. The add operations are executed immediately for all chains of which at least one anchor has been mapped successfully. Chains from the delete list are applied only if a complete mapping of their particular delete steps to workflow and data flow elements can be constructed. The mapping algorithm for delete steps considers pairs of elements (workflow elements, data objects and data links) from the retrieved and the target workflow whose similarity value is above a threshold called the delete threshold. Additionally, it is required that the elements to be deleted are organized in exactly the same structural order (control flow or data flow) as those in the chain. Thus, in order to be applied, the delete operations have to fulfill stronger constraints than the add operations.

The similarity measures for workflow elements (\(sim_{wfl\_el}\)), data objects (\(sim_{data\_obj}\)) and data links (\(sim_{data\_links}\)) that are required during the reuse phase are specified as follows:

\[
sim_{wfl\_el}(x, y) = \begin{cases} sim_{task}(x, y), & x, y \text{ are tasks} \\ sim_{ctrl\_flow\_el}(x, y), & x, y \text{ are control flow elements} \\ 0, & \text{else} \end{cases}
\]

\(sim_{wfl\_el}\) distinguishes tasks from control flow elements. In the case of tasks, it aggregates the similarity measure for the sets of input parameters (\(sim_{inputs}\)), the similarity measure for the task name (\(sim_{task\_name}\)), and the similarity of the task description (\(sim_{task\_descr}\)) by a weighted sum in \(sim_{task}\). The three constituent measures of \(sim_{task}\) are specified based on the Levenshtein distance. The Levenshtein distance is purely syntactic and measures the minimum number of edit operations on the character level that is required to transform one string into another. \(sim_{task\_name}\) and \(sim_{task\_descr}\) employ the Levenshtein distance directly on the task name and on the textual task description. \(sim_{inputs}\) matches the input parameters (data links) based on the Levenshtein distance of the names of their corresponding data objects by means of a hill climbing search. \(sim_{ctrl\_flow\_el}\) measures the similarity of the set of tasks included in the block in case of a block-building control flow element like AND-split or AND-join (again, a hill climbing search is applied to map the two sets of tasks onto each other). \(sim_{data\_obj}\) employs the Levenshtein distance on the names of the data objects, as the data objects do not yet store any properties like amounts or preparation states. If the special data object 'null' is one of the arguments of \(sim_{data\_obj}\), the value is set to 0, and to 1 if both arguments are 'null'. \(sim_{data\_links}\) aggregates the local similarity measures \(sim_{data\_obj}\) and \(sim_{wfl\_el}\).
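Since several of these measures bottom out in the Levenshtein distance, the following self-contained sketch shows a normalized string similarity and the 'null' handling of \(sim_{data\_obj}\). Normalizing by the length of the longer string is an assumption; the paper does not state its exact normalization.

```python
# Levenshtein distance and a similarity in [0, 1] derived from it.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character-level edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def string_sim(a: str, b: str) -> float:
    """Levenshtein distance rescaled to a similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

def sim_data_obj(x, y) -> float:
    """Name-based data object similarity; None stands for 'null'."""
    if x is None and y is None:
        return 1.0
    if x is None or y is None:
        return 0.0
    return string_sim(x.name, y.name)   # assumes a 'name' attribute
```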
Fig. 4 and 5 illustrate that the chosen similarity measures are still to be improved, as the automatically generated solution in Fig. 5 could not transfer all adaptation steps from the case described in Fig. 2 c). For instance, the data link from spelt-grain to "mix" is missing because the syntactic similarity measure was not able to map the anchor at "add" in the retrieved workflow onto "mix" in the target workflow. Also, the task names within the AND block of the retrieved workflow {"saute", "add", "simmer", "place in baking dish", "mix"} were not similar enough to the task names within the AND block of the target workflow {"brown", "mix", "place in casserole", "combine"}, so the post anchor for adding the task "cook" could not be positioned in the target workflow. Semantic similarity measures, for instance based on a task ontology, would probably provide better results.

## 5 Experimental evaluation and discussion

The system is implemented in a demo version. As a starting point for a first evaluation we use a reduced recipe base containing 39 pasta recipes from the CCC recipe base. Based on seven recipes taken from this pasta excerpt, 30 different change requests were constructed using our own cooking experience. Typical change requests replace one ingredient by another or avoid a certain ingredient. Thereby, an experimental case base of 30 adaptation cases (see Section 3 for the case structure) was created by hand. Most of the included adaptation steps concern the data objects and the data links; some adaptations involve adding or deleting workflow elements (preparation steps). We conducted two experiments, described below.

The aim of the first experiment was to evaluate whether the suggested adaptation method is able to correctly reconstruct the adapted workflows from the adaptation steps described in the 30 adaptation cases. This requires finding the proper anchor positions, which should not be affected by the chosen similarity measures, as only identical matches are required for this reconstruction. In line with our hypothesis, all adapted workflows could be reconstructed correctly.

In the second experiment the adaptation cases were applied to new recipes. For this purpose, 14 test queries were created by arbitrarily selecting 14 different change requests from the adaptation case base of the first experiment and combining them with other cookery workflows. We selected recipes from the pasta recipe base to which the change requests could be applied in a tasteful way. For instance, a change request on replacing ground beef by spelt-grain applies only to a recipe that contains ground beef as an ingredient and that would still be tasty when becoming vegetarian. As the retrieval is not part of this evaluation, the best-fitting adaptation case was chosen for each query by hand. The resulting adapted workflow was compared to a solution obtained by manually transferring the respective adaptation steps to the workflow (see Fig. 6). Altogether, half of the test cases were adapted fully correctly. In the remaining cases, at least the data objects could be added correctly, while the correctness of the added data links is only approximately 50%. Also, only about 60% of the possible delete operations on data links are applied. This is a clear indication that the currently used syntactic similarity measure is insufficient for our purposes. Further experiments with improved similarity measures using CookingCake's ingredient ontology [2] as well as a task ontology (for preparation steps) will be necessary. However, we believe that the first results are quite promising and confirm that our idea of following the workflow paradigm during the automated adaptation of cookery instructions is feasible in principle.

## 6 References
The Application of DEA to Measure the Efficiency of Open Source Security Tool Production

Barry A. Wray
Cameron School of Business, University of North Carolina Wilmington
wrayb@uncwil.edu

Richard G. Mathieu (corresponding author)
College of Business, James Madison University
mathierg@jmu.edu

Abstract

There are a wide variety of open source security tools available for deployment within the enterprise. Despite the success of many security-based open source software (OSS) projects, large numbers of these projects become inactive and are eventually abandoned. The purpose of this research is to develop an empirical study to determine the relative efficiency of security-based OSS projects. Security-based OSS projects utilize two key resources in their development: the number of software developers and the number of users submitting bugs found when using the OSS code. This research develops a model to measure the relative efficiency of each project and determines the number of inefficient projects benchmarking each efficient project. The result of this research is a model that can be used by project developers to evaluate the relative efficiency of their projects. Our empirical study is based on the analysis of publicly available data from a repository of OSS projects at sourceforge.net.

Keywords: open source software, software production, DEA

Introduction

Open source software (OSS) projects permit users the freedom to use the software code for any purpose. The code can be studied, modified, and freely redistributed. Even though OSS is free, the profit potential of OSS projects is becoming very attractive to software development companies. Venture capitalists have pumped nearly $400 million into 50 open-source companies in the last 18 months, and more are on the way (Lacey, 2005). These products are satisfying business customer needs and giving birth to for-profit companies like SugarCRM, Greenplum, and Pentaho. These companies are building a new generation of business applications for managing Web content, customer relations, and enterprise resources that are cheaper and may be more dynamic than their commercial counterparts (Greenemeier, 2005). The potential financial gain for a developer lies in the support/maintenance and proprietary add-on features they can provide for their product. The success of open source software (OSS) projects has been attributed to the quality, portability and scalability of the software product (Stamelos et al., 2002; Crowston et al., 2003; Kalina and Czyzycki, 2005) and to the commitment, expertise and speed of development of the software developers (Scacchi, 2002; Crowston et al., 2003). Despite the success of many OSS projects, large numbers of these projects become inactive and are eventually abandoned.
Payne (2002) and Salkever (2001) examined the security of OSS projects. Crowston et al. (2003) developed a model of OSS success based upon measures of project output, process, and outcomes for project members. A common way of examining the relative efficiency of software projects is through the use of data envelopment analysis (DEA). DEA is a nonparametric technique which takes input and output data related to individual operational units and identifies an efficient frontier representing optimal performance as a ratio of output to input. DEA is well suited for the comparison and benchmarking of similar operational units. The result is a ranked list of operational units in terms of their relative efficiency and indicators of the variables with the largest influence on the operational unit rankings. In the software management literature, DEA operational units are typically defined as software projects. The DEA results have been used to determine the most productive scale size for a software development project (Banker and Kemerer, 1989) and to study the effects of project characteristics on software maintenance (Banker et al., 1991; Elam, 1991; Paradi et al., 1997).

Open Source Security Tools

There are a wide variety of open source security tools available for deployment within the enterprise. Currently there are quite a few mature and well-developed open-source security tools that are on par with commercial security tools (Mogull and Girard, 2006). These OSS security tools are typically publicly visible in all phases of their lifecycle and have many contributors. Mookhey (2004) defines nine categories of open source tools for security and control assessment (vulnerability assessment tools, network auditing tools, host-based auditing tools, password cracking tools, forensic tools, log analysis tools, software auditing tools, web application testing tools, and process auditing tools) that can be useful to information security auditors.

Three categories of open source security tools have reached the 'plateau of productivity' in Gartner's Hype Cycle. Specifically, open source intrusion detection software (Snort IDS), web security and encryption software (OpenSSL), and vulnerability assessment software (Nessus) have demonstrated and accepted benefits and are broadly accepted in the marketplace (Drakos et al., 2004). All three open source software tools are considered to be of 'early mainstream' maturity, with market penetration estimated between 5% and 50% of the marketplace. A recent Gartner report (Mogull and Girard, 2006) notes that many open source security tools are initially created, but only a few mature to the point where they have increased documentation, ease of use and functionality. All open source security tools seem to follow a consistent timeline in terms of maturity and public visibility.

Data Envelopment Analysis

A common way of examining the relative efficiency of software projects is through the use of Data Envelopment Analysis (DEA). DEA is a nonparametric linear programming technique that takes multiple inputs and multiple outputs related to individual operational units and identifies an efficient frontier representing optimal performance as a ratio of output to input. DEA is an extreme point method that compares decision making units or DMUs (the DMUs for this research are security-based OSS projects) with only the "best" DMUs. Each DMU must utilize the same inputs to produce the same outputs in order for the DEA model to evaluate the relative performance of a set of DMUs.
The DEA model will produce a ranking of the DMUs according to how efficiently each DMU utilizes its inputs to produce its outputs. As the earlier list of applications suggests, DEA can be a powerful tool when used wisely. A few of the characteristics that make it powerful are:

- DEA can handle multiple inputs and multiple outputs for each operational unit.
- No assumption of a functional form relating inputs to outputs is necessary.
- Projects are directly compared against peer operational units or a combination of peer operational units.
- The model inputs and outputs can be measured in different units.

There are certain conditions that limit the use of DEA. This tool should not be used if any of the following conditions exist:

- Noisy data can cause significant problems.
- DEA is good at estimating "relative" efficiency of the operational units, but it does not allow comparison to a theoretical maximum.
- Large problems can be computationally intensive.

In previous DEA studies of traditional software projects, operational unit input has been captured as a measure of labor hours or cost. One study also captured additional input measures of project expense (Elam, 1991). Measures of output have focused on project size through either function points or lines of code. Other measures of output include software quality and time to market. See Table 1.

[Table 1: Review of DEA studies of traditional software projects and the input and output measures each employed: Banker and Kemerer (1989); Elam (1991); Banker, Datar and Kemerer (1991); Paradi, Reese and Rosen (1997).]

From a project management perspective, there are significant differences between an open source software project and a traditional software project. First, since labor is typically donated by project contributors, there are typically no monetary measures of labor cost or project expense. Rather, the 'labor' that goes into an OSS project can be viewed as the number of persons that contribute to the project. Hahn and Zang (2005) adjust their input through an assessment of 'project age'. A contribution of our research study is the inclusion of a count of the number of unique "software bug" contributors. These contributors are typically not on the development team but are users of the software making valuable contributions to the success of the project. The output of an open source software project should certainly contain a measure of project size. Hahn and Zang (2005) measure the size of all files in the project. In this study we take a user-oriented approach to project size by including a measure of project size per download. Hahn and Zang (2005) include a measure of 'development status'. This study includes two measures of project quality: number of downloads and project rank (as determined by sourceforge.net).
Table 2: Review of DEA Studies – Open Source Software (OSS) Projects

<table>
<thead>
<tr> <th>Study</th> <th>Input: Number of Developers</th> <th>Input: Project Age</th> <th>Input: Number of Bug Contributors</th> <th>Output: Project Size</th> <th>Output: Quality</th> <th>Output: Development Status</th> </tr>
</thead>
<tbody>
<tr> <td>Hahn and Zang, 2005</td> <td>X</td> <td>X</td> <td></td> <td>X</td> <td></td> <td>X</td> </tr>
<tr> <td>this research</td> <td>X</td> <td></td> <td>X</td> <td>X</td> <td>X</td> <td></td> </tr>
</tbody>
</table>

Objectives of Research

The purpose of this research is to develop and test a model of the relative efficiency of security-based OSS projects. The focus on one type of OSS project is driven by the fact that DEA is well suited for the comparison and benchmarking of similar operational units. Focusing on one type of project reduces the variance in the technical characteristics of the software development projects and makes for better comparisons between projects. Our empirical study is based on the analysis of publicly available data from a repository of OSS projects at sourceforge.net. Currently sourceforge.net contains a repository of 142,869 OSS projects.

Research Methods

This research evaluates the relative efficiency of security-based OSS projects by evaluating multiple project inputs and multiple project outputs. Data were collected on 35 security-based OSS projects on Sourceforge.net in August 2006. The data were manually tabulated from the Sourceforge.net website and entered into an Excel spreadsheet. For brevity, the data collected for only the five highest-ranked projects are given in Table 3 (the data for the remaining 30 projects are similar). The inputs considered for the 35 projects are the total number of developers for the project and the number of unique users that have submitted software bugs. Unlike traditional software projects, the number of bug submitters is an important input into the OSS production process. The outputs for each project are the Sourceforge.net rank for the project, the number of downloads from Sourceforge.net, and the number of kilobytes per download.

Table 3: Security-Based OSS Projects Collected from Sourceforge.net

<table>
<thead>
<tr> <th>Project</th> <th>Bug submitters (input)</th> <th>Developers (input)</th> <th>Rank (output)</th> <th>Downloads (output)</th> <th>KB per download (output)</th> </tr>
</thead>
<tbody>
<tr> <td>Open Computers and Software Inventory</td> <td>66</td> <td>7</td> <td>4</td> <td>39,425</td> <td>35,510</td> </tr>
<tr> <td>Endian Firewall</td> <td>24</td> <td>10</td> <td>18</td> <td>9,033</td> <td>106,310</td> </tr>
<tr> <td>KeePass Password Safe</td> <td>44</td> <td>8</td> <td>27</td> <td>82,098</td> <td>688</td> </tr>
<tr> <td>Password Safe</td> <td>64</td> <td>29</td> <td>34</td> <td>34,925</td> <td>1,354</td> </tr>
<tr> <td>Ophcrack</td> <td>1</td> <td>4</td> <td>36</td> <td>66,146</td> <td>338,645</td> </tr>
</tbody>
</table>

Data envelopment analysis (DEA) is a linear programming formulation for frontier analysis that defines a nonparametric relationship between multiple outputs and multiple inputs by building an efficiency frontier (Charnes et al., 1978). DEA is an extreme point method that compares decision making units or DMUs (the DMUs for this research are security-based OSS projects) with only the "best" DMUs. The efficient security-based OSS projects have an efficiency score of one, whereas the inefficient projects have an efficiency score less than one but greater than zero.
The EMS (Efficiency Measurement System) software, version 1.3 (Scheel, 2000), is used in this research. The software can be found at http://www.wiso.uni-dortmund.de/lsgf/or/scheel/ems/. The number of developers is a "standard" input for the DEA model, while the number of unique bug submitters is a "non-discretionary" input (i.e., data which are not controlled by a project). The DEA model evaluates and compares these inputs and produces an efficiency score for each project based on the project outputs. The method of producing an efficiency score is based on the convex envelopment technology structure and the efficiency measure chosen. The efficiency measure quantifies a "distance" to the efficient frontier. The DEA model used in this research utilizes an "input-oriented" measurement of efficiency (a measure of the input reduction necessary for a project to become efficient, holding the outputs constant). This measure is chosen because of the primal interpretation of the efficiency score with respect to the input quantities and the axiomatic properties of the efficiency measure. Two distance measures are used and the resulting project efficiencies are compared. The notation below uses \( \tau \) to denote the technology used and \((X^k, Y^k)\) to denote the input and output data of the project under evaluation.

The first measure is the Debreu-Farrell "radial" measure (Farrell, 1957). This measure indicates the necessary improvements when all relevant factors are improved equiproportionally by the same factor. The objective for this model is:

\[ \min \left\{ \theta \mid (\theta X^k, Y^k) \in \tau \right\} \]

The second model chosen is a "minAverage" measure that quantifies the minimal average of relative improvements necessary for a project to become weakly efficient. To be weakly efficient, there must not exist a point in the technology set which is better in every input and output (Charnes et al., 1996). The objective function for this model is:

\[ \min \left\{ \frac{\sum_{i:\,X_i^k > 0} \theta_i}{\left|\{\, i \mid X_i^k > 0 \,\}\right|} \;\middle|\; \left(\theta_1 X_1^k, \ldots, \theta_m X_m^k,\; Y^k\right) \in \tau,\ \theta \leq 1 \right\} \]
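For illustration, the radial input-oriented model can be written as a small linear program and solved once per project. The sketch below assumes constant returns to scale and uses SciPy; the paper itself relies on the EMS tool and does not spell out the technology set \( \tau \), so this is an approximation, not the authors' implementation.

```python
# Radial, input-oriented DEA under an assumed constant-returns-to-scale
# technology. X is (m inputs x n projects), Y is (s outputs x n projects).
import numpy as np
from scipy.optimize import linprog

def radial_efficiency(X, Y, k):
    """theta_k = min { theta : X lam <= theta * x_k, Y lam >= y_k, lam >= 0 }."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                      # decision vars: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, n + 1))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, k]          # X lam - theta * x_k <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y               # -Y lam <= -y_k  (i.e. Y lam >= y_k)
    b_ub[m:] = -Y[:, k]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

# Toy numbers from Table 3: inputs are bug submitters and developers;
# downloads serve as the single output for this illustration.
X = np.array([[66.0, 24.0], [7.0, 10.0]])
Y = np.array([[39425.0, 9033.0]])
print(radial_efficiency(X, Y, k=1))
```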
Results

The efficiencies of the security-based OSS projects were evaluated using the EMS software. The set of efficient projects identified by the two distance measures was identical and is listed in Table 4. Likewise, the number of inefficient projects choosing an efficient project as a benchmark was the same for all projects under both distance measures (Table 4).

Table 4: Benchmarking of Security-Based OSS Projects

<table>
<thead>
<tr> <th>Efficient Project Name</th> <th># of Inefficient Projects which have chosen the Efficient Project as a Benchmark</th> </tr>
</thead>
<tbody>
<tr> <td>ShellTer</td> <td>18</td> </tr>
<tr> <td>Simple Python Keylogger for Windows</td> <td>17</td> </tr>
<tr> <td>ClamWin Free Antivirus</td> <td>10</td> </tr>
<tr> <td>Network Security Toolkit</td> <td>5</td> </tr>
<tr> <td>ophcrack</td> <td>3</td> </tr>
<tr> <td>J2EE Certificate Authority, EJBCA</td> <td>0</td> </tr>
<tr> <td>Another file integrity checker</td> <td>0</td> </tr>
<tr> <td>BlockSSHD</td> <td>0</td> </tr>
</tbody>
</table>

The DEA model determines which efficient projects are used by other, inefficient projects as a benchmark for efficiently transforming inputs to outputs. The model produces an index of corresponding intensities linking an inefficient project to its benchmark efficient project(s). The eight projects in Table 4 were chosen by the DEA model as "efficient benchmark projects". These eight projects were found to be on the efficient frontier for both distance measures (radial and minAverage objective functions).

The DEA model is only concerned with how efficiently each project produces its outputs. The SourceForge ranking does not consider efficiency in determining project rank. This can be shown by examining the two most benchmarked projects, "ShellTer" and "Simple Python Keylogger for Windows". These two projects were not at the top of the SourceForge ranking. In fact, only the 'ophcrack' project among the top five SourceForge-ranked security-based projects was identified as an efficient benchmark project by the DEA model. The two highest SourceForge-ranked projects ("Open Computers and Software Inventory" and "Endian Firewall") have 66 and 24 total bug submitters and 7 and 10 developers, respectively. These results demonstrate how the DEA model selects benchmark projects by determining how efficiently a project uses resources to produce outputs. The SourceForge ranking system does not consider a measure of efficiency.

A project manager can use these results to critically evaluate the resources for a project and the relative efficiency of those resources. Based on the results, managerial decisions on increasing or decreasing resources can be made and appropriate strategies to achieve these goals can be developed. This DEA modeling approach can be used by security-based OSS project managers to determine the relative efficiency of their project against other similar projects. These results can be used to make decisions on increasing or decreasing controllable inputs (the number of project developers) and to set goals for project outputs. Critical decisions on the allotment of work effort can be made, directing effort to more efficient projects and away from inefficient projects.

References
Bluefruit LE Python Library

Created by Tony DiCola

https://learn.adafruit.com/bluefruit-le-python-library

Last updated on 2023-08-29 02:59:18 PM EDT

# Table of Contents

- Overview
- Hardware
- Installation
  - Mac OSX
  - Linux & Raspberry Pi
  - Library Installation
- Library Usage
  - Examples
  - Usage

## Overview

This library is deprecated in favor of the Adafruit_Blinka_bleio library. See this guide for more information: https://learn.adafruit.com/circuitpython-ble-libraries-on-any-computer

So you've got a Bluefruit LE device that's ready to be the next awesome wireless IoT gadget, but how do you actually talk to it from your computer? The Bluefruit LE Python library is just what you need to write code that reads and writes data with a Bluefruit LE device! This Python library allows you to write simple code to talk to a Bluefruit LE UART from a Mac OSX computer or a Linux machine, like a Raspberry Pi. This library is great for logging sensor data, controlling your device, and much more through the wireless magic of Bluetooth low energy!

To use this library you'll need to be running a Mac OSX or Linux machine with a Bluetooth 4.0 low energy adapter. Sorry, Windows is not yet supported because Bluetooth low energy support is still a little bit too new to the platform (only the very latest Windows 10 release exposes enough BLE APIs to completely control a device). In the future Windows support might be added, but for now stick with a Linux machine, like a Raspberry Pi, or a Mac OSX desktop/laptop.

You'll also want to be somewhat familiar with the Python programming language and Bluetooth low energy. The Hitchhiker's Guide to Python has a great learning Python section with links to books and free resources for learning the language. For Bluetooth low energy be sure to read this excellent intro guide and even consider picking up a book on the topic (http://adafru.it/1978). When you're ready, continue on to learn about what hardware you'll need to use the Bluefruit LE Python library.

## Hardware

To talk to a Bluefruit LE from your Linux or Mac computer you'll need to make sure you have a Bluetooth 4.0+ low energy (sometimes called Bluetooth Smart) adapter. For Mac OSX you don't have many options since Bluetooth support is built in to the hardware. Make sure you're using a device that has Bluetooth 4.0 low energy support; most Mac laptops made since ~2012 should have it. For a Linux machine like a Raspberry Pi this **CSR8510 Bluetooth 4.0 USB dongle** is exactly what you want to use. In general, if **BlueZ** supports your Bluetooth hardware and it's 4.0 low energy then it should work (but no guarantees, we've only tested with the CSR8510 dongle).

Finally you'll need a Bluefruit LE device to talk to when using the library.
Some of the options include:

- **Bluefruit LE UART** or **Bluefruit LE SPI friend** (http://adafru.it/2633) - These devices connect to an Arduino through a UART or SPI connection respectively and allow the Arduino to expose itself as a BLE UART or other peripheral.
- **Bluefruit LE Micro** - This is an all-in-one Bluefruit SPI friend connected to an ATmega32u4 processor (like a FLORA) and is a great option for a small BLE project.
- **Bluefruit LE USB friend** - This is a Bluefruit LE UART friend that's connected to a serial-to-USB converter so it can be accessed by a computer. This is a good option if you have a computer like a Raspberry Pi that you want to turn into a BLE peripheral.

If you aren't sure which one to pick, consider using the **Bluefruit LE SPI friend** (http://adafru.it/2633) if you already have an Arduino, or a **Bluefruit LE Micro** if you don't have an Arduino. Continue on to learn how to install the Python library that will talk to the Bluefruit LE.

## Installation

Follow the appropriate steps below depending on your platform to install the Bluefruit LE Python library.

### Mac OSX

On Mac OSX nothing extra needs to be installed to use the library. The library makes use of the PyObjC library that Apple includes with the version of Python installed in OSX. Note that if you're using a non-Apple version of Python, like one installed with Homebrew, you might need to manually install PyObjC! Skip down to the Library Installation section at the bottom to continue.

### Linux & Raspberry Pi

The Raspberry Pi 3, 4, and Pi Zero W include Bluetooth capability. Recent versions of Raspbian include the bluez package. If it is not already installed, you can install it by doing:

```
apt list bluez  # See whether bluez is already installed.
# If not, do this:
sudo apt update
sudo apt install bluez
```

After you have installed bluez, you need to add yourself to the bluetooth user group:

```
sudo usermod -a -G bluetooth $USER
```

Then reboot.

### Library Installation

To install the library with pip:

```sh
pip install --user Adafruit-BluefruitLE
```

Alternatively, you can clone it from its home on GitHub and then run its setup.py as below. Assuming you have git installed, you can run the following in a terminal to clone the library and install it:

```sh
git clone https://github.com/adafruit/Adafruit_Python_BluefruitLE.git
cd Adafruit_Python_BluefruitLE
sudo python setup.py install
```

That's it, the library should be installed and ready to use with any Python script on your system. Alternatively you can install with `sudo python setup.py develop` to put the library into develop mode, where changes to the code are immediately reflected without the need to reinstall. This is handy if you're modifying the code or updating it frequently from GitHub.
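Either way, a quick smoke test confirms the library is importable (this one-liner is an assumption for convenience, not part of the guide's examples):

```sh
python -c "import Adafruit_BluefruitLE; print('Bluefruit LE library imported OK')"
```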
Continue on to learn about the examples included with the library and how to use it in your own code.

## Library Usage

### Examples

To demonstrate the usage of the library look at the included examples in the examples subdirectory:

- list_uarts.py - This example will watch for any BLE UART devices and print out their name and ID when found. This is a good example of searching for devices.
- uart_service.py - This is an example of talking to a BLE UART device. When run the example will connect to the first BLE UART device it finds, send a string, and then wait to receive a string.
- low_level.py - This is similar in functionality to uart_service.py but implemented using lower level direct interaction with BLE GATT services and characteristics. This is a good starting point for interacting with a custom BLE service or device.
- device_info.py - This example will connect to the first found BLE UART and print out information from its device info service, like serial number, hardware version, etc. Note this example only works on Mac OSX--a bug/issue in BlueZ currently prevents access to the device info service.

Since all of the examples deal with a BLE UART device you'll want to make sure you first have a Bluefruit LE device running as a BLE UART:

- Bluefruit LE UART friend - Connect it to an Arduino and run its BLEUart example. Make sure its switch is in the CMD position if you're using the bleuart_cmdmode example, and the DATA position if you're using bleuart_datamode.
- Bluefruit LE SPI friend - Connect it to an Arduino and run its BLEUart example.
- Bluefruit LE Micro - Load the BLEUart example.
- Bluefruit LE USB friend - Make sure the switch is in the DATA position, connect the device to your computer, and open its serial port with a terminal.

Make sure to open the serial terminal in the Arduino IDE if you're running the BLEUart example. You'll want the terminal open so you can see the data received and send data to the BLE UART example code in the next step.

To run an example you just need to invoke it using the Python interpreter. On Linux make sure to run as root using sudo; on Mac OS X sudo isn't necessary (but will still work fine). For example to run the uart_service.py example open a terminal, navigate to the examples directory of the library, and run:

```
sudo python uart_service.py
```

After a short period you should see the example run and start printing status messages like the following:

```
Using adapter: BlueZ 5.33
Disconnecting any connected UART devices...
Searching for UART device...
Connecting to device...
Discovering services...
Sent 'Hello world!' to the device.
Waiting up to 60 seconds to receive data from the device...
```

Note if you see an error 'org.freedesktop.DBus.Error.UnknownMethod: Method "Set" with signature "ssb" on interface "org.freedesktop.DBus.Properties" doesn't exist' run the example again. It appears this is a transient or one-time error from setting up the DBus connection with BlueZ.

The example will attempt to connect to the first BLE UART device it finds and send & receive data with it. On Linux with BlueZ be patient, as the 'searching for UART device' and 'discovering services' phases can take around 30-60 seconds the first time the example runs. On Mac OSX BLE actions are generally quite fast and happen in a few seconds.

Once you see the example print "Waiting up to 60 seconds to receive data from the device..." then check the serial terminal of the Bluefruit LE device. You should see it received the string 'Hello world!' (without quotes).
For example the Bluefruit LE UART / SPI / Micro terminal will show:

```
H [0x48] e [0x65] l [0x6C] l [0x6C] o [0x6F] [0x20] w [0x77] o [0x6F] r [0x72] l [0x6C] d [0x64] ! [0x21] [0x0D] [0x0A]
```

Now while the uart_service.py example is still running enter some text in the serial terminal and press send. You should see the text received and the uart_service.py example exit:

```
Received: test
```

Congrats, you've successfully run the uart_service.py example! If you run into issues with the example first make sure the Bluefruit LE Python library was successfully installed from the previous page. Also make sure only one BLE UART device is running--if more than one is running the code might get confused and connect to a device you don't expect (it just connects to the first device it finds).

You can run the other 3 examples in the same way as you ran the uart_service.py example. Note that the device_info.py example only runs on Mac OSX right now; a bug/issue in BlueZ prevents accessing the device information service.

### Usage

To understand how to use the library to send and receive data with a BLE UART I'll walk through the uart_service.py example below. Open the file in a text editor and follow along:

```python
import Adafruit_BluefruitLE
from Adafruit_BluefruitLE.services import UART
```

The first part of the code is the import section that pulls in the Bluefruit LE library. Notice that both the main library is imported with the first line, and a special UART service implementation is imported with the second line.

```python
# Get the BLE provider for the current platform.
ble = Adafruit_BluefruitLE.get_provider()
```

Next a BLE provider is created by calling the Adafruit_BluefruitLE.get_provider() function. This will grab the appropriate BLE provider for your platform, either Linux or Mac OS X. You only need to call this once at the very start of your program to get the provider that will be used in future BLE calls.

After the provider is created you'll notice a main function is defined. We'll actually skip this function for now and come back to it. Scroll down past the main to the end of the file so you can see where execution of the script starts:

```python
# Initialize the BLE system.  MUST be called before other BLE calls!
ble.initialize()

# Start the mainloop to process BLE events, and run the provided function in
# a background thread.  When the provided main function stops running, returns
# an integer status code, or throws an error the program will exit.
ble.run_mainloop_with(main)
```

First the code calls the initialize() function on the provider to set up BLE communication. This is required before you make any other calls to the library. Next the code calls run_mainloop_with() and passes it the main function that was created above.

One very important thing to realize with BLE on desktop platforms is that it generally requires a full GUI event loop to run in the background. Actions like connecting to a device, searching for a device, etc. are actually asynchronous and start and stop at different times. An event loop is necessary to make sure these asynchronous events are processed. However an event loop complicates your program, especially if you just want to write a simple script that performs synchronous actions against BLE devices. The Bluefruit library attempts to hide all of this event loop logic and just present an easy to use synchronous interface.
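Putting these startup pieces together, a minimal program skeleton (a condensed restatement of the calls shown above, not additional API) looks like this:

```python
import Adafruit_BluefruitLE

# Grab the platform BLE provider once, at the top of the script.
ble = Adafruit_BluefruitLE.get_provider()

def main():
    # All BLE work happens here, in a background thread, while the
    # event loop runs in the main thread.
    adapter = ble.get_default_adapter()
    adapter.power_on()
    print('Using adapter: {0}'.format(adapter.name))

ble.initialize()
ble.run_mainloop_with(main)
```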
By calling the run_mainloop_with function the library will set up all the platform-specific event loop logic and then invoke the provided function in a background thread. The event loop runs in the main program thread and processes BLE events, while your code runs in a background thread.

A side effect of your code running in a background thread is that you need to make sure any callbacks you hand to the Bluefruit library are thread safe. In normal interaction with the UART service you don't need to worry about thread safety, as the service takes care of moving data between the main and background thread with a queue. However if you dig into lower level direct access to BLE characteristics, be aware that their callbacks won't be performed on the same thread as your main code.

It's also good to realize that the Bluefruit library is really targeted at simple scripts that interact synchronously with a BLE UART. For example a script that polls a BLE device to read sensor data and save it to a file or push it to a web service is a good example of what the library can do. If you're building a GUI application that presents a user interface then the library might not be the best option for you. When you build a GUI app you want control over the main event loop yourself and likely want to choose how BLE events are processed, like in a background thread. You also likely want calls to the BLE device to happen asynchronously, that is in the background instead of blocking execution until they finish. When a GUI application uses a synchronous or blocking API like the one this Bluefruit library exposes, the result can be a poor user experience with a slow GUI. If you're building a GUI application consider talking directly to your platform's BLE APIs (like CoreBluetooth on OSX or BlueZ on Linux).

Now let's go back up and look at the main function that was defined earlier:

```python
# Main function implements the program logic so it can run in a background
# thread.  Most platforms require the main thread to handle GUI events and
# other asynchronous events like BLE actions.  All of the threading logic
# is taken care of automatically though and you just need to provide a main
# function that uses the BLE provider.
def main():
    # Clear any cached data because both bluez and CoreBluetooth have issues
    # with caching data and it going stale.
    ble.clear_cached_data()
```

The function will run in a background thread and performs all of the main logic of the script. To start, the main function calls the `clear_cached_data()` function to reset any platform BLE state. Unfortunately this is a bit of a hack that's necessary on most platforms as BLE support is still somewhat immature. On Linux with BlueZ this function will remove any known BLE devices so they can be rediscovered again fresh. This is necessary because of bugs and issues with heavy caching of devices (for example if a BLE UART is turned off BlueZ will still remember it for some time and confuse your program). On Mac OSX with CoreBluetooth this function will perform a sequence of steps to clear its cache. Again this is necessary because CoreBluetooth has heavy caching which can confuse your program when devices appear and disappear.

```python
# Get the first available BLE network adapter and make sure it's powered on.
adapter = ble.get_default_adapter()
adapter.power_on()
print('Using adapter: {0}'.format(adapter.name))
```

Next the BLE adapter for the platform is retrieved with the `get_default_adapter()` function, and it's powered on by calling the `power_on()` function.
These are necessary to make sure the computer has Bluetooth turned on and ready to find devices.

```python
# Disconnect any currently connected UART devices.  Good for cleaning up and
# starting from a fresh state.
print('Disconnecting any connected UART devices...')
UART.disconnect_devices()

# Scan for UART devices.
print('Searching for UART device...')
try:
    adapter.start_scan()
    # Search for the first UART device found (will time out after 60 seconds
    # but you can specify an optional timeout_sec parameter to change it).
    device = UART.find_device()
    if device is None:
        raise RuntimeError('Failed to find UART device!')
finally:
    # Make sure scanning is stopped before exiting.
    adapter.stop_scan()
```

The next block of code deals with searching for a UART device. First any connected UARTs are disconnected by calling the `UART.disconnect_devices()` function. This is necessary because a BLE central device (your computer) can only be connected to a single BLE peripheral (your BLE UART device) at a time. If your computer were still connected to a BLE UART then it would fail to find any new ones.

After disconnecting UART devices the code calls `adapter.start_scan()` to turn on device scanning. Then the `UART.find_device()` function is called to find the first available UART device. Note that if you'd like to find a specific UART device, perhaps one that has a different name, you can instead call `UART.find_devices()` (plural) and it will immediately return a list of all known UART devices. Enumerate the list and look for a device that you'd like to connect to, or keep calling the `find_devices` function periodically in a loop to scan for new devices (see the `list_uarts.py` example for this kind of logic).

Finally notice the block ends with a `finally` clause that ensures the `adapter.stop_scan()` function is called. This is good practice to make sure the adapter doesn't stay in scanning mode if the program exits prematurely. Also, before you connect to a device (shown next) you'll want to make sure scanning is turned off.

```python
print('Connecting to device...')
device.connect()  # Will time out after 60 seconds, specify timeout_sec parameter
                  # to change the timeout.
```

Now a connection is created with the device that was found by calling its `connect()` function. This function will wait up to 60 seconds for the device to respond and connect (note you can change this timeout by passing in a `timeout_sec` parameter with a new value).

```python
# Once connected do everything else in a try/finally to make sure the device
# is disconnected when done.
try:
    # Wait for service discovery to complete for the UART service.  Will
    # time out after 60 seconds (specify timeout_sec parameter to override).
    print('Discovering services...')
    UART.discover(device)

    # Once service discovery is complete create an instance of the service
    # and start interacting with it.
    uart = UART(device)
```

Next a new `try/finally` block is created to ensure the device connection is closed, and the main UART logic starts to run. First the UART services are discovered by calling `UART.discover()` and passing it the connected device. When connecting to a device you must ensure all the services you'll use have been discovered so that the platform knows what characteristics, etc. are available on the service. After the service discovery completes a UART object is created by passing it the connected device. This UART object implements a simple send and receive interface to interact with the connected UART.
```python
    # Write a string to the TX characteristic.
    uart.write('Hello world!\r\n')
    print("Sent 'Hello world!' to the device.")
```

Data is sent to the UART device by calling the UART object's write function. Pass the function a string and it will ensure the data is written to the connected device--easy!

```python
    # Now wait up to one minute to receive data from the device.
    print('Waiting up to 60 seconds to receive data from the device...')
    received = uart.read(timeout_sec=60)
    if received is not None:
        # Received data, print it out.
        print('Received: {0}'.format(received))
    else:
        # Timeout waiting for data, None is returned.
        print('Received no data!')
```

Data is received by calling the UART object's read function. Note that a timeout of 60 seconds is passed in, which tells the function to wait up to 60 seconds for data to be received. If something is received within that time then it will be returned, otherwise the None result is returned, which signifies the timeout elapsed before data was received. If you don't want any timeout just call read with no extra parameters; the default timeout value of 0 will be used, which means it does a quick check for any received data and instantly returns.

In the background the UART object uses a queue to keep track of data received from the UART device, so you don't need to worry about constantly calling read and potentially missing data. Just make sure to call read when you're ready to get data.

```python
finally:
    # Make sure device is disconnected on exit.
    device.disconnect()
```

The very last part of the function is the finally block that disconnects the device. It's good practice to make sure the device is disconnected at the end of the program execution by putting it in a finally block.

Note that on some platforms like Linux, if you stop a program with Ctrl-C it might not actually invoke this finally block. The reason is that the main thread is being used to process GUI events; as a side effect Ctrl-C will stop the main thread and the entire program, so your finally block in a background thread won't run. You can work around this by using Python's `atexit` module to register a cleanup function that's called when your program ends. See the list_uarts.py example for a demonstration of atexit for cleanup. In general it's best not to rely on Ctrl-C to stop a program using the library; instead use an explicit action like input from a user to end the program, or just run for a specific amount of time. Also be aware Ctrl-C to exit can be slow on OSX, perhaps taking 30 seconds or more to shut down the main loop (and even throwing crash errors on Mac OSX Yosemite).

That's all there is to using the Bluefruit LE Python library to talk to a BLE UART device! The library works great for connecting to a BLE UART to read sensor data, send it actions, etc.--the sky is the limit with what you can do using a Bluefruit LE device and your computer!
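As a final aside, the atexit workaround mentioned above might look like the following (a sketch, not code lifted from the library's examples; only the `UART.disconnect_devices()` call is documented behavior):

```python
import atexit

import Adafruit_BluefruitLE
from Adafruit_BluefruitLE.services import UART

ble = Adafruit_BluefruitLE.get_provider()

def cleanup():
    # Runs at interpreter shutdown, even if Ctrl-C killed the main loop,
    # so connected UART devices don't stay attached to the adapter.
    UART.disconnect_devices()

atexit.register(cleanup)
```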
{"Source-Url": "https://cdn-learn.adafruit.com/downloads/pdf/bluefruit-le-python-library.pdf", "len_cl100k_base": 5302, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 24478, "total-output-tokens": 5922, "length": "2e12", "weborganizer": {"__label__adult": 0.000492095947265625, "__label__art_design": 0.00029587745666503906, "__label__crime_law": 0.00025534629821777344, "__label__education_jobs": 0.00016963481903076172, "__label__entertainment": 8.404254913330078e-05, "__label__fashion_beauty": 0.0001982450485229492, "__label__finance_business": 7.05718994140625e-05, "__label__food_dining": 0.0005097389221191406, "__label__games": 0.0007567405700683594, "__label__hardware": 0.01148223876953125, "__label__health": 0.0003190040588378906, "__label__history": 0.00014281272888183594, "__label__home_hobbies": 0.0001685619354248047, "__label__industrial": 0.0005540847778320312, "__label__literature": 0.00011914968490600586, "__label__politics": 0.00014984607696533203, "__label__religion": 0.0005497932434082031, "__label__science_tech": 0.0084991455078125, "__label__social_life": 7.218122482299805e-05, "__label__software": 0.010498046875, "__label__software_dev": 0.96337890625, "__label__sports_fitness": 0.0003769397735595703, "__label__transportation": 0.0005497932434082031, "__label__travel": 0.0001957416534423828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23862, 0.00621]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23862, 0.32284]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23862, 0.92731]], "google_gemma-3-12b-it_contains_pii": [[0, 152, false], [152, 517, null], [517, 2117, null], [2117, 4146, null], [4146, 5716, null], [5716, 6853, null], [6853, 9099, null], [9099, 11248, null], [11248, 13499, null], [13499, 16212, null], [16212, 18426, null], [18426, 20704, null], [20704, 23111, null], [23111, 23862, null]], "google_gemma-3-12b-it_is_public_document": [[0, 152, true], [152, 517, null], [517, 2117, null], [2117, 4146, null], [4146, 5716, null], [5716, 6853, null], [6853, 9099, null], [9099, 11248, null], [11248, 13499, null], [13499, 16212, null], [16212, 18426, null], [18426, 20704, null], [20704, 23111, null], [23111, 23862, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23862, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23862, null]], "pdf_page_numbers": [[0, 152, 1], [152, 517, 2], [517, 2117, 3], [2117, 4146, 4], [4146, 5716, 5], [5716, 6853, 6], [6853, 9099, 7], [9099, 11248, 8], [11248, 13499, 9], [13499, 16212, 10], [16212, 18426, 11], [18426, 20704, 12], [20704, 23111, 13], [23111, 23862, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23862, 0.04955]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
7d6f5f372798079207c0bf399bf7df697ece74bd
---

**Advanced Computer Graphics: Intersection Acceleration**

Matthias Teschner, Computer Science Department, University of Freiburg

Outline:
- introduction
- bounding volume hierarchies
- uniform grids
- kd-trees
- octrees
- implementation

**Motivation**
- a large number of rays has to be checked for intersection with a large number of objects / shapes / primitives
- in 1968, Appel's approach spent up to 95% of the computation time on intersection computations
- use spatial data structures to accelerate the intersection computation
- minimize the number of potential intersection candidates by efficiently rejecting large parts of the scene geometry

**Spatial Data Structures**
- bounding volumes: spheres, axis-aligned bounding boxes, object-oriented bounding boxes, k-DOPs
- bounding volume hierarchies
- space subdivision: uniform grids, octrees, kd-trees, BSP trees

**Spatial Data Structures - Examples**
- space subdivision with a uniform grid (space oriented)
- bounding volume hierarchy with spheres (object oriented) [Allen Chang]

**Spatial Data Structures - Efficiency**
- determined by generation, query and update
- generation is usually a pre-processing step
- query implements a traversal of the data structure to compute the first intersection of the scene with a ray
- update is only relevant for dynamic scenes
- static scenes: efficiency dominated by the query
- dynamic scenes: efficiency is determined by query and update

**Spatial Data Structures - Applications**
- e.g., ray tracing, collision detection in animation, neighborhood search in particle-based fluid animation
- generally, objects or primitives are represented in a spatial data structure to accelerate a certain query
- the generation of a particular spatial data structure is quite independent from the actual application
- the query of the data structure differs:
  - first ray-primitive intersection in a ray tracer
  - all primitive-primitive intersections for collision detection
  - all particles within a certain radius of a particle for fluid animation

**Spatial Data Structures - Implementation**
- depending on the query, different implementations can be appropriate
- e.g. for uniform grids: explicit representation, spatial hashing, compact hashing, index sort, z-index sort
- e.g. for bounding volume hierarchies: priority queues, skip lists

Outline:
- introduction
- **bounding volume hierarchies**
- uniform grids
- kd-trees
- octrees
- implementation

**Bounding Volumes**
- simple geometric shapes that enclose complex geometry
- if the simple geometry is not hit by the ray, then the complex geometry is also not hit by the ray
- for efficiency: the intersection test should be fast and the bounding volume should be tight-fitting
  - a) → d): improved fitting
  - d) → a): improved computation time
- a) sphere, b) axis-aligned box, c) object-oriented box, d) slabs / k-DOP (discrete orientation polytope)

**Bounding Volumes**
- intersection: if a ray hits all bounding volumes, the enclosed geometry is tested
- union: if a ray hits one of the bounding volumes, the geometry is tested [Allen Chang]

**Bounding Volume Hierarchies**
- tree with a root node
- each node represents a bounding volume
- the union of all bounding volumes in a tree level encloses the entire geometry
- parts of the geometry (primitives) are stored in leaf nodes [Allen Chang]

**Bounding Volume Hierarchies - Construction**
- degrees of freedom: top down / bottom up; number of children (branching factor)
- goals: balanced tree, minimal surface of bounding volumes per level, minimal overlap of bounding volumes per level
- strategies: group primitives close to each other (using sorted primitives); split primitives into groups with similar numbers of primitives

**Bounding Volume Hierarchy Ray Traversal - Priority Queue / Heap**

B is the bounding volume of the root node; the heap is a min-heap / priority list.

    heap = empty; intersection = inf;
    if B.intersection(ray) < intersection then heap.add(B);
    while heap.notEmpty() and heap.min.intersection(ray) < intersection do
        cand = heap.min;
        heap.remove(heap.min);
        if cand.leaf() then
            intersection = cand.minIntersection(ray);
        else
            foreach cand.child do
                if cand.child.intersection(ray) < intersection then
                    heap.add(cand.child);
    return intersection;

depth-first, near-to-far

**Bounding Volume Hierarchy - Ray Traversal**
- if bounding volumes in one layer overlap, the closest intersection of a ray with a primitive is not necessarily in the bounding volume that is entered first [Kay, Kajiya]

**Bounding Volume Hierarchy Ray Traversal - Priority Queue / Heap**
- if an intersection has been found within bounding volume 2 and the bounding volumes in this level of the hierarchy do not overlap, bounding volume 3 is not processed; if the bounding volumes in this level overlap, bounding volume 3 still has to be processed [Allen Chang]
- updating the heap is in $O(\log n)$; the tree structure of the heap can be linearly represented with an array
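As a concrete illustration of this heap-based traversal, here is a sketch in Python (the node interface — `intersect`, `is_leaf`, `children`, `min_intersection` — is an assumption of mine, not prescribed by the slides):

```python
import heapq
import itertools

def closest_hit(root, ray):
    """Near-to-far BVH traversal with a min-heap keyed on the ray's
    entry distance into each bounding volume."""
    tie = itertools.count()      # tie-breaker so nodes are never compared
    best = float('inf')          # closest primitive intersection so far
    heap = []
    t = root.intersect(ray)      # entry parameter, or None on a miss
    if t is not None and t < best:
        heapq.heappush(heap, (t, next(tie), root))
    while heap and heap[0][0] < best:
        _, _, node = heapq.heappop(heap)
        if node.is_leaf():
            best = min(best, node.min_intersection(ray))
        else:
            for child in node.children:
                tc = child.intersect(ray)
                if tc is not None and tc < best:
                    heapq.heappush(heap, (tc, next(tie), child))
    return best                  # inf if the ray hits nothing
```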
**Bounding Volume Hierarchy Ray Traversal - Skip List**
- if (1) is hit, proceed with (2), otherwise proceed with (null)
- if (2) is hit, proceed with (4), otherwise proceed with (3) [Allen Chang]
- (a) bounding volume hierarchy, (b) skip list tree, (c) skip list traversal [Allen Chang]

Outline:
- introduction
- bounding volume hierarchies
- **uniform grids**
- kd-trees
- octrees
- implementation

**Uniform Grids - Construction**
- subdivide 3D space into cubes / cuboids
- associate references to primitives that intersect a cell with this cell
- if a ray hits a cell, only the few primitives inside this cell are tested for intersection
- parameters: the cell size is essential for the performance
  - small cells: primitives are spread over many cells
  - large cells: too many primitives in one cell
- number of cells often proportional to the number of primitives: \( M = \rho N \) with \( 2 \leq \rho \leq 10 \)

**Uniform Grids - Variants**
- basic concept: references to primitives are stored in cells
- to accelerate the traversal, empty cells can be combined to larger areas (\( 2^i \times 2^i \))
- several grids with varying resolution can be used in parallel
- grids are commonly combined with bounding volumes to simplify the primitive-cell mapping [Allen Chang]

**Uniform Grids - Traversal**
- unfortunately not as efficient as Bresenham's line algorithm [Allen Chang]

**Uniform Grids - Traversal**
- \( \partial x, \partial y, \partial z \): parametric distance along the ray between two grid planes perpendicular to x, y, z (infinite, if a ray is parallel to a principal axis of the grid)
- \( dx, dy, dz \): parametric value of the ray at the next intersection with a grid plane perpendicular to x, y, z (infinite, if a ray is parallel to a principal axis of the grid)
- \( i, j, k \): indices of the current grid cell
- \( p_x, p_y, p_z \in \{-1, 1\} \): increments of cell indices if a grid plane is intersected

Simplified 2D algorithm (e.g. with \( p_x = +1 \), \( p_y = -1 \)):

    initialize ∂x, ∂y, dx, dy, i, j
    repeat
        if dx ≤ dy then begin
            i := i + px
            dx := dx + ∂x
        end
        else begin
            j := j + py
            dy := dy + ∂y
        end
    until intersection in cell i, j
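To make this simplified 2D traversal concrete, here is a runnable sketch in Python (the function name, parameters and the generator interface are mine, not the slides'):

```python
import math

def grid_traverse_2d(origin, direction, cell_size, max_steps=256):
    """Amanatides/Woo-style 2D grid walk, following the algorithm
    above.  Yields the (i, j) cells the ray passes through."""
    i = int(math.floor(origin[0] / cell_size))
    j = int(math.floor(origin[1] / cell_size))
    px = 1 if direction[0] >= 0 else -1     # cell index increments
    py = 1 if direction[1] >= 0 else -1
    # Parametric distance between grid planes per axis (inf if parallel).
    ddx = abs(cell_size / direction[0]) if direction[0] != 0 else math.inf
    ddy = abs(cell_size / direction[1]) if direction[1] != 0 else math.inf
    # Parametric value of the first plane crossing per axis.
    nx = (i + (px > 0)) * cell_size
    ny = (j + (py > 0)) * cell_size
    dx = (nx - origin[0]) / direction[0] if direction[0] != 0 else math.inf
    dy = (ny - origin[1]) / direction[1] if direction[1] != 0 else math.inf
    for _ in range(max_steps):
        yield i, j
        if dx <= dy:
            i += px
            dx += ddx
        else:
            j += py
            dy += ddy
```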
**Uniform Grids - Traversal - Initialization**
- \( dx, dy, dz \) have to be initialized with the first intersections of a ray with planes perpendicular to x, y, z inside the grid [Cleary]

**Uniform Grids - Mailboxing**
- avoids multiple tests of one ray with the same object that is associated with different cells
- each object stores a reference to the last ray that has been tested
- not necessarily useful for cheap intersection tests
- example: ray k is tested with C, and a reference to k is stored with C; ray k is tested with A and B (it is not tested with C), and a reference to k is stored with A; later, ray k is not tested with A or B again, as A and B store a reference to k [Cleary]

**Uniform Grids - Robust Traversal**
- standard traversal discards intersections outside the currently visited cell: the object is partially inside the cell, thus it is associated with the cell, but the actual intersection might be outside the cell
- standard traversal can miss intersections close to the cell border due to numerical issues; an \( \varepsilon \)-band might be used
- robust grid traversal: keep the closest intersection for a cell, even if the intersection is outside this cell; terminate ray traversal if the maximum ray parameter of a cell is larger than the ray parameter of the closest intersection

Outline:
- introduction
- bounding volume hierarchies
- uniform grids
- **kd-trees**
- octrees
- implementation

**kd-Trees - Construction**
- recursively subdivide 3D space into half spaces using planes
- planes are perpendicular to the x-, y-, z-axes
- steps: subdivide space with respect to the x-axis; subdivide the resulting subspaces with respect to y; continue with z, x, y, z, x, y, ... until there is only a small number of primitives in each subspace
- balanced tree \( \Rightarrow \) minimized depth [Allen Chang]

**kd-Trees - Traversal**
- check the global bounding box for intersection
- compute the intersection with the plane represented in the root node
- recursively traverse the two children in front-to-back order
- if a node is a leaf, check the primitives
- stop if a ray-primitive intersection has been found or no further spaces have to be processed [Pharr, Humphreys]

**kd-Trees - Traversal**
- given a position on the ray, the kd-tree can be traversed to decide the processing order of the subspaces
- if the intersection with a splitting plane is outside the considered cuboid, the processing of half spaces can be accelerated
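A minimal sketch of this construction scheme in Python (cycling the split axis and splitting at the spatial median; primitives are partitioned by an assumed `centroid` attribute here for simplicity, whereas real builders also handle primitives straddling the plane):

```python
def build_kdtree(primitives, lo, hi, depth=0, max_prims=8):
    """Recursive kd-tree construction.  lo, hi: corners of the current
    cell; p.centroid is an assumed (x, y, z) primitive attribute."""
    if len(primitives) <= max_prims or depth > 32:
        return {'leaf': True, 'prims': primitives}
    axis = depth % 3                      # x, y, z, x, y, z, ...
    split = 0.5 * (lo[axis] + hi[axis])   # spatial median split plane
    left  = [p for p in primitives if p.centroid[axis] <= split]
    right = [p for p in primitives if p.centroid[axis] >  split]
    if not left or not right:             # degenerate split: stop here
        return {'leaf': True, 'prims': primitives}
    lhi = list(hi); lhi[axis] = split     # shrink the child cells
    rlo = list(lo); rlo[axis] = split
    return {'leaf': False, 'axis': axis, 'split': split,
            'left':  build_kdtree(left,  lo,  lhi, depth + 1, max_prims),
            'right': build_kdtree(right, rlo, hi,  depth + 1, max_prims)}
```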
Outline:
- introduction
- bounding volume hierarchies
- uniform grids
- kd-trees
- **octrees**
- implementation

**Octrees - Construction**
- start with a global bounding box
- recursively subdivide the box using three planes into eight sub-boxes until each sub-box contains less than a certain number of primitives [Allen Chang]

**Octrees - Traversal**
- e.g., if the global bounding box is hit, process the octree in a BVH manner
- as sub-boxes in the same layer do not overlap, processing can be stopped if an intersection has been found and if the processing order of sub-boxes is determined with a priority list

Outline:
- introduction
- bounding volume hierarchies
- uniform grids
- kd-trees
- octrees
- **implementation**

**Uniform Grid - Linked List and Dynamic Array**

2D grid and linearized representation.

Linked list:
- minimal memory overhead
- bad locality
- frequent memory allocations
- single primitives can be inserted

Dynamic array:
- keeps track of capacity and size
- less memory efficient
- improved data locality
- minimized memory allocations
- single primitives can be inserted [Lagae]

**Uniform Grid - Compact Grid**

2D grid and linearized representation.

Compact grid:
- two arrays
- useful for dynamic scenes
- generation rather expensive
- generation + traversal more efficient than linked lists or dynamic arrays
- single primitives cannot be inserted or deleted [Lagae]

**Uniform Grid - Compact Grid - Generation**
- generate C:
  - store the number of primitives in each cell of C: loop over all primitives and increment the respective value in C
  - accumulate the values in C
- generate L:
  - associate primitive i with cell j: `L[--C[j]] = i`
  - this stores the primitives in reversed order into L
  - after insertion, C contains the correct offsets [Lagae]
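This compact grid generation is a counting sort; a Python sketch follows (the array names C and L follow the slides, the function signature is mine):

```python
def build_compact_grid(num_cells, primitive_cells):
    """Sketch of compact grid construction.  primitive_cells[i] lists
    the cell indices primitive i overlaps.  Returns (C, L) where
    cell j's primitive references are L[C[j]:C[j+1]]."""
    # Pass 1: count references per cell.
    C = [0] * (num_cells + 1)
    for cells in primitive_cells:
        for j in cells:
            C[j] += 1
    # Pass 2: accumulate counts into end offsets (prefix sum).
    for j in range(1, num_cells + 1):
        C[j] += C[j - 1]
    # Pass 3: scatter primitives; L[--C[j]] = i stores them in reverse
    # order, and afterwards C holds the correct start offsets.
    L = [0] * C[num_cells]
    for i, cells in enumerate(primitive_cells):
        for j in cells:
            C[j] -= 1
            L[C[j]] = i
    return C, L
```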
**Uniform Grid - Hashed Grid**
- motivated by a large number of empty cells
- only stores filled grid cells in a hash table

Generation:
- process all primitives
- compute the cell id
- evaluate the hash function for the cell id
- place the primitive into the hash table (use a dynamic array per hash table entry)

Traversal:
- compute the cell id that is intersected by a ray
- evaluate the hash function for the cell id
- look up all relevant primitives from the hash table

**Uniform Grid - Hashed Grid**
- to avoid hash collisions, perfect hashing can be used, e.g. row displacement compression:
  - identify occupied grid cells
  - compute an offset for each row (O) to store all rows with non-overlapping occupied cells in a hash table (H)
  - the index into the hash table is \( O[\text{row}] + \text{column} \) [Lagae]

**Uniform Grid - Performance**

Scene statistics:

| | Thai Statue | Lucy | Nature |
|---|---|---|---|
| # tri's | 10.00 M | 28.06 M | 41.35 M |
| memory | 343.32 MB | 963.22 MB | 1.39 GB |

Grid statistics:

| | Thai Statue | Lucy | Nature |
|---|---|---|---|
| grid res | 302x508x261 | 485x278x832 | 906x202x902 |
| # cells | 40.04 M | 112.18 M | 165.08 M |
| % empty cells | 98.44 % | 99.00 % | 92.04 % |
| avg # tri's / non-empty cell | 29.25 | 41.50 | 28.52 |
| avg # cells / tri | 1.83 | 1.66 | 9.06 |

Render statistics:

| | Thai Statue | Lucy | Nature |
|---|---|---|---|
| avg # non-empty cells / isect ray | 2.38 | 2.06 | 12.12 |
| avg # isect tests / isect ray | 73.91 | 99.28 | 302.23 |

**Uniform Grid - Performance** (1024 x 1024 x 1 rays, diffuse shading)

Compact grid statistics:

| | Thai Statue | Lucy | Nature |
|---|---|---|---|
| mem cells | 152.75 MB | 427.93 MB | 629.72 MB |
| mem tri lists | 69.78 MB | 178.06 MB | 1.40 GB |
| build time | 1.17 s | 3.15 s | 9.12 s |
| render time | 1.55 s | 1.90 s | 10.75 s |
| time to image | 2.72 s | 5.05 s | 19.87 s |
| memory | 222.53 MB | 605.98 MB | 2.01 GB |

Hashed grid statistics:

| | Thai Statue | Lucy | Nature |
|---|---|---|---|
| data density | 1.56 % | 1.00 % | 7.96 % |
| hash table size | 967.47 K | 1.76 M | 28.82 M |
| hash table load factor | 64.64 % | 63.76 % | 45.57 % |
| mem domain bits | 4.77 MB | 13.37 MB | 19.68 MB |
| mem offset table | 517.92 KB | 903.50 KB | 711.73 KB |
| mem hash table | 3.69 MB | 6.73 MB | 109.94 MB |
| mem cells | 8.97 MB | 20.98 MB | 130.31 MB |
| compression ratio | 1700 % | 2040 % | 483.24 % |
| mem tri lists | 69.78 MB | 178.06 MB | 1.40 GB |
| build time | 1.76 s | 4.77 s | 21.23 s |
| render time | 1.43 s | 1.53 s | 10.07 s |
| time to image | 3.18 s | 6.30 s | 31.30 s |
| memory | 78.75 MB | 199.04 MB | 1.52 GB |

**Summary**
- spatial data structures minimize the number of potential intersection candidates by efficiently rejecting large parts of the scene geometry
- bounding volumes
- bounding-volume hierarchies
- uniform grids
- kd-trees
- octrees
- the actual implementation of the data structure significantly influences the performance
{"Source-Url": "https://cg.informatik.uni-freiburg.de/course_notes/graphics2_05_intersectionAcceleration.pdf", "len_cl100k_base": 4247, "olmocr-version": "0.1.53", "pdf-total-pages": 43, "total-fallback-pages": 0, "total-input-tokens": 62189, "total-output-tokens": 5695, "length": "2e12", "weborganizer": {"__label__adult": 0.00046133995056152344, "__label__art_design": 0.0016536712646484375, "__label__crime_law": 0.0005178451538085938, "__label__education_jobs": 0.001190185546875, "__label__entertainment": 0.00014793872833251953, "__label__fashion_beauty": 0.0002294778823852539, "__label__finance_business": 0.0002548694610595703, "__label__food_dining": 0.00034499168395996094, "__label__games": 0.0013341903686523438, "__label__hardware": 0.0024929046630859375, "__label__health": 0.0005884170532226562, "__label__history": 0.0005030632019042969, "__label__home_hobbies": 0.00012934207916259766, "__label__industrial": 0.0006351470947265625, "__label__literature": 0.0003027915954589844, "__label__politics": 0.0002377033233642578, "__label__religion": 0.0006465911865234375, "__label__science_tech": 0.11090087890625, "__label__social_life": 0.00010150671005249023, "__label__software": 0.01401519775390625, "__label__software_dev": 0.8623046875, "__label__sports_fitness": 0.0003597736358642578, "__label__transportation": 0.0006122589111328125, "__label__travel": 0.0002770423889160156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15407, 0.01797]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15407, 0.27484]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15407, 0.77904]], "google_gemma-3-12b-it_contains_pii": [[0, 125, false], [125, 233, null], [233, 647, null], [647, 889, null], [889, 1056, null], [1056, 1462, null], [1462, 2061, null], [2061, 2373, null], [2373, 2481, null], [2481, 2933, null], [2933, 3132, null], [3132, 3384, null], [3384, 3796, null], [3796, 4409, null], [4409, 4626, null], [4626, 4998, null], [4998, 5174, null], [5174, 5366, null], [5366, 5507, null], [5507, 5615, null], [5615, 6142, null], [6142, 6493, null], [6493, 6598, null], [6598, 7645, null], [7645, 7826, null], [7826, 8301, null], [8301, 8923, null], [8923, 9031, null], [9031, 9448, null], [9448, 9818, null], [9818, 10081, null], [10081, 10189, null], [10189, 10404, null], [10404, 10693, null], [10693, 10801, null], [10801, 11181, null], [11181, 11469, null], [11469, 11862, null], [11862, 12343, null], [12343, 12688, null], [12688, 13538, null], [13538, 15069, null], [15069, 15407, null]], "google_gemma-3-12b-it_is_public_document": [[0, 125, true], [125, 233, null], [233, 647, null], [647, 889, null], [889, 1056, null], [1056, 1462, null], [1462, 2061, null], [2061, 2373, null], [2373, 2481, null], [2481, 2933, null], [2933, 3132, null], [3132, 3384, null], [3384, 3796, null], [3796, 4409, null], [4409, 4626, null], [4626, 4998, null], [4998, 5174, null], [5174, 5366, null], [5366, 5507, null], [5507, 5615, null], [5615, 6142, null], [6142, 6493, null], [6493, 6598, null], [6598, 7645, null], [7645, 7826, null], [7826, 8301, null], [8301, 8923, null], [8923, 9031, null], [9031, 9448, null], [9448, 9818, null], [9818, 10081, null], [10081, 10189, null], [10189, 10404, null], [10404, 10693, null], [10693, 10801, null], [10801, 11181, null], [11181, 11469, null], [11469, 11862, null], [11862, 12343, null], [12343, 12688, null], [12688, 13538, null], [13538, 15069, null], [15069, 
15407, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15407, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15407, null]], "pdf_page_numbers": [[0, 125, 1], [125, 233, 2], [233, 647, 3], [647, 889, 4], [889, 1056, 5], [1056, 1462, 6], [1462, 2061, 7], [2061, 2373, 8], [2373, 2481, 9], [2481, 2933, 10], [2933, 3132, 11], [3132, 3384, 12], [3384, 3796, 13], [3796, 4409, 14], [4409, 4626, 15], [4626, 4998, 16], [4998, 5174, 17], [5174, 5366, 18], [5366, 5507, 19], [5507, 5615, 20], [5615, 6142, 21], [6142, 6493, 22], [6493, 6598, 23], [6598, 7645, 24], [7645, 7826, 25], [7826, 8301, 26], [8301, 8923, 27], [8923, 9031, 28], [9031, 9448, 29], [9448, 9818, 30], [9818, 10081, 31], [10081, 10189, 32], [10189, 10404, 33], [10404, 10693, 34], [10693, 10801, 35], [10801, 11181, 36], [11181, 11469, 37], [11469, 11862, 38], [11862, 12343, 39], [12343, 12688, 40], [12688, 13538, 41], [13538, 15069, 42], [15069, 15407, 43]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15407, 0.1027]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
26b9d64b073dbf86eb97287a8cee72771e735fdb
---

iXen: context-driven service oriented architecture for the internet of things in the cloud

Xenofon Koundourakis, Euripides G.M. Petrakis
School of Electrical and Computer Engineering, Technical University of Crete (TUC), Chania, Crete, Greece

Abstract

iXen's ambition is to overcome the limits of existing IoT platforms in the cloud and deal with challenges of security and interoperability. Therefore, iXen is interoperable and expandable (i.e. services can be added or removed) while being secure by design: access to services is granted only to authorized users (or other services) based on user roles and access policies. Leveraging principles of Service Oriented Architectures (SOA) and the most recent EU standards for context information management, iXen is implemented as a composition of RESTful micro-services in the cloud. iXen adopts a 3-tier architecture design model. The first layer supports connectivity of the vast diversity of IoT devices with the cloud. The second (middle) layer implements IoT data functionality including database, security and context management services, allowing devices to publish information and users (or other services) subscribed to devices to get notified about the availability of this information. Flow-based programming services in the middle layer allow fast development of new applications by wiring together IoT devices and services. The third layer makes IoT applications available to customers based on subscriptions. The experimental analysis shows that iXen responds in real time to complex service requests under heavy workloads.

Keywords: cloud computing; IoT architecture; service oriented architecture; micro-services; context management

1. Introduction

Securing IoT infrastructures is a challenging task. Potential risks and counter-measures for dealing with security have been identified by the industry, by regulatory entities and in the literature [2]. The principles of Security by Design and Security and Privacy by Default [3] must be applied from the design phase of a system. The cloud infrastructure is exposed to risks due to unauthorized attempts to access services. These are handled successfully with the aid of traditional security methods (e.g. encryption, authorization, auditing). However, an IoT system is also vulnerable due to malicious devices operating at the edge of the network. The security mechanisms for the cloud must be complemented with trust evaluation methods for dealing with these risks as well [4]. This creates new challenges for dealing with the cause and point of system failure if security fails [5]. The Industrial Internet Consortium (IIC) [6] emphasizes the need for monitoring devices, networks, applications and the cloud. Solutions for malicious behavior or malfunction detection suggest continuous monitoring of the state of IoT nodes or of the cloud components, periodic monitoring of system logs, or all of the above. iXen focuses on securing the cloud infrastructure from unauthorized access to services and data. Securing the IoT network is outside the scope of this paper.

iXen's ambition is to overcome the limits of existing IoT architectures in the cloud and deal with challenges of security, openness and interoperability. The iXen architecture is highly configurable and modular and supports the generation of fully customizable applications by re-using services and devices. Leveraging flow-based programming [7], new applications can be generated with the aid of user-friendly interfaces.
The interest of a developer in sensors and services for composing a new application is expressed by means of queries specifying the desired device and service properties. iXen services are re-usable, implement fundamental functionality and offer a public interface allowing secure connections with other services (even third-party ones). Therefore, iXen is interoperable and expandable (i.e. services can be added or removed) while being secure by design: all services are protected by an OAuth2.0 (https://oauth.net/2/) mechanism. Access to services is granted only to authorized users (or authorized services) based on user roles and access policies. This mechanism is realized as a synthesis of security micro-services which are both generic and re-usable (i.e. the same mechanism is applied for securing all services offered by the platform).

iXen features an elaborate 3-tier architecture design model. Each layer implements functionality addressing the needs of a different user type, namely infrastructure owners (i.e. device owners), application owners (i.e. application developers) and customers (who subscribe to applications). The same user may have more than one role in iXen. Infrastructure owners are entitled to install and make devices available to application owners who, in turn, subscribe to devices in order to create applications; finally, customers (i.e. end-users) subscribe to applications.

The first level of the architecture allows devices to connect to iXen in the cloud. Captured data from devices are encrypted and streamed to the cloud. iXen is capable of handling large collections of devices of any type. This is the only part of the system which is affected by the property of a device (e.g. a sensor) to use a specific IoT protocol (e.g. Bluetooth, Zigbee). The rest of the system is sensor agnostic (i.e. data are processed in JSON, which is a sensor-agnostic format). The second (middle) layer implements advanced data processing functionality including database, security, flow-based programming for creating applications and event-driven publish-subscribe (i.e. context) services, allowing devices to publish information and users (or other services) to be notified when this information becomes available (i.e. only subscribed users or services get notified). The third layer makes applications available to customers based on subscriptions. Devices and applications are easily discoverable by means of user-friendly query mechanisms (a feature which is of particular interest for large-scale IoT systems).

Fig. 1 illustrates an example 3-tier system structure, the physical entities and their interaction: four devices in layer 1 connect to gateways and from there to the cloud; the application owner in layer 2 makes three applications available to customers in layer 3; the customer in layer 3 subscribes to one application. Leveraging this 3-layer design, iXen is ready to incorporate a business logic (e.g. billing policies) for different types of users and become self-sustainable. All users may benefit from their participation in iXen based on their offerings or be charged based on a pay-per-use cost model (left as future work). iXen is a research prototype and, as such, is not intended to compete with existing commercial platforms in terms of service offerings or performance, but rather to show how a cost-effective and self-sustainable IoT eco-system can be designed based on principles of SOA design and cloud micro-services, using well-established, open-source technologies.
iXen design relies on the most recent EU standards for context information management and IoT systems design (https://ec.europa.eu/cefdigital/wiki/display/CEFDIGITAL/EU+Standards). The iXen prototype is implemented in OpenStack and Fiware (https://www.fiware.org), the open-source distributed cloud infrastructure of the EU. iXen is implemented as a composition of modular cloud micro-services implementing fundamental functionalities and communicating with each other using REST. More services can be added on demand, and any service can be replaced or moved to a different VM (on the same or a different cloud) with minimum overhead (i.e. only the IP of the service will change).

In regards to similar work in the literature, the iXen concept resembles the DIAT model [8] for IoT architectures addressing the challenges of interoperability and scalability. It encompasses a usage control policy model to support security and privacy in a distributed environment. DIAT is a layered model architecture with three different component layers (referred to as Virtual Object, Composite Virtual Object and Service layer) similar to SOA. The work is positively evaluated as a model, but it is neither a cloud nor a service oriented architecture; it is not accompanied by an implementation and its performance has not been assessed in a real setting.

We ran an exhaustive set of experiments using real and simulated (but realistic) data aiming to evaluate both iXen response time and scalability. We stressed iXen with high data streams and many simultaneous requests. The experimental analysis shows that iXen is capable of responding in real time under heavy workloads (i.e. many users applying several requests per second).

Issues related to iXen design and implementation are discussed in Sec. 2, followed by an analysis of performance in Sec. 3. Conclusions and issues for future work are discussed in Sec. 4.

2. Design and architecture

We followed a valid design approach [9] that identified functional and non-functional system requirements and specifically, (a) functional components and their interaction, (b) information that is managed and how it is acquired, transmitted, stored and analysed, (c) different types of users and how they interact with the system, (d) requirements for assuring data, network and user security and privacy. Detail on system design (including a full set of use case, activity and deployment UML diagrams) can be found in [10].

2.1. User groups

Each user belongs to a user class. Each user class is assigned a role encoding authorization to access other services. The following user groups and the functional requirements associated with each group are identified:

Systems administrators: they configure, maintain and monitor the cloud. Besides their competence in providing cloud services, they are responsible for performing Create, Read, Update, Delete (CRUD) operations on (a) users (e.g. they can register new users to the system and define their access rights) and (b) devices (e.g. they can register new devices to the system). They are responsible for monitoring system operations at all times.

Infrastructure owners: they subscribe to the cloud for a fee and are granted permission (by the cloud administrator) to register, configure, monitor or remove devices in their possession.

Application owners: they subscribe to the cloud and to a set of devices for a fee. Once subscribed to devices they can create applications by means of flow-based programming.
iXen provides query mechanisms for selecting devices of interest using device properties such as device type, location, purpose etc. An application is defined by wiring together the outputs of selected devices.

Customers: they subscribe to applications for a fee. Once subscribed to an application they are granted access to the application over the Web. iXen provides query mechanisms for selecting applications available for subscription based on criteria such as location, functionality etc. Customers are granted only access rights to applications.

2.2. iXen architecture

iXen is designed as a composition of autonomous RESTful micro-services communicating with each other over HTTP. They are organized in groups of services. Network delays are expected due to the nature of this design; however, as shown in Sec. 3, iXen is capable of responding in real time under heavy workloads. Fig. 2 illustrates the iXen architecture. In the following, groups of services implementing the same functionality are discussed together.

1) Sensor services: IoT devices are connected to iXen using the Sensor interface service. It collects data from gateways (where sensors are connected) using an IoT IP protocol (e.g. MQTT, CoAP). It is implemented using the IDAS backend device management service of Fiware. It is the only service which is affected by the property of devices to use a specific protocol. Following the Sensor interface service, data are communicated in NGSI, a data exchange format based on JSON. It is the standard of the EU for handling context information. It describes the information being exchanged and the entities involved (e.g. sensors that publish measurements and users or services that subscribe to this information). The Sensor interface service publishes IoT context information to the Publish-Subscribe service in NGSI format. Only devices registered to this service can publish data to iXen.

ORION Context Broker is a reference implementation of this service and the service standard of the EU for handling context information. The Publish-Subscribe service receives measurements from devices registered to the Sensor interface service and makes this information available to other services and users based on subscriptions. Sensors register to the Publish-Subscribe service as NGSI "public entities" and users or other services can subscribe to these entities to get notified on value changes or when new values become available. Each time a new sensor registers to iXen, a new entity is created in the Publish-Subscribe service. Each time a new sensor value becomes available, this component is updated and a notification is sent to entities subscribed to the sensor. The service holds the most recent values from all registered sensors (i.e. current values are stored in a non-SQL database). History (past) measurements are forwarded to the Data storage service and from there to the History database.
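For illustration, a subscription to such an entity could be created through ORION's NGSI v2 API roughly as follows (a sketch: the broker address, entity id, notification URL and token are placeholders, and the exact payload depends on the deployment):

```python
import requests

ORION = "http://orion.example.org:1026"   # placeholder broker address

subscription = {
    "description": "Notify on new temperature values of sensor001",
    "subject": {
        "entities": [{"id": "sensor001", "type": "Sensor"}],
        "condition": {"attrs": ["temperature"]},
    },
    "notification": {
        "http": {"url": "http://consumer.example.org/notify"},
        "attrs": ["temperature"],
    },
}

resp = requests.post(
    ORION + "/v2/subscriptions",
    json=subscription,
    # Token issued at login; checked by the security proxy in front
    # of the Publish-Subscribe service.
    headers={"X-Auth-Token": "<user-token>"},
)
resp.raise_for_status()
```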
2) Database services: iXen implements databases for devices, device data, users and applications. Access is facilitated by database interface services. Database and database interface services in Fig. 2 are illustrated in green color.

Publish-Subscribe storage holds (in NGSI format) published context and subscription information (e.g. devices that publish data, active subscriptions to devices) and descriptions of IoT devices along with their most recent measurements. It is implemented using MongoDB (which suits the semi-structured nature of this information better than a relational database). Requests addressing this information are issued by the Sensors query service using a (close to) natural language syntax involving custom data types (defined in iXen), attribute values and conditional operators (i.e. "and", "or", "not", "equals", "less", "greater than" etc.). Alternatively, query formulation is facilitated by a graphical user interface providing query forms, where the user is prompted to select properties and query operators. Before being submitted to the Publish-Subscribe service, queries are parsed and translated to equivalent MongoDB queries involving iXen data types using Mongo Query Generator (https://www.npmjs.com/package/mongo-query-generator). Table 1 shows the data types (i.e. for devices and their properties) to be used by the Sensors query service for hiding the complexity of MongoDB queries. The following query will retrieve temperature and humidity measurements acquired by weather sensors installed in the city of Chania:

    observes:temperature || observes:humidity && isModel == "Estimote beacon" && isInCity == "Chania"

The equivalent MongoDB query is:

    {$and: [{$or: [{"attribute.temperature": {$exists: true}},
                   {"attribute.humidity": {$exists: true}}]},
            {"Model": {$eq: "Estimote"}},
            {"City": {$eq: "Chania"}}]}

The Data storage service collects data flows (history values) from the Publish-Subscribe service. The time series created from the history of data are stored in the History database as (a) raw (unprocessed) values as received from devices and (b) aggregated (processed) values (i.e. statistics). More specifically, maximum, minimum and average values over predefined time intervals (e.g. every hour, day, week etc.) are stored. The Data storage service is implemented using Cygnus (https://cefdigital.eu/wiki/display/CEFDIGITAL/Cygnus), the EU standard for handling history of context data in NGSI format. The History database is implemented using MongoDB. The History query service provides a query interface to the History database: query requests are expressed using the syntax explained earlier and are translated to MongoDB queries.

Application storage is a non-relational database that holds information on applications available to customers for subscription. They are created by application owners using the Mashup service. Applications are stored in JSON in a non-relational database (i.e. MongoDB). Similar to the Sensors query service, the database can be searched by properties (i.e. using the data types of Table 1), by name or by owner. Alternatively, a list with all applications can be displayed (together with their descriptions) and the user is prompted to select applications for subscription.

The Users database is a relational (MySQL) database which holds users' login and authorization information (i.e. user profile data, roles, session information and session history). For each user, ownership and subscription information is also stored (i.e. customers subscribing to applications, application owners subscribing to sensors, infrastructure owners providing sensors for subscription). Before a user submits a service request, his/her role (i.e. a token corresponding to a role) is retrieved and attached in the header of the request. Subsequently, the token will be checked by the target service to verify that the user has the right to access the service (the mechanism is described in Paragraph 6).
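As an illustration of the translated query above, the same filter could be issued directly against the MongoDB backend with pymongo (the connection string, collection and attribute paths here are illustrative, not iXen's actual schema):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
entities = client["orion"]["entities"]

# The MongoDB query produced by Mongo Query Generator in the example.
query = {"$and": [
    {"$or": [{"attribute.temperature": {"$exists": True}},
             {"attribute.humidity":    {"$exists": True}}]},
    {"Model": {"$eq": "Estimote"}},
    {"City":  {"$eq": "Chania"}},
]}

for doc in entities.find(query):
    print(doc["_id"])
```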
3) Mashup services: application owners are entitled to create new applications. The Mashup service is realized with the aid of Node-RED (https://nodered.org), an open-source flow-based programming tool for the IoT that allows applications to be defined as a sequence of customizable templates selected from a list. Applications are defined as a sequence of four steps, namely Endpoint, Functionality, Calculations and Response. The name and IP address of the service being created, as well as the REST methods (notably GET, PUT, POST) for accessing the service, are declared in Endpoint. The application is defined as a composition of methods (i.e. functions) receiving inputs from specific devices, which are declared in Functionality. Calculations contains the implementation of the methods (i.e. the software) declared in Functionality. The methods implemented in Calculations provide current values and value statistics (i.e. average, minimum and maximum values over 1 hour, 24 hours, a week and a month). Finally, Response specifies a URL where the output will be forwarded (typically the address of an application on the Web). Each step forwards information to the next. The application is stored as a JSON entity in Application storage (i.e. MongoDB). Fig. 3 declares the Functionality of the IntelligenceLab application, which computes the maximum (over 24 hours) temperature values from sensors 1 and 2 and the minimum (over 24 hours) humidity values from sensors 3 and 4. In order to select the sensors to be used in an application, the user issues queries to the Sensors query service. The query is translated to MongoDB syntax and forwarded to Publish-Subscribe storage. Typically, an application will operate on the history of data produced by the selected sensors. The application of Fig. 3 will retrieve maximum and minimum values of temperature and humidity over the last 24 hours from the History database. The output will be generated in HTML/Javascript that runs on a Web interface to display the results using Google Charts.

4) Application logic: its purpose is to orchestrate, control and execute services running in the cloud. When a request is received (from a user or service), it is dispatched to the appropriate service. User requests are issued on the Web interface. First, a user logs in to iXen using a login name and password. The user is then assigned a role (by the cloud administrator) and receives a token encoding his/her access rights (i.e. authorization to access iXen services). This is a responsibility of the User identification and authorization service. Each time the application logic dispatches a request to another service, the token is attached to the header of the request. It is a responsibility of the security mechanism to approve (or reject) the request. In iXen, all public services are protected by a security mechanism (Paragraph 6).
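As a small illustration of this dispatching step (our sketch, not code from the paper; the service URL and the header name are assumptions, with OAuth2 tokens commonly carried in an X-Auth-Token or Authorization header):

```python
import requests

def dispatch(service_url: str, token: str, payload: dict) -> requests.Response:
    """Forward a user request to a protected iXen service,
    attaching the user's OAuth2 token to the request header."""
    headers = {"X-Auth-Token": token}   # assumed header name
    return requests.post(service_url, json=payload, headers=headers)

# Example: forward a subscription request to a hypothetical applications service.
resp = dispatch("http://apps.ixen.example/subscribe", token="<user token>",
                payload={"application": "IntelligenceLab"})
print(resp.status_code)
```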
5) Web application: users access the system using a Web interface. Application owners can issue requests (queries) for available devices and subscribe to selected devices, and customers can issue queries to select applications available for subscription.

6) Security services: they implement access control to services based on user roles and access policies. Initially, users register to iXen to receive a login name, a password and a role (i.e. customer, application owner, or infrastructure owner) encoding the user's access rights. This is a responsibility of the cloud administrator. Once a user is logged in, he/she is assigned an OAuth2 token encoding his/her identity. The token remains active during a session. A session is initialized at login and remains active during a time interval which is specified in advance. A new token is issued every time a new session is initiated (e.g. at the next user login). The respective user access rights are described by means of XACML (i.e. a vendor-neutral declarative access control policy language based on XML). Keyrock identity manager is an implementation of this service. For each user, a XACML file is stored in the Authorization Policy Decision Point (PDP) service. Services offering a public interface (i.e. typically SOA services) are protected by a security mechanism (i.e. they do not expose their interface to the Web without protection). Fig. 2 illustrates five protected services and their corresponding security services (in red color). This security mechanism is realized by means of the Policy Enforcement Proxy (PEP) service (https://fiware-pep-proxy.readthedocs.io/en/latest/). Each public service is protected by a separate PEP service (installed in the same VM as the service). It is a responsibility of this service to approve or reject a request to the protected service. Each user request is forwarded to the Application logic service, which dispatches the request to the appropriate service. The security process is carried out by applying the sequence of steps illustrated in Fig. 4 (left). The request comes with a token in its header. The PEP service checks whether the token is valid by sending a request to the User identification and authorization service. If the token is valid (and the session is active), User identification and authorization responds with the user's role. The PEP service forwards the user's role to the Authorization PDP service, which stores the XACML files for all users. Whether the user is authorized to access the protected service is determined by evaluating the XACML file. This process is carried out by the Authorization PDP service, which responds to the PEP service with a decision. If the request is approved, it is forwarded to the protected service. Not all services accept requests from users. There are also services which are accessible by other services only. They are distinguished from other protected services in that they are not directly connected to the Application logic. These services are protected by a security key, referred to as the master key. In this case, the PEP service stores the master key. Only requests with the correct key in their header can access the protected service. The mechanism is illustrated in Fig. 4 (right). In Fig. 2, the Sensor interface, Mashup and Data storage services are protected using a master key.

Fig. 4: Protecting a service with an OAuth2 token (left) and with a master key (right).
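The PEP check sequence of Fig. 4 (left) can be sketched as follows. This is an illustrative reconstruction, not FIWARE code; the two endpoint URLs are hypothetical placeholders for the User identification/authorization and Authorization PDP services:

```python
import requests

IDM_URL = "http://idm.ixen.example/validate"   # hypothetical identity manager endpoint
PDP_URL = "http://pdp.ixen.example/authorize"  # hypothetical PDP endpoint

def pep_check(request_headers: dict, protected_resource: str) -> bool:
    """Approve or reject a request to a protected service (Fig. 4, left)."""
    token = request_headers.get("X-Auth-Token")
    if token is None:
        return False
    # Step 1: validate the token and resolve the user's role.
    idm = requests.get(IDM_URL, headers={"X-Auth-Token": token})
    if idm.status_code != 200:
        return False                   # invalid token or expired session
    role = idm.json().get("role")
    # Step 2: ask the PDP to evaluate the user's XACML policy for this resource.
    pdp = requests.post(PDP_URL, json={"role": role, "resource": protected_resource})
    return pdp.status_code == 200 and pdp.json().get("decision") == "Permit"
```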
3. Performance Evaluation

iXen is deployed in 5 (small flavor) VMs. Each one has one processor (x86_64 architecture, 2,800 MHz), 2,048 MB RAM and a 20 GB hard drive, and runs Ubuntu 14.04 and an Apache HTTP server. The first VM runs the Publish-Subscribe, Sensors query and Sensor interface services. The second VM runs the Mashup, Application storage and User identification and authorization services. The third VM runs the History database and History query services. The fourth VM runs the Data storage and Application logic services and the Web application. The fifth VM runs the Authorization PDP service. Each service is protected by a dedicated PEP service installed in the same VM. There are 10 BLE Estimote (https://estimote.com) beacon sensors, each transmitting 100 temperature and humidity measurements per hour (24,000 per day in total). The sensors connect to a gateway (i.e. a mobile device) and from there, sensor measurements are transferred to the Sensor interface service in the cloud. The sensors are registered to the Publish-Subscribe service of iXen. The History database consists of two data sets, one with raw (i.e. unprocessed) measurements and one with statistical values (i.e. minimum, maximum and average values) taken every hour. In order to run a more realistic experiment, we created a much bigger dataset with measurements from 2,000 simulated sensors. Each simulated sensor produces pseudo-random measurements in the same value range and form as a real sensor. In this set-up (with all actual and simulated sensors in place), the History database contains more than 50 million measurements. Table 2 summarizes the performance of the most representative operations. ApacheBench (https://httpd.apache.org/docs/2.4/programs/ab.html) is used to stress iXen with 2,000 simultaneous requests (for each operation), 100 of which are executed in parallel (simulating the case of 100 concurrent users).
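For reference (our illustration; the operation URL is a placeholder), such a run corresponds to an ApacheBench invocation of the form `ab -n 2000 -c 100 http://<host>/<operation>`, where `-n` sets the total number of requests and `-c` the number executed concurrently.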
User requests are issued on the Web application service and are forwarded to Application logic. From there, they are dispatched to the appropriate iXen services. All operations address storage services: operations 1, 2 and 3 address the Publish-Subscribe or Publish-Subscribe storage services; operation 4 addresses the History query and History database services; operations 5, 6 and 7 address the Mashup and Application storage services. For each request in Table 2, response times improve with the simultaneous execution of requests (i.e. the Apache HTTP server switches to multitasking), reaching their lowest values for concurrency between 50 and 150. Even with concurrency = 250, the average execution time per request is close to real time in most cases. iXen may produce big amounts of data and receive many requests, requiring large processing capabilities which can surpass the capacities of this implementation. A solution for dealing with performance would be to employ additional VMs, each running a single service (or a small group of services), or to allocate more VMs to the same service (or group of services) in order to share the load. An important observation is that almost 15% of the time reported in Table 2 accounts for security checks (i.e. for validating user authorization credentials).

4. Future Work

iXen is currently being extended to support billing policies and functionality for dealing with complex events. Incorporating scalability features for dealing with increased workloads is an important direction for future work. A possible solution would be deploying iXen in Kubernetes and a serverless environment. Transforming iXen to a multi-edge cloud (MEC) architecture for dealing with distributed IoT deployments at the edges of the network, and incorporating trust evaluation mechanisms for dealing with internal risks [4], is underway. Securing the IoT network for handling risks due to malicious behavior of IoT devices is still an open issue.

References
{"Source-Url": "http://www.intelligence.tuc.gr/~petrakis/publications/iXen.pdf", "len_cl100k_base": 5824, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23622, "total-output-tokens": 6735, "length": "2e12", "weborganizer": {"__label__adult": 0.0003609657287597656, "__label__art_design": 0.000843048095703125, "__label__crime_law": 0.0006375312805175781, "__label__education_jobs": 0.0005826950073242188, "__label__entertainment": 0.00010877847671508788, "__label__fashion_beauty": 0.0001832246780395508, "__label__finance_business": 0.0008873939514160156, "__label__food_dining": 0.000377655029296875, "__label__games": 0.0005469322204589844, "__label__hardware": 0.00408172607421875, "__label__health": 0.0007615089416503906, "__label__history": 0.0004725456237792969, "__label__home_hobbies": 0.00013566017150878906, "__label__industrial": 0.0009241104125976562, "__label__literature": 0.00024890899658203125, "__label__politics": 0.000396728515625, "__label__religion": 0.0004730224609375, "__label__science_tech": 0.363037109375, "__label__social_life": 8.791685104370117e-05, "__label__software": 0.0269775390625, "__label__software_dev": 0.59619140625, "__label__sports_fitness": 0.00025582313537597656, "__label__transportation": 0.000843048095703125, "__label__travel": 0.0002522468566894531}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30120, 0.01918]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30120, 0.17642]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30120, 0.90408]], "google_gemma-3-12b-it_contains_pii": [[0, 1711, false], [1711, 7217, null], [7217, 10393, null], [10393, 14157, null], [14157, 19135, null], [19135, 22595, null], [22595, 26521, null], [26521, 30120, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1711, true], [1711, 7217, null], [7217, 10393, null], [10393, 14157, null], [14157, 19135, null], [19135, 22595, null], [22595, 26521, null], [26521, 30120, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30120, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30120, null]], "pdf_page_numbers": [[0, 1711, 1], [1711, 7217, 2], [7217, 10393, 3], [10393, 14157, 4], [14157, 19135, 5], [19135, 22595, 6], [22595, 26521, 7], [26521, 30120, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30120, 0.07692]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
84d5d7a56060bdc550da2d15d458e2920e11fda2
Delft University of Technology
Software Engineering Research Group
Technical Report Series

Report TUD-SERG-2006-017

A Genetic Programming Approach to Automated Test Generation for Object-Oriented Software

Arjan Seesing and Hans-Gerhard Gross
Delft University of Technology
EWI – Software Engineering Laboratory
Mekelweg 4, 2628 CD Delft, The Netherlands
a.c.seesing@gmail.com; h.g.gross@tudelft.nl

Abstract. In this article we propose a new method for creating test software for object-oriented systems using a genetic programming approach. We believe this approach is advantageous over the more established search-based test-case generation approaches because the test software is represented and altered as a fully functional computer program. Genetic programming (GP) uses a tree-shaped data structure which is more directly comparable and better suited for being mapped instantly to the abstract syntax trees commonly used in computer languages and compilers. These structures can be manipulated and executed directly, bypassing intricate and error-prone conversion procedures between different representations. In addition, tree structures make more operations possible, which keep the structure and semantics of the evolving test software better intact during program evolution, compared to linear structures. This speeds up the evolutionary program generation process because the loss of evolved structures due to mutations and crossover is prevented more effectively.

Keywords: Search-based Testing, Test Automation, Object-Oriented Programming

1 Introduction

Testing is the most widely used and accepted technique for verification and validation of software systems. It is applied to measure to what extent a software system conforms to its original requirements specification and to demonstrate its correct operation [11]. Testing is a search problem that involves the identification of a limited number of good tests out of a sheer, nearly unlimited number of possible test scenarios. "Good tests" are those runtime scenarios that are likely to uncover failures, or demonstrate correctness of the system under test (SUT). Identifying good test cases typically follows predefined testing criteria, such as code coverage criteria [3]. This is based on the assumption that only the execution of a distinct feature, or its coverage, can reveal failures that are associated with this feature. Because the primary activities of testing, test case identification and design, are typical search problems, they can be tackled by typical search heuristics. One of the most important search heuristics for software testing is known to be random testing. This is also one of the most commonly used testing strategies in industry today. Recently, more advanced heuristic search techniques have also been applied to software testing. These are based on evolutionary algorithms, and they are slowly making their way into industry [1, 2], since their performance in finding test cases was found to be at least as good as random testing, but usually much better [8, 22]. The group of these testing techniques is referred to as evolutionary testing (ET), according to Wegener and Grochtmann [26]. ET is an automatic test case generation technique based on the application of evolution strategies [17], genetic algorithms [5, 10], genetic programming [13], or simulated annealing [24].
ET searches for optimal test parameter combinations that satisfy a predefined test criterion. This test criterion is represented through a "cost function" that measures how well each of the automatically generated optimization parameters satisfies the given test criterion. For a test, various test criteria are conceivable, according to the goal of the test, such as how well a test case covers a piece of code, in the case of structural testing [12, 16], or how well a test case violates a (safety) requirement [23], for example. Evolutionary testing has initially only been applied to traditional procedural software. Here, ET is used to automatically generate input parameter combinations for test cases that achieve, e.g., high coverage, if the test target relates to some code coverage criterion. However, recently, object-oriented software testing based on evolutionary testing has also been tackled by researchers [6, 7, 21]. The two main differences are

- that object technology is inherently based on states which are not readily visible outside of an object's encapsulating hull,
- and that an object test, as the basic unit of testing, can incorporate an arbitrary number of operation invocations.

An object's internal state depends on any previously performed operation invocations, the so-called invocation history [9, 7], including input parameter settings. Hence, object testing involves not only the generation of suitable input parameter combinations for a single procedure under test but, additionally, the generation of suitable test invocation sequences of various operations of an object, plus the generation of their respective input parameter combinations. As a consequence, in object testing we have to deal with a number of test artifacts, such as the sequence or combination of operation invocations, the input parameter combinations for the tested operations, the sequence or combination of operation invocations that bring the object into an "interesting" initial testing state, including constructor invocation, and the input parameter combinations for the previously mentioned state-setting operations. When applied to objects, ET must therefore generate optimization parameter values that correspond to a constructor, with input parameters, a number of operations, including input parameters, that bring the object into a distinct state, and a number of operations, including input parameter values, for the actual tested functionality. Object-oriented evolutionary testing can actually be regarded as search-based test software programming. In other words, the problem is suitable to be solved through genetic programming techniques. Genetic programming can be seen as a specialization of a genetic algorithm, and it is particularly aimed at evolving software programs according to the rules of simulated natural evolution. Genetic programming is well suited for test program generation for testing object-oriented code because the symbols used by GP are restricted to the operation invocations and input parameter types that are required and used by the SUT. This is quite in contrast to the generation of arbitrary functional code, which is based on the full alphabet of the programming language under consideration. So, test code generation through genetic programming is much less complex. In this paper, we introduce and explore an evolutionary testing approach for object-oriented code that is based on the application of genetic programming.
In the next section, Section 2, we introduce evolutionary testing and explain briefly how it is typically applied under the procedural programming paradigm. Section 3 looks at related work and introduces two different approaches to applying genetic algorithms to the testing of object-oriented code. In Section 4, we introduce our approach, which applies genetic programming to the generation of test software for object-oriented code. Here, we explain the details of the algorithm used and show a few very small examples of how to use genetic programming for the purpose under consideration. Section 5 shows some results from experiments that we have carried out, and Section 6 presents our conclusions and gives an outlook on future research.

2 Automated Software Testing and Evolutionary Algorithms

Traditionally, a human tester develops the test scenarios and writes the test code for an SUT manually. Ideally, a testing tool should generate the entire test code automatically, but this is very difficult to achieve, so that only parts of the testing process can be automated. The process of test automation can be subdivided into three main activities:

- Generation of test scenarios according to testing criteria, also referred to as software test data generation.
- Generation of a test oracle out of the SUT's specification.
- Combination of both, test scenarios and oracle, into executable test cases.

The first activity can be automated with relative ease, and this is also what most commercial testing tools are capable of doing, although existing tools usually apply crude heuristics to find test scenarios. The automation of the second activity is usually much more daunting in practice, due to the poor quality, or low formality, of the SUT's available requirements specification. Without formalization of the specification, it is nearly impossible to automate the generation of the oracle. The last step simply involves the creation of an arbiter that compares the observation from the SUT's execution with the expected observation from the oracle, and decides whether the test passes or fails. This poses no particular difficulty for automation, once the oracle problem has been solved. The work outlined in this paper, and the related work presented, concentrate on the first testing activity, the automated generation of test scenarios. An efficient way to do this is with a random generator. Random testing can be used to create a volume of test scenarios, but it does not specifically obey any test coverage criteria. Test tools based on random testing generate test scenarios and simply measure and illustrate the coverage of the SUT. They cannot generate test scenarios that are "guided" by the coverage.

Evolutionary Testing for Procedural Code

More advanced search heuristics such as evolutionary algorithms (EA) can be used to specifically look for test scenarios that cover certain branches of a program. This class of algorithms is loosely related to the mechanisms of natural evolution, and they are based on reproduction, evaluation and selection.
The following pseudo code represents a standard genetic algorithm that can be used for testing; P, P1, P2, and P3 represent populations of feasible test scenarios:

```
initialize_random(P);
fitness_function(P);
while not stopping_criterion do begin
    P1 = selection(P);
    P2 = recombination(P1);
    P3 = mutation(P2);
    fitness_function(P3);
    P  = merge_populations(P3, P);
end-while
```

Each set of parameters, the so-called individual, is represented by a different binary string, the so-called chromosome, within a population. Each chromosome represents the input parameter values for the execution of the SUT. The GA starts with a random initial population of chromosomes. Selection chooses the chromosomes to be recombined and mutated out of this initial population. Recombination reproduces the selected individuals and exchanges their information (pair-wise) in order to produce new individuals. This information exchange is called crossover. Mutation introduces a small change to each newly created individual. The resulting individuals (P3 in the pseudo code) are then evaluated through the fitness function. This transfers the information encoded in the chromosome, the so-called genotype, into an execution of the SUT, the so-called phenotype. The fitness function measures how well the chromosome satisfies the test criterion. In our case this is the coverage of a program branch. The implementation of the fitness function follows earlier standards in evolutionary testing, described in other articles, e.g., [12, 14, 15]. For the next generation, the old and the new populations are merged, thereby retaining the best individuals. The process of selection, reproduction and evaluation is referred to as one generation, and these steps are repeated until the stopping criterion is satisfied, e.g., a predefined number of generations, or the satisfaction of the test criterion. Fitter individuals, represented by their chromosomes, that come closer to covering the current target are favored in the recombination and selection process, so that in subsequent generations the population will comprise fitter individuals that are more likely to satisfy the test criterion. In order for such a search process to obtain full branch coverage, every branch must be successively selected as target and solved through an individual search process. The application of evolutionary algorithms to such structural testing problems has been demonstrated in practice, e.g., [1, 12, 15, 16].
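A concrete, if simplified, rendering of this loop for test data generation might look as follows. This is an illustrative sketch, not the authors' implementation; it uses a toy fitness that rewards inputs coming closer to covering the branch `if a == 2 * b`:

```python
import random

def fitness(ind):
    """Branch-distance style fitness: 0 means the branch 'a == 2*b' is covered."""
    a, b = ind
    return abs(a - 2 * b)

def evolve(pop_size=50, generations=100, mutation_rate=0.2):
    P = [(random.randint(-100, 100), random.randint(-100, 100))
         for _ in range(pop_size)]
    for _ in range(generations):
        P.sort(key=fitness)
        if fitness(P[0]) == 0:           # stopping criterion: branch covered
            return P[0]
        parents = P[: pop_size // 2]     # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            (a1, b1), (a2, b2) = random.sample(parents, 2)
            child = (a1, b2)             # one-point crossover on the two genes
            if random.random() < mutation_rate:
                child = (child[0] + random.randint(-5, 5), child[1])
            children.append(child)
        P = parents + children           # merge populations, retaining the best
    return min(P, key=fitness)

print(evolve())
```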
The previously described simple representation of input parameter values in a chromosome is not sufficient for object-oriented software testing. Here, in addition to the input parameter values, the search process also needs to include an arbitrary number and sequence of operation invocations on the object, and any internal state settings, as described earlier. This turns the fixed-length, simple chromosome of the procedural paradigm into an arbitrary-length, complex chromosome for the object-oriented paradigm. Especially the fact that initial state settings of an object are part of a test scenario [9] makes the implementation of the automatic test generation process more difficult. Recent publications on evolutionary testing of object-oriented systems have proposed encodings to deal with this additional dimension. The first alternative is to encode the operation invocation sequences as a chromosome and come up with new recombination and mutation strategies [21]. The second alternative is to use a standard binary encoding of the chromosome, so that standard GA tools can be used, and devise a specialized so-called "genotype-phenotype transfer function" that maps the chromosome representation to a test scenario [25]. These are briefly laid out in the following section, before we go on to propose a third way of organizing the chromosome, as a tree structure, in Section 4, so that we can apply a standard genetic programming technique.

3 Evolutionary Testing for Objects

3.1 Object-Specific Chromosome Encoding

One way to deal with the enhanced complexity of objects in evolutionary testing is to enrich the chromosome with representations that are capable of dealing with these more complex entities (Tonella, [21]). This method adds structure to the chromosome during evolution that can be mapped directly to an executing program. Tonella proposes the following grammar:

    <chromosome> ::= <actions> @ <values>
    <actions>    ::= <action> {; <actions>}
    <action>     ::= $id = constructor ( {<parameters>} )
                   | $id = CLASS#NULL
                   | $id.method ( {<parameters>} )
    <parameters> ::= <parameter> {, <parameters>}
    <parameter>  ::= builtin-type ( <generator> )
                   | $id
    <generator>  ::= [low; up]
                   | [genclass]
    <values>     ::= <value> {, <values>}
    <value>      ::= integer | real | boolean | string

The "@" separates the chromosome into two parts. The first part contains the sequence of operation invocations, including constructor and method invocations, each separated by ";", and the second part represents the input values that these operations take, each separated by ",". Such a sequence of operation invocations plus parameter values represents a test scenario. An `<action>` can represent either a new object (indicated as `$id`), or a call to a method on an object identified by `$id`. Parameters of operation invocations (`<parameters>`) can represent built-in types such as `int`, `real`, `boolean`, and `string`, or chromosome variables (`$id`). The generation operator (`<generator>`) produces the values for the input parameters. It can generate random numbers in the range between `low` and `up`, or it can use an external class to have a value produced. The grammar proposed permits invalid chromosomes, so additional rules must be imposed for "well-formedness":

- chromosome variables cannot be used before they are assigned.
- built-in types in the first part require a corresponding input value in the second part of the chromosome.
- methods used in the chromosome must be visible for the used classes.
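As an illustration (our example, not one from [21]), a well-formed chromosome for a hypothetical Stack class could read:

    $0 = Stack(); $0.push(int[0;100]); $0.pop() @ 42

The first part creates the object and invokes push and pop on it; the single generator `int[0;100]` in the first part is matched by the concrete input value 42 in the second part.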
Because the genetic algorithm performs on chromosomes with this particular organization, the standard binary crossover and mutation operators cannot be applied. Tonella proposes specific operators that lend their ideas from genetic programming. Mutation can change values or operations (constructors and methods). A value can be mutated by changing it to a randomly generated value of the same type. A constructor can be mutated by randomly changing it to another constructor. Redundant input values are then dropped, and missing ones generated. A new method may be inserted by a mutation, including the respective input parameter values for the method. A method may also be removed through a mutation, including all its input values. Crossover between two chromosomes works in a similar way, although it usually involves several of the previously described measures at the same time. Two chromosomes are cut at a randomly determined location (at an `<action>`-delimiter), in the case of a simple one-point crossover, and their respective tails are swapped and rejoined. Redundant constructors must be removed, as well as needless input values, and, finally, conflicting variable names must be changed.

3.2 Object-Specific Genotype-Phenotype Transfer

An alternative way to apply evolutionary testing to the more complicated requirements of object technology is to maintain a binary or numerical chromosome that can be handled by any standard genetic algorithm, and then provide rules, or a grammar, to map the binary representation into a test scenario. Each test program may be represented as a sequence of statements, and each statement consists of an object, an operation (constructor or method), and some parameters (Wappler and Lammermann, [25]). The mapping of the chromosome to test scenarios can be determined by sequentially reading the chromosome and turning it into operation invocations according to rules. Two genes can be assigned for operation invocations, one for the target object, and one that denotes the operation to be invoked on that object. Because operation invocations take varying numbers of input parameters, input values must be accommodated by a variable number of genes. The genes are then mapped into a real test scenario, a phenotype, according to the production rules of a grammar:

    test_program ::= {statement;}*
    statement    ::= [return_value] {constr_call | method_call}
    return_value ::= class_name instance_name =
    constr_call  ::= new class_name (parameters)
    method_call  ::= {class_name | instance_name}.method_name (parameters)
    parameters   ::= [parameter {, parameter}*]
    parameter    ::= basic_type_value | instance_name | NULL

In this notation, [] represents an option, {} alternatives, {}+ at least one repetition, and {}* arbitrary repetitions. Because these rules allow the generation of erroneous test scenarios, the fitness function assigns a degree of failure to the decoding. This failure is part of the fitness, so that such "defective genetic material" eventually vanishes from the population. The decoding from the chromosome into a real test scenario is performed through specific functions, fully described in [25]. Methods are numbered in a series, and each number of one gene in the chromosome corresponds to a specific number of a method. Input parameters are represented by one gene in the chromosome, and they can map to concrete values and objects.
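To illustrate the flavor of such a decoding (a deliberately simplified sketch, not the transfer function of [25]; the gene layout and the Stack SUT are assumptions), genes could be read sequentially as a method number followed by that method's parameter genes:

```python
# Illustrative decoder: genes are integers, read as (method gene, parameter genes...).
METHODS = {0: ("push", 1), 1: ("pop", 0)}   # method number -> (name, arity); assumed SUT

def decode(genes):
    stmts = ["s0 = Stack()"]                # a single instance, for simplicity
    i = 0
    while i < len(genes):
        name, arity = METHODS[genes[i] % len(METHODS)]
        args = [str(g) for g in genes[i + 1 : i + 1 + arity]]
        stmts.append(f"s0.{name}({', '.join(args)})")
        i += 1 + arity
    return "; ".join(stmts)

print(decode([0, 5, 1, 0, 7]))   # -> s0 = Stack(); s0.push(5); s0.pop(); s0.push(7)
```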
4 Proposed Genetic Programming Approach

Genetic programming (GP) is a specialization of a genetic algorithm that is particularly aimed at evolving computer programs based on the principles of natural evolution [13]. The chromosomes in genetic programming represent hierarchically structured computer programs made up of arithmetic operations, mathematical functions, boolean and conditional operations, and terminal symbols, such as types, numbers, and strings. The genotype-phenotype mapping of GP is much more natural for the domain of test program generation compared with a standard genetic algorithm. The fact that GP is based on hierarchically organized trees requires specialized genetic operators for recombination and mutation [13]. Recombination takes sub-trees from previously selected parent individuals and swaps them in order to reorganize them into new individuals (trees). The chromosomes are always cut and reassembled at nodes, not within nodes, of the tree representing the computer program. The mutation operator introduces random changes in the tree by randomly selecting a node of the tree and deleting everything beyond that node, adding a randomly generated subtree, or randomly changing leaves of the tree. These are all standard GP operators according to [13].

4.1 GP Chromosome

Table 1 lists the basic classes of representations that are used in our proposed genetic programming approach.

<table>
<thead>
<tr>
<th>Node Name</th>
<th>Description</th>
<th>Can have children</th>
<th>Can be a child</th>
</tr>
</thead>
<tbody>
<tr>
<td>L-Variable</td>
<td>Variable definition</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>R-Variable</td>
<td>Reference to an L-Variable</td>
<td>no</td>
<td>yes</td>
</tr>
<tr>
<td>Constant</td>
<td>A primitive value (int, double, ...)</td>
<td>no</td>
<td>yes</td>
</tr>
<tr>
<td>Constructor</td>
<td>Creates an object</td>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<td>Method</td>
<td>Calls an object's method</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>Field Assignment</td>
<td>State change of an object</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>SUT</td>
<td>Subject under test</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>Array</td>
<td>Creates an array of objects</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>NULL</td>
<td>Keyword, implemented in the constructor</td>
<td>no</td>
<td>yes</td>
</tr>
</tbody>
</table>

The types to be used by GP are arbitrary, because every single object that is created represents a type. Every operation refers to an object, and thus a type, plus some input parameters, including their individual types. These must be created by the GP process and added as leaves to the node in the tree that represents the operation. Each operation maps to a subtree of the entire GP hierarchy, including constructors and input values for the required (sub-)objects. Apart from arbitrary object types, we also have to allow basic or primitive types, such as boolean, integer, real, and the like. These are primarily used to denote input and return values. Because of the late binding principle in object technology, not all types are known to the GP system a priori. The SUT is used as the starting point of the GP system. It indexes all the operations that it has to test and which it can use to change its state. All the classes referenced in the signatures of these operations are also indexed, plus all subsequent classes used by these. This indexing is performed recursively until all classes needed for the test case are loaded and known to the GP system. Abstract parameter types, or interfaces, and classes that extend or implement these must be added manually to the index of the GP system; that is, only if they are not referenced by some other already existing and indexed class.
Figure 1 shows an example tree-shaped representation of a GP-chromosome; the anchor symbol indicates containment. The chromosome translates into the following Java testing code snippet (moving from left to right):

```java
var1 = new Test(1, 6);
var2 = new Foo();
var1.set(var2);
var1.set(var2);
var1.test();
```

There are two types of variables in programming languages, L(left)-type and R(right)-type variables. L-type variables define and initialize variables, and R-type variables reference them. The compiler will issue an error when a variable is used as an R-type before it has been used (initialized) as an L-type. R-types are terminal, and L-types require one "child node".

4.2 Object Reflection

In order for GP to work properly and generate valid testing code, it needs rules on the basis of which it can recombine existing nodes and generate new nodes. In traditional genetic programming, the grammar usually comprises all constructs of the programming language used [13]. Test code is usually straightforward, and all it needs to do is to invoke a certain sequence of operations with parameter values, including the creation and initialization of the variables used. The rules are restricted to the operations of the SUT plus the objects and return types that it uses in these operations. In Java, these can even be generated at runtime through Java's built-in reflection mechanism [4]. This information is then stored in a repository of basic symbols which represents the rules that the genetic programming algorithm can use to generate test programs. Earlier, we referred to this repository as the GP index. The hierarchical structure of testing code is typically flat, like the one displayed in Fig. 1. This flat hierarchy is due to the fact that testing has a more sequential nature, calling one operation after another, leading to a single path through the test program. This is different from "normal functional code", which is usually made up of conditional executions, leading to various paths through the program. Every operation invoked is attached as a subtree close to the root node of the entire chromosome tree. Extensive hierarchical structure is only exhibited if operation invocations require objects as input parameters, although this can be circumvented by imposing flat hierarchies, as described below.

4.3 Detailed Genetic Operators

Initial Population. Two different methods are used to create the first population: a random population, or a population based on execution traces. The first method selects initial operation invocations, including their input values, randomly. The second method applies existing knowledge from executing the SUT. Here, we can form an initial population from already known typical usage scenarios of the SUT. This leads to an initial population that can already cover many of the SUT's runtime paths for typical usage profiles. This method improves the performance of the test generation considerably.
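As a rough analogue of such a reflection-built GP index and of random initialization (a sketch in Python rather than Java, with an assumed toy SUT; the paper does not specify the index structure at this level of detail):

```python
import inspect
import random

class Stack:                      # assumed toy SUT
    def __init__(self): self.items = []
    def push(self, x: int): self.items.append(x)
    def pop(self) -> int: return self.items.pop()

def build_index(cls):
    """Index the SUT's public operations and their arities via reflection."""
    index = {}
    for name, fn in inspect.getmembers(cls, inspect.isfunction):
        if not name.startswith("_"):
            params = list(inspect.signature(fn).parameters)[1:]  # drop 'self'
            index[name] = len(params)
    return index

def random_individual(index, length=4):
    """Create one initial individual: a random sequence of operation invocations."""
    calls = []
    for _ in range(length):
        name = random.choice(list(index))
        args = [random.randint(0, 100) for _ in range(index[name])]
        calls.append((name, args))
    return calls

index = build_index(Stack)        # -> {'pop': 0, 'push': 1}
print(random_individual(index))
```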
Mutation. GP requires a separate mutation operator for each individual basic building block, each of which may be subject to mutation according to a predefined mutation rate. We can distinguish three types of mutation operators: one that creates a new building block, one that changes an existing one, and one that deletes a building block. We have devised these three operators for each of the functions in Table 1. A constructor can be created, deleted or changed to a different constructor. Creation or deletion implies that its respective sub-tree, comprising input parameters, is created or deleted. The same principles that apply to constructors also apply to the other operations, the normal methods of an object. They can be added or removed, or their input values can be changed. Constructors and normal methods differ only in that we need at least one constructor in order to create the object, and the constructor must always be invoked before any other operation. Some methods take objects as arguments. These objects need to be created through a constructor and, possibly, their operations also need to be invoked. This principle may be applied recursively, depending on the operations required, thus potentially leading to constructor compositions of arbitrary depth. We have decided not to permit the generation of such hierarchies, and instead move the composite constructor sub-tree up towards the root node. This is illustrated in Fig. 2. The object constructor can then be moved to a position in the tree where it is executed before the object reference is used as an input value. It is important to note that there is no reason for restricting the composition depth other than controllability of the experiments: it makes it easier to understand what the GP algorithm is doing and to control and assess its behavior.

Fig. 2. Tree flattening activity.

Crossover. In GP, unlike genetic algorithms, crossover is only applied at nodes in the chromosome tree, not at leaves. The nodes for crossover are determined randomly for each of the two participating individuals, or through a search for distinct nodes. If the two crossover nodes are compatible, the crossover operator simply exchanges the entire subtrees. Two subtrees are compatible if the types of the two root nodes of the candidate sub-trees are the same, and a search through the tree can actually determine feasible nodes of the same type. Compatibility is always given at the root node level of the entire chromosome trees. The simplest crossover is performed at the method level, thus exchanging entire methods including input parameters. Input parameter nodes can also be exchanged, given that they have the same types, and constant values can be swapped between individuals. Figure 3 illustrates an exchange at the root level, swapping entire sub-trees of methods. Crossover and mutation can generate chromosomes of arbitrary length over time, simply by adding more and more sub-trees. This is not desirable, so overgrowth of the chromosome must be regulated through the introduction of a penalty on the overall fitness of larger individuals. This turns our approach into a multi-objective evolutionary algorithm, where the size of the test case is the second optimization objective, thus putting selective pressure on the generation of short test scenarios.

4.4 Genotype/Phenotype Transfer and Program Execution

We are using coverage metrics to indicate an individual's fitness [12, 16]. There are two approaches to execute an individual and obtain coverage information. The first one generates the test program code, for example as Java source or byte code, after which it will compile and execute it. The second one uses the reflection mechanism in Java. This allows us to skip the creation, compilation and class loading steps present in the first approach. For example, the method node has as its children a node which creates an object to call a method on (if it is not a static method), and nodes to create the arguments it might need. Although reflection calls are much slower than normal Java calls, using reflection compensates for the additional overhead of creation, compilation, and loading of a normal Java class.
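A Python analogue of this direct, reflection-style execution of a chromosome (illustrative only; the paper's system does this with Java reflection and measures branch coverage during the run):

```python
# Execute a chromosome (a call sequence) directly via reflection-style lookup,
# without generating, compiling or loading any source code.
def execute(cls, calls):
    obj = cls()                                  # the constructor node runs first
    trace = []
    for name, args in calls:
        try:
            result = getattr(obj, name)(*args)   # reflective method invocation
            trace.append((name, args, result))
        except Exception as exc:                 # a failing call still yields feedback
            trace.append((name, args, exc))
    return trace                                 # raw material for a coverage-based fitness

# Usage with the Stack SUT and an individual from the sketch above:
# execute(Stack, [("push", [5]), ("pop", []), ("pop", [])])
```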
5 Experiments

In order to demonstrate the applicability of our proposed test case generation technique based on GP, we have applied it to five test programs: XMLElement, an XML parsing package including a number of classes, and, from the Java class library, HashMap, BitSet, TreeMap, and StringTokenizer. All tests were executed on a 2.1 GHz Athlon XP under Java 1.6 [20]. Mutation was set to 70% method introduction, 15% method removal, and 15% variable introduction. The results displayed in Table 2 demonstrate the advantage of the GP approach over a traditional random testing strategy. Only for the smallest SUT, StringTokenizer, could the random testing technique generate the same high coverage (100%) as our GP approach. For the other examples, the GP approach achieves much higher test coverage. The Time(s) columns give an indication of how much more processing time is required for the GP, which is a much more complex algorithm, compared to the random generation. A more thorough discussion of these experiments can be found in [19].

Table 2. Tested SUTs, comparison between GP-based testing and random testing.

<table>
<thead>
<tr>
<th rowspan="2">SUT</th>
<th rowspan="2">Branches</th>
<th colspan="2">GP testing</th>
<th colspan="2">Random testing</th>
</tr>
<tr>
<th>Coverage(%)</th>
<th>Time(s)</th>
<th>Coverage(%)</th>
<th>Time(s)</th>
</tr>
</thead>
<tbody>
<tr>
<td>BitSet</td>
<td>124</td>
<td>100</td>
<td>495</td>
<td>86</td>
<td>133</td>
</tr>
<tr>
<td>XMLElement</td>
<td>121</td>
<td>90</td>
<td>369</td>
<td>80</td>
<td>101</td>
</tr>
<tr>
<td>HashMap</td>
<td>50</td>
<td>94</td>
<td>180</td>
<td>72</td>
<td>43</td>
</tr>
<tr>
<td>TreeMap</td>
<td>39</td>
<td>92</td>
<td>13</td>
<td>46</td>
<td>29</td>
</tr>
<tr>
<td>StringTokenizer</td>
<td>5</td>
<td>100</td>
<td>5</td>
<td>100</td>
<td>2</td>
</tr>
</tbody>
</table>

6 Conclusions and Future Work

The purpose of this paper is the proposition of a genetic programming approach to generate test software for object-oriented systems automatically. We have not applied the presented techniques to extended problems, so the experiments that we have performed may only be regarded as a proof of concept and an initial step towards a more extensive application. The main improvement, or advantage, of our proposed method over the two other described approaches [21, 25] is that the test software is already represented and altered as a fully functional computer program. This means that the experience gained in genetic programming can be utilized to create these test cases. Genetic programming proposes many more different types of mutations and more robust crossover algorithms which are designed to keep the structures they alter semantically correct, preventing loss of evolved structures. Genetic programming uses tree structures which are more similar to the abstract syntax trees used in computer programs. This leads to more powerful programs, because certain tests are impossible to create in linear programs, and to a simplification of the execution of generated test cases.
Genetic programming is specifically geared toward program generation, and this is what makes its application to test software generation so straightforward. Test software generation for object-oriented Java code, as introduced in this article, is by its very nature much easier to perform than the GP-based generation of arbitrary functional software. This is because the alphabet of the GP system is not the entire alphabet of the programming language under consideration, but merely the methods and input parameters of the object under test. And these can even be retrieved automatically; at least, this is the case for a modern object-oriented development environment like Java. Although the number of different operation types is quite limited, large classes which contain many methods will lead to huge hierarchical trees. This drastically increases the search space that the genetic programming algorithm has to work its way through. Future work will be geared toward limiting the size and complexity of the search space as much as possible. This can already be done manually, by skipping methods which do not alter the state of their object, so-called pure methods [19]. An extension to our automatic testing system may detect such methods and remove them from the GP alphabet. The performance of genetic algorithms is not only influenced by their internal data structures and their associated operators, but even more so by an efficient fitness function. The fitness function used for the experiments is only a very crude implementation of the standard fitness function proposed for coverage-based evolutionary testing. There is definitely still leeway for improvement. Also, the various mutation parameters need to be adjusted to achieve optimal performance. This work may be seen as an initial step towards object-oriented test program generation based on genetic programming.

References
{"Source-Url": "http://swerl.tudelft.nl/twiki/pub/Main/TechnicalReports/TUD-SERG-2006-017.pdf", "len_cl100k_base": 7533, "olmocr-version": "0.1.53", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 24288, "total-output-tokens": 9346, "length": "2e12", "weborganizer": {"__label__adult": 0.00037598609924316406, "__label__art_design": 0.00025081634521484375, "__label__crime_law": 0.0003483295440673828, "__label__education_jobs": 0.0006012916564941406, "__label__entertainment": 4.4465065002441406e-05, "__label__fashion_beauty": 0.00016260147094726562, "__label__finance_business": 0.000164031982421875, "__label__food_dining": 0.00031638145446777344, "__label__games": 0.0004775524139404297, "__label__hardware": 0.0006203651428222656, "__label__health": 0.0004525184631347656, "__label__history": 0.0001761913299560547, "__label__home_hobbies": 7.462501525878906e-05, "__label__industrial": 0.00031566619873046875, "__label__literature": 0.00020265579223632812, "__label__politics": 0.0002301931381225586, "__label__religion": 0.0004050731658935547, "__label__science_tech": 0.006984710693359375, "__label__social_life": 7.003545761108398e-05, "__label__software": 0.003505706787109375, "__label__software_dev": 0.9833984375, "__label__sports_fitness": 0.0003256797790527344, "__label__transportation": 0.0004277229309082031, "__label__travel": 0.00019443035125732425}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40277, 0.01929]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40277, 0.7545]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40277, 0.88161]], "google_gemma-3-12b-it_contains_pii": [[0, 242, false], [242, 242, null], [242, 2528, null], [2528, 5986, null], [5986, 9075, null], [9075, 11903, null], [11903, 15148, null], [15148, 17867, null], [17867, 20827, null], [20827, 23501, null], [23501, 25333, null], [25333, 28408, null], [28408, 29936, null], [29936, 31486, null], [31486, 34643, null], [34643, 37827, null], [37827, 40277, null], [40277, 40277, null], [40277, 40277, null], [40277, 40277, null]], "google_gemma-3-12b-it_is_public_document": [[0, 242, true], [242, 242, null], [242, 2528, null], [2528, 5986, null], [5986, 9075, null], [9075, 11903, null], [11903, 15148, null], [15148, 17867, null], [17867, 20827, null], [20827, 23501, null], [23501, 25333, null], [25333, 28408, null], [28408, 29936, null], [29936, 31486, null], [31486, 34643, null], [34643, 37827, null], [37827, 40277, null], [40277, 40277, null], [40277, 40277, null], [40277, 40277, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40277, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40277, null]], 
"pdf_page_numbers": [[0, 242, 1], [242, 242, 2], [242, 2528, 3], [2528, 5986, 4], [5986, 9075, 5], [9075, 11903, 6], [11903, 15148, 7], [15148, 17867, 8], [17867, 20827, 9], [20827, 23501, 10], [23501, 25333, 11], [25333, 28408, 12], [28408, 29936, 13], [29936, 31486, 14], [31486, 34643, 15], [34643, 37827, 16], [37827, 40277, 17], [40277, 40277, 18], [40277, 40277, 19], [40277, 40277, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40277, 0.0823]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
195009dbad10f6c78205a60981bfa2ed9533edc9
Reliability Evaluation Based on the AADL Architecture Model

Dongyi Ling, Shihai Wang*, Bin Liu and Xiaoqi Xing
Beihang University, School of Reliability and Systems Engineering, Beijing, China
*Corresponding author. Email: lingdongyi@foxmail.com, {wangshihai, liubin}@buaa.edu.cn, xingxiaoqi@live.cn

Abstract—The structure of embedded systems is becoming much more complicated. The current basic Architecture Analysis and Design Language (AADL) reliability model cannot meet the requirement that software reliability be evaluated while the software is being designed. At present, reliability evaluation needs abundant fault analysis, which cannot be performed early in software development. This article proposes a methodology based on the system architecture, using AADL, to perform reliability evaluation early in development. With a good understanding of the reliability rules and of the transformation from an AADL-based system architecture to a Petri net, a one-to-one mapping is achieved between AADL elements and Petri net elements, and the existing mathematical models of Petri nets are used to evaluate the reliability of the software architecture. Finally, an AADL model of a flight control system is given as an example to validate the applicability of the proposed method.

Index Terms—AADL; Reliability Evaluation; Petri Net

I. INTRODUCTION

With the development of embedded systems, their structure has become much more complicated; they also require heavier investments and longer periods to complete development, as well as meeting higher expectations for non-functional requirements (scheduling, reliability and security). Earlier methods for developing embedded systems can no longer meet current requirements. Therefore, the industry has come up with a methodology based on Model Driven Architecture (MDA) [1]; system development has been brought to a higher level, the level of the model. Coding can be done automatically according to different computation platforms. Modeling is now the core of development. We can perform analysis of non-functional requirements on the model directly and greatly reduce the development time, thereby lowering the cost. At present, system reliability evaluation methods are mainly divided into two types: one is system reliability evaluation based on code; the other is system reliability evaluation based on components. The method of system reliability evaluation based on code can be performed easily, but the assessment accuracy is not high, because the method is influenced by human factors. For embedded systems with high reliability requirements and greater complexity, this method can only evaluate reliability on the software code, not during the period of system design. Comparing these two methods, system reliability evaluation based on components is more suitable for model-based system development. To incorporate this trend, the Society of Automotive Engineers (SAE, United States) has published the new standard AS5506, the Architecture Analysis and Design Language (AADL) [2] [3]. AADL is used to describe the architecture of complex real-time embedded systems. AADL does not concern itself with how the system will be realized specifically; rather, it describes the system by defining parts, the interactions between them, and the binding between software and hardware. References [4] [5] [6] have proposed theories for transforming the AADL-based error reliability model into a GSPN reliability computing model.
With this theoretical groundwork, the GSPN reliability computing model can be derived from a library of basic error annexes, basic dependency relations (Out-in, Propagation) and advanced dependency relations (Guard In, Guard Out, Guard Event, Guard Transition), and the result of the transformation can be evaluated with the standard reliability analyses defined on the AADL reliability model. In practice, however, a software reliability model based on the AADL error model requires a thorough analysis of the software under its failure modes, which is hard to perform during the design period. Evaluation during the design period must instead be carried out on the basis of the software architecture, and the current basic AADL reliability model cannot support evaluating software reliability while it is being designed.

This article proposes a methodology, based on the system structure described in AADL, for performing reliability evaluation. We study the distinguishing features of the AADL architecture model and the Petri net model and analyse the feasibility of transforming the former into the latter. From the transformation rules, a one-to-one mapping is established between AADL elements and Petri net elements: the components of AADL are mapped to the places and transitions of the Petri net, and the connections are mapped to its arcs. Transitional relations are also defined by formalising the problem, which simplifies the transformation and allows system reliability to be evaluated with the existing mathematical theory of Petri nets. On this basis, the transformation from the AADL architecture model to the Petri net model is defined formally and proved mathematically, so that the resulting architecture Petri net model can exploit the mathematical properties of Petri nets for reliability calculation. Designers can then make trade-offs between reliability, power consumption, weight and cost at the architecture level, and validate and optimise the structural design of the model.

Previously, reliability evaluation based on the AADL architecture model could only be performed together with an abundance of error analysis by the system designers, demanding plenty of manual labour and working experience with the system. The evaluation method of this paper can ignore the effect of error data in the system: it uses the basic reliabilities of the components to evaluate the architecture and the system reliability, identifies the weak links of the system through the reliability analysis, and supports an early assessment of the system structure. The reliability analysis of system models based on the AADL architecture model thus provides a theoretical basis for design and validation.

II. RELIABILITY ANALYSIS BASED ON AADL MODEL

A. Introduction to AADL

AADL is a graphical modelling language used for embedded systems. Its core language elements include components, component types, component implementations, property sets, packages and annex libraries. Components are the core of AADL: system software and hardware are treated as combinations of components, and thus a whole system is described as a set of interacting components.
AADL defines three kinds of components: software components (data, subprogram, process, thread and thread group), hardware components (memory, bus, device and processor), and composite components (system). Software components are bound to hardware through their properties. Components are defined by a type and an implementation, and can be inherited through the keyword 'extends'; a component type can have one or more implementations. Component types and implementations can also use property sets and packages [7]. A mode represents a running state of the system; modes and their transitions describe the dynamic behaviour of the system at run time. AADL is extended through annexes [8] [9].

The interface specification of a component is called its type. It provides features (e.g. communication ports). Components communicate with one another by connecting their features (the connections section). Each component describes its internals: subcomponents, connections between these subcomponents [10], etc. An implementation of a thread or a subprogram can specify call sequences to other subprograms, thus describing the execution flows in the architecture. Since there can be different implementations of a given component type, it is possible to select the actual components to be put into the architecture without having to change the other components, which provides a convenient approach to application configuration.

AADL allows properties to be associated with AADL model elements. Properties are typed name/value pairs that represent characteristics and constraints; examples are the period and execution time of threads, or the implementation language of a subprogram. The standard includes a predefined set of properties, and users can introduce additional properties through property definition declarations. For interested readers, an introduction to AADL can be found in the references above.

Other languages can be integrated into AADL models by means of annex libraries; these languages can be attached to each component to describe other aspects. Several annex languages have been designed, such as the behaviour annex and the error model annex. The error model annex [11] defines the states of a component, its potential faults and errors, and their propagation in the system. AADL provides two major benefits for building safety-critical systems. First, compared to other modelling languages, AADL defines low-level abstractions, including hardware descriptions. Second, its hybrid system components help refine the architecture, as they can be detailed later during the design process.

B. Reliability Analysis Method Based on AADL

In the process of AADL reliability analysis, the first step is to establish the AADL reliability model. The model used in this article is generated from the AADL structure model. As a high-level modelling language, AADL cannot be used directly for reliability computation and analysis; it must first be transformed into a lower-level formalised model on which computation and analysis can be conducted [12]. Researchers have combined the error model annex (EMA) with different kinds of analytic methods, such as dependency diagrams (DD), fault tree analysis (FTA) and generalised stochastic Petri nets (GSPN), to perform qualitative analysis. However, these models and techniques can only describe the static structure and behaviour of a system.
The basic structures of DD are series, parallel and redundancy, but DD ignores the time-variant dynamic characteristics of a system. FTA does not consider the order of events when an error happens; even with the later concept of the dynamic fault tree, it still awaits further exploration and verification. A Petri net is a model without any global control: it is a graphical and mathematical tool for describing and simulating a system. Petri nets are intuitive and visual; their capability of describing a complex system and its dynamic behaviour, together with many other mathematical properties, makes them convenient for describing and analysing asynchronous concurrent systems.

The reliability analysis process used in this article can be expressed by the flow chart of Figure 1:

1) Evaluate the needs of the reliability analysis.
2) Understand each component.
3) Map each functional component, data item and event stream onto AADL components and connections.
4) Categorise the function of each component into structural parts and connection parts.
5) Convert the AADL structure model according to the Petri net mapping relation.

Using the Petri-net-based reliability model and the Petri net model itself, perform the reliability computation; collect the figures from the reliability analysis, review them against the requirements, and obtain the final result for system reliability.

III. AADL MODEL TRANSFORMATION ORIENTED TO PETRI NETS

A. Introduction to Petri Nets

Definition 1: A Petri net is a triple $N = (S, T, F)$ satisfying:

1. $S \cap T = \emptyset$ and $S \cup T \neq \emptyset$;
2. $F \subseteq (S \times T) \cup (T \times S)$;
3. $\mathrm{dom}(F) \cup \mathrm{cod}(F) = S \cup T$.

Here $S = \{s_1, \ldots, s_n\}$ is the set of places and $n$ is the number of places; a place represents a state or condition of the system. $T = \{t_1, \ldots, t_m\}$ is the set of transitions and $m$ is the number of transitions; a transition represents an event that changes the system state, such as a component fault or a repair action. $F$ is the key element connecting system states and events; it is visualised as the directed arcs between places and transitions. $\mathrm{dom}(\cdot)$ and $\mathrm{cod}(\cdot)$ denote the domain and codomain of a relation. Petri nets can describe the dependencies between various events and support the analysis of the dynamic evolution of system states, of conflicts and of deadlocks [13].

Definition 2: Let $SA = (C, L)$ be a software architecture, where $C$ is a set of components and $L$ is a set of connections. Construct a net $N = (P, T, F)$ in which the places $P$ are derived from $C$, the transitions $T$ from $L$, $F$ is the collection of edges relating $P$ and $T$, and $M_0$ is the initial marking of the SA. The net $N$ so constructed is a Petri net.

Proof: (1) Since $C$ is the component set of the software architecture $SA$ and is finite, $P$ is finite, and likewise $T$; hence $P$ and $T$ are finite sets. (2) From the definitions of place and transition it is clear that $P \cap T = \emptyset$ and $P \cup T \neq \emptyset$. (3) Since $F$ is a set of edges relating $P$ and $T$, and $(P \times T) \cup (T \times P)$ is the set of all possible edges between $P$ and $T$, we have $F \subseteq (P \times T) \cup (T \times P)$. (4) For every $x \in \mathrm{dom}(F) \cup \mathrm{cod}(F)$, the definitions of $\mathrm{dom}(F)$ and $\mathrm{cod}(F)$ in a Petri net give $x \in P \cup T$; conversely, for every $x \in P \cup T$, $x$ is a node of the net $N$, so we can surely find a node associated with it: there exists $y$ such that $(x, y) \in F$ or $(y, x) \in F$. (5) From the executability of the software system, the net system is live from its initial marking $M_0$.
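As a concrete reading of Definition 1, here is a small Python sketch (ours, not from the paper) that checks the three net conditions for a candidate triple; all names are illustrative:

```python
def is_net(S: set, T: set, F: set) -> bool:
    """Check that (S, T, F) satisfies the net conditions of Definition 1."""
    # 1. Places and transitions are disjoint, and the net is not empty.
    if S & T or not (S | T):
        return False
    # 2. The flow relation only connects places to transitions or vice versa.
    if not all((x in S and y in T) or (x in T and y in S) for (x, y) in F):
        return False
    # 3. No isolated elements: dom(F) ∪ cod(F) = S ∪ T.
    touched = {x for (x, y) in F} | {y for (x, y) in F}
    return touched == S | T

# Example: one place firing one transition into another place.
assert is_net({"p1", "p2"}, {"t1"}, {("p1", "t1"), ("t1", "p2")})
```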
Definition 3: A Petri net system $SAPN = (N, W, M_0)$, with $N = (P, T, F)$, $W = \{P_i\}$ the set of transition probabilities, and $M_0$ the initial marking of $N$, is a software architecture Petri net if and only if:

1. $N$ has two special places $s$ and $e$, with ${}^{\bullet}s = \emptyset$ and $e^{\bullet} = \emptyset$, and two special transitions $t_s$ and $t_e$, with $s^{\bullet} = \{t_s\}$ and ${}^{\bullet}e = \{t_e\}$: the start place $s$ is fed by no transition and feeds the single entry transition $t_s$, and the end place $e$ feeds no transition and is fed by the single exit transition $t_e$.
2. If a transition $t^{*}$ connecting $e$ to $s$ is added (${}^{\bullet}t^{*} = \{e\}$ and $t^{*\bullet} = \{s\}$), the extended net $N'$ is strongly connected.

The SA reliability model analyses the reliability of a system from the parts it is composed of, and establishing the reliability model through structural decomposition is a very important step. From hardware system reliability models, two typical SA reliability models can be derived: series and parallel.

(1) Series SA model. To facilitate the calculation, consider each element as a subsystem. The series SA reliability model can be regarded as a series system composed of $n$ subsystems. Assume the $i$-th subsystem has lifetime $X_i$ and reliability $R_i(t)$ at time $t$, that $X_1, X_2, \cdots, X_n$ are mutually independent, and that the system has lifetime $X_s$ and reliability $R_s(t)$ at time $t$. Then:

$$R_s(t) = P(X_s > t) = P(\min(X_1, X_2, \cdots, X_n) > t) = \prod_{i=1}^{n} P(X_i > t) = \prod_{i=1}^{n} R_i(t) \qquad (1)$$

(2) Parallel SA model. Assume the SA consists of $n$ elements and completes its function as long as at least one element $E_i$ is working normally; such an SA is called a parallel SA of $n$ elements. Here again we treat each element as a subsystem. Assume the $i$-th subsystem has lifetime $X_i$ and reliability $R_i(t)$ at time $t$, that $X_1, X_2, \cdots, X_n$ are mutually independent, and that the system has lifetime $X_s$ and reliability $R_s(t)$ at time $t$. Then:

$$R_s(t) = P(X_s > t) = P(\max(X_1, X_2, \cdots, X_n) > t) = 1 - P(X_1 \leq t, X_2 \leq t, \cdots, X_n \leq t) = 1 - \prod_{i=1}^{n} P(X_i \leq t) = 1 - \prod_{i=1}^{n} \big(1 - R_i(t)\big) \qquad (2)$$

B. AADL Architecture Model to Petri Net Transformation

The basic transformation rules map the most basic components of AADL onto the places of the Petri net model, map the connections between software architecture components onto the transitions of the Petri net, map components with only out-data/event features onto places holding a token, and map the connections in AADL onto the arcs of the Petri net, as shown in Figure 2.

Figure 2. Transformation rules from AADL model elements to Petri net elements

Different component transformations can be used for different levels of modelling in a system. At the system level, subsystem components can be transformed as part of the model, and hardware components such as buses, devices, memories and processors can be transformed as well. At the subsystem level, components such as processes, threads and data can be transformed as part of the model. Figure 3 below shows a basic AADL model. The model contains a system component called complete, with five process components: init, data_check, bit, cds and start_cds. The system implements a function that activates bit or cds: when the event is active, the system runs from init until bit returns a result or cds starts normally.
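To make the mapping concrete before applying it, here is a toy Python sketch (ours, not the authors' tool). Since Figure 3 is not reproduced, the connection topology and connection names of the example model are assumptions, and token placement for out-only components is omitted for brevity:

```python
# Figure 2 mapping, simplified: AADL components become places, AADL
# connections become transitions, and every connection contributes two
# arcs (source place -> transition -> target place).

def aadl_to_petri(components, connections):
    """components: component names; connections: (source, name, target)."""
    S = set(components)          # places
    T = set()                    # transitions
    F = set()                    # arcs
    for src, name, dst in connections:
        T.add(name)
        F.add((src, name))       # place -> transition
        F.add((name, dst))       # transition -> place
    return S, T, F

# Hypothetical topology for the model of Figure 3 (not given in the text):
S, T, F = aadl_to_petri(
    ["init", "data_check", "bit", "cds", "start_cds"],
    [("init", "c1", "data_check"),
     ("data_check", "c2", "bit"),
     ("data_check", "c3", "cds"),
     ("cds", "c4", "start_cds")])
# The resulting (S, T, F) satisfies Definition 1 (see the checker above).
```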
The AADL model is transformed following these rules; Figure 3 shows the resulting basic AADL model.

Figure 3. A basic AADL model

C. Reliability Calculation Based on the Transformation of the AADL System Model to Petri Nets

The software architecture Petri net (SAPN) is a reliability assessment model built on the software architecture. The model uses software components, connections [14] and the correlations between them to structure a Petri net model [15]: the Petri net obtained from the AADL model expresses the components of the SA as the places of the Petri net and the connections of the SA as its transitions, so that the interplay of a component and a connection in the SA is abstracted as the transition from one place to another [16]. Each element of the SAPN model is then weighted. The model abstraction is defined as follows:

Definition 4 (weighted SAPN): A weighted SAPN is an eight-tuple $(P, T, H, S, EN, Pr, R_C, R_P)$, where $P$ is the set of places, $T$ is the set of transitions, the total function $H: T \times P \to [0,1]$ assigns reliability weights to arcs, $S$ and $EN$ are the start and end places, $Pr$ is the set of transition probabilities, $R_C \subseteq [0,1]$ is the reliability metric domain of components and connections, and $R_P \subseteq [0,1]$ is the reliability metric domain of the transition process.

This paper treats components and connections as black boxes, so the reliabilities of components and connections are given by reliability tests and historical data. The start component is assumed to be an additional component that performs no actions; its reliability is 100%. The steps for assessing the reliability of the SAPN are as follows:

Step 1: Set up the weighted SAPN model according to the SA model [17].

Step 2: Set up the test paths (PW) according to the transition probabilities of the weighted SAPN model; if a cyclic test path exists in the weighted SAPN, the path is not calculated repeatedly. In the weighted SAPN, the breadth-first search (BFS) algorithm can be used to work out the test paths from the start to the end, and the transition probability of a PW is calculated as:

$$P_{PW} = \prod_{i=1}^{n} P_i$$

Step 3: Calculate the reliability of each test path. Assuming the PW is $p_1, p_2, p_3, \ldots, p_i, \ldots, p_n$, the reliability of the test path is:

$$R_{PW} = \prod_{i=1}^{n} R_{p_i} = \prod_{i} R_{C_i} \cdot \prod_{j} R_{L_j}$$

where $R_{C_i}$ is the reliability of component $C_i$ of the SA and $R_{L_j}$ is the reliability of connection $L_j$ of the SA traversed by the path, each factor $R_{p_i}$ being the reliability of the corresponding transition process.

Step 4: The reliability of the SA is calculated as:

$$R_{SA} = \sum_{i=1}^{n} R_{P_i} \cdot P_{P_i}$$

where $P_{P_i}$ is the transition probability along the path $P_i$.
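Steps 2 to 4 can be sketched compactly in Python (ours, not the authors' implementation; the adjacency-list encoding and all names are assumptions). The paper finds the test paths with BFS; a depth-first enumeration of the simple paths from the start place S to the end place EN yields the same set:

```python
def simple_paths(succ, start, end, path=None):
    """Yield every simple path from start to end; a node is never
    revisited, so cyclic test paths are not counted repeatedly (Step 2)."""
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nxt in succ.get(start, ()):
        if nxt not in path:
            yield from simple_paths(succ, nxt, end, path)

def path_reliability(path, rel):
    """Step 3: the path reliability is the product of the reliabilities
    of the components and connections the path traverses."""
    r = 1.0
    for node in path:
        r *= rel[node]
    return r

def architecture_reliability(path_rels, path_probs):
    """Step 4: probability-weighted sum of the path reliabilities."""
    return sum(r * p for r, p in zip(path_rels, path_probs))

# e.g. paths = list(simple_paths({"S": ["A"], "A": ["EN"]}, "S", "EN"))
```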
IV. EXAMPLE VERIFICATION

This paper takes a flight control system as an example. The state selector and the operating panel are the interfaces of the flight control system: pilots use them to set and to display the status of the system functions. The system consists of stability augmentation/stabilisation (PCS, LCS), automatic flight (ap_manage) and automatic balancing (pap, lap), with a configuration of more than three degrees for the stability augmentation/stabilisation system (rm). In addition, the system owns a fly-by-wire subsystem, which consists of the flight state selector (CDS), a built-in OS, BIT management, and several other main modules.

The AADL model of the flight control system is shown in Figures 4 and 5. The reliability of the Petri net obtained from the AADL architecture model is then evaluated: according to the transformation rules for the AADL model structure, the AADL structure is mapped onto the GSPN graph shown in Figure 6.

Test paths are generated according to the transition probabilities of the weighted SAPN, using the BFS algorithm to compute the paths from S to EN:

- P1: Operating - key_value - State_selector - POWERON - IOP - os_startup
- P2: Operating - key_value - State_selector - scheduleBit - BIT - pNVM - RM - air_startup - CAS - show_value - Operating
- P3: Operating - key_value - State_selector - pNVM - RM - air_startup - CAS - show_value - Operating
- P4: Operating - key_value - State_selector - air_startup - CAS - show_value - Operating
- P5: Operating - key_value - State_selector - scheduleBit - BIT - pNVM - RM - air_startup - lap - pro_l_nav_sub - apmanage - ap_man - lap - show_value - Operating
- P6: Operating - key_value - State_selector - scheduleBit - BIT - show_value - Operating
- P7: Operating - key_value - State_selector - pNVM - RM - show_value - Operating
- P8: Operating - key_value - State_selector - air_startup - CAS - show_value - Operating
- P9: Operating - key_value - State_selector - air_startup - lap - pro_l_nav_sub - apmanage - ap_man - lap - show_value - Operating
- P10: Operating - key_value - State_selector - POWERON - IOP - os_startup - OS - StartLogic - startlog - scheduleBit - BIT - show_value - Operating

Assume that the reliabilities of the components C1-C10 {Operating, State_selector, IOP, OS, startlog, BIT, RM, CAS, lap, apmanage} are {0.99, 0.98, 1, 0.95, 0.98, 1, 1, 0.98, 0.98, 0.97}, and that the reliabilities of the connections {key_value, POWERON, os_startup, StartLogic, scheduleBit, pNVM, air_startup, pro_l_nav_sub, ap_man, show_value} are {0.98, 1, 0.95, 0.98, 1, 1, 1, 0.98, 0.97}.

Figure 4. AADL model of the flight control system
Figure 5. AADL model of the flight control system
Figure 6. SAPN of the flight control system

1) According to the path-weighting principle, the average transition probability of each path is:

P1: 0.03125, P2: 0.078125, P3: 0.0625, P4: 0.125, P5: 0.078125, P6: 0.125, P7: 0.125, P8: 0.125, P9: 0.03125, P10: 0.0625

2) From the reliabilities of the components on each path, the reliability of each path is:

P1: 0.79, P2: 0.90, P3: 0.922, P4: 0.922, P5: 0.89, P6: 0.90, P7: 0.922, P8: 0.922, P9: 0.86, P10: 0.768

3) Substituting these results into the architecture formula, the reliability of the software architecture is:

R = 0.7074085 / 0.78125 = 0.9055
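As a quick check of the arithmetic in step 3) (our sketch, not part of the paper): the worked example normalises the weighted sum of path reliabilities by the total transition probability of the enumerated paths, i.e. it conditions on one of those paths being taken:

```python
weighted_sum = 0.7074085   # Σ R_Pi · P_Pi, as reported above
total_prob = 0.78125       # Σ P_Pi, as reported above
print(round(weighted_sum / total_prob, 4))   # 0.9055
```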
V. SUMMARY AND OUTLOOK

Reliability evaluation based on the architecture is meaningful in two main respects. On the one hand, some software cannot be black-box tested at the system level; a reliability evaluation model of the system structure is then established to calculate the reliability of the whole system. On the other hand, a software architecture model can be established early in software development. A large body of engineering practice shows that many software defects appear during the design period, so it is valuable to set up an architecture reliability model at design time to guide software development. The key components with the greatest influence on reliability can be found by sensitivity analysis of individual components, avoiding unnecessary mistakes. This work can improve system reliability and thus save substantial cost in the final verification stage.

As an analysis and design language standard in aviation, AADL can model and analyse both functional and non-functional system properties. This paper gives a reliability assessment method based on the AADL architecture model. As a high-level modelling language, AADL cannot be used directly for reliability computation and analysis, so the AADL-based model must be transformed into a lower-level formalised model. Petri nets have many excellent properties and a correspondence with AADL can be found, so the AADL model is transformed into a Petri net in order to analyse the system. This paper has focused on the rules for transforming the AADL architecture model into a Petri net and has synthesised the mathematical methods for reliability calculation on Petri nets. Finally, a flight control system is taken as an example to verify the feasibility of the method.

The next research directions are: 1) integrate the automatic transformation rules and the Petri net analysis and evaluation functions into the OSATE modelling tool; 2) the new generation of integrated avionics systems is characterised by "high coupling and low cohesion", which is entirely different from traditional distributed software architectures, so the software reliability model should be calibrated and improved accordingly to fit reliability evaluation for new-generation avionics systems; 3) the scale of the Petri net model grows exponentially with the number of places and transitions, which leads to the state-space explosion problem; how to ease this problem and simplify the model without reducing its descriptive ability is a further research direction.

ACKNOWLEDGMENT

This work was supported by the National Natural Science Foundation of China (61300069).

REFERENCES

Dongyi Ling was born in Changsha, Hunan, China. She is a Ph.D. student in the School of Reliability and Systems Engineering, Beihang University, Beijing. Her research interests include software architecture analysis, component-based software modelling, testing, and software reliability evaluation.

Shihai Wang received his Ph.D. in computer science from the University of Manchester, UK, in 2010. He joined the School of Reliability and Systems Engineering, Science and Technology on Reliability and Environmental Engineering Laboratory, Beihang University, as a lecturer in 2011. His current research interests include software testing and software fault diagnosis.

Bin Liu holds a Ph.D. and is a professor. His research interests include systems engineering, software testing and software reliability.

Xiaoqi Xing is a Ph.D. student in the School of Reliability and Systems Engineering, Beihang University, Beijing. His research interests include software reliability evaluation, software testability and software fault injection.
{"Source-Url": "http://ojs.academypublisher.com/index.php/jnw/article/download/jnw091027212727/10166", "len_cl100k_base": 6334, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 25679, "total-output-tokens": 7832, "length": "2e12", "weborganizer": {"__label__adult": 0.0004429817199707031, "__label__art_design": 0.0007476806640625, "__label__crime_law": 0.0004189014434814453, "__label__education_jobs": 0.0009660720825195312, "__label__entertainment": 9.381771087646484e-05, "__label__fashion_beauty": 0.0002002716064453125, "__label__finance_business": 0.0003178119659423828, "__label__food_dining": 0.0004401206970214844, "__label__games": 0.0009326934814453124, "__label__hardware": 0.003204345703125, "__label__health": 0.0007214546203613281, "__label__history": 0.0004031658172607422, "__label__home_hobbies": 0.00014889240264892578, "__label__industrial": 0.000858306884765625, "__label__literature": 0.0003409385681152344, "__label__politics": 0.00026297569274902344, "__label__religion": 0.0006232261657714844, "__label__science_tech": 0.1397705078125, "__label__social_life": 9.506940841674803e-05, "__label__software": 0.007686614990234375, "__label__software_dev": 0.83984375, "__label__sports_fitness": 0.00035452842712402344, "__label__transportation": 0.00104522705078125, "__label__travel": 0.00023949146270751953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29881, 0.06674]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29881, 0.61607]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29881, 0.90063]], "google_gemma-3-12b-it_contains_pii": [[0, 5292, false], [5292, 11282, null], [11282, 15921, null], [15921, 19295, null], [19295, 22642, null], [22642, 27881, null], [27881, 29881, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5292, true], [5292, 11282, null], [11282, 15921, null], [15921, 19295, null], [19295, 22642, null], [22642, 27881, null], [27881, 29881, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29881, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29881, null]], "pdf_page_numbers": [[0, 5292, 1], [5292, 11282, 2], [11282, 15921, 3], [15921, 19295, 4], [19295, 22642, 5], [22642, 27881, 6], [27881, 29881, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29881, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
d345de9caf0899c3cf8a4d9764921b36f50f1db9
Modelling the environment using graphs with behaviour: do you speak Ocelet?

P. Degenne^a, A. Ait Lahcen^b, O. Curé^c, R. Forax^c, D. Parigot^b, D. Lo Seen^a

^a CIRAD - UMR TETIS, Montpellier, France (pascal.degenne@cirad.fr), (danny.lo_seen@cirad.fr)
^b INRIA, Sophia Antipolis, France (ayoub.ait_lahcen@inria.fr), (didier.parigot@inria.fr)
^c Institut Gaspard Monge - Université Paris Est, Marne la Vallée, France (ocure@univ-mlv.fr), (forax@univ-mlv.fr)

iEMSs 2010, July 2010, Ottawa, Canada. HAL Id: hal-00794328.

Abstract: Environmental modelling often implies defining elements that relate and interact with each other, in a system that evolves with time. Ocelet is a domain-specific environmental modelling language designed around a limited set of key concepts chosen to help modellers focus on their model, while leaving the implementation to an automatic code generation phase. Here, we focus more specifically on the Ocelet concept called Relation, which allows building graphs that describe which elements of the model interact, and how. It is designed to be used in combination with two other main concepts: Entities (the elements of the model) and Scenarios (describing the temporal evolution). Every Ocelet Relation can express one specific point of view on a system, and several Relations can be combined to integrate different points of view in the same model. This diversity of points of view conveys expressive power: a system can be modelled with different expert views, at different spatial scales, or as an environment sensed by its different components. Moreover, with this versatile design, a Relation defined for one specific model can be reused in a different modelling context, so libraries of generic interaction behaviours can be developed for efficient and reliable modelling practices. An example is given to illustrate how interaction graphs can be built, manipulated, and reused using Ocelet. Finally, we give insight into the code generation phase that produces the simulator.

Keywords: Domain Specific Language, Landscape, Ocelet, Interaction graph, Relation

1 INTRODUCTION

Interactions are at the heart of most environmental studies, and the way they are modelled is strongly characterised by the modelling formalism used. In System Dynamics (SD), interaction operations can be performed between any of the components of the system, but the latter are generally not spatially represented.
In Cellular Automata, interaction operations are defined by a set of rules applied to the cells of a tessellation, and every cell can only interact with its immediate neighbours. Agent-based models (ABM) are more versatile, and agents are allowed to interact with each other according to a set of locally defined rules. In the case of Discrete Events and Object-Oriented design, there is no constraint on how the elements of a model can interact with each other, but this comes at the expense of complex programming for the modeller.

When building environmental simulation models, Domain Specific Languages (DSL) are an interesting compromise between expressive power and a steep programming-language learning curve. However, to our knowledge only a few attempts have been made in this domain so far. Fall and Fall [2001] proposed SELES, in which a landscape modelling language is used to define landscape states in the form of grid layers and a set of landscape events to make the model evolve with time. Gaucherel et al. [2006] designed L1, a DSL-based platform for simulating the evolution of patchy landscapes. CAOS (Grelck et al. [2007]) is a DSL specialised in the parallel simulation of cellular automata. In these examples, a major constraint for the modeller is that the representation of space is imposed. This is not the case for Ocelet, a DSL developed for landscape and environmental modelling and simulation (Degenne et al. [2009]). Ocelet was built by carefully selecting the most elementary concepts needed for spatio-temporal modelling. It offers the possibility of creating new modelling primitives based on these concepts, and of incrementally assembling them to develop complex models. The result is a DSL with increased expressivity, where the complexities of implementation are left to a code generator.

In this article we emphasise the concept of interaction graphs, as used in Ocelet under the name of Relation. In the next section we explain how and why interaction graphs can be used to express relations in a model. We then briefly present the Ocelet modelling language, the associated code generation and development framework, and how Ocelet can be used with an ontology language. Finally, based on an example, we present how a relation can be built, manipulated and reused within Ocelet.

2 Relations as Interaction Graphs

An interaction graph not only defines who is in relation (the graph structure) but also how the elements relate (the behaviour). When modelling the environment, we consider that working directly on interaction graphs can be useful for at least two reasons. First, acting at the most elementary level of the underlying data structure (a set of dynamic graphs) allows manipulating different kinds of relations (aggregations, spatial, functional, ...) in a similar way. Second, the state of the model at any given time can be analysed using graph analysis algorithms, to extract topological characteristics that emerge during the simulation. These may reflect specificities of the model that would hardly be visible otherwise. Such analysis algorithms have for example been developed by Batagelj and Mrvar [1998], Fuller and Sarkar [2006] or Saura and Torne [2009]. In this section, we describe the structure and the dynamic nature of relations, or interaction graphs. We then explain how they are made to express behaviour, and how a relation developed in one case can be reused in another. Finally, we outline how a relation holds the notion of a point of view.
2.1 Graph with dynamic structure and behaviour

The entities of a model can, at a given time, relate to each other in diverse ways. For example, neighbourhood (where two entities are considered neighbours if they are close enough for a given distance function), aggregation (where some entities are considered parts of a larger composite entity), connectivity (where entities can reach each other if a communication route exists between them) and influence (where one entity can influence the behaviour of another) are all relations. For each relation, one can build a graph where the nodes are entities and the relations between entities are the arcs. In many environmental modelling cases the graphs needed are actually hypergraphs (each arc may connect more than two nodes). Such hypergraphs can be built explicitly. For example, if we have several groups of entities connected to each other in the form of simple graphs, one can establish another graph connecting those groups to each other at a broader scale; in that way, it is possible to consider the behaviour of entities within a group as well as between groups. But one can also build hypergraphs implicitly. For example, in the case of a spatial relation where an agricultural parcel is linked to each of its borders by one n-node arc, a graph is built using arcs that link more than two nodes. Such graphs based on n-node arcs are de facto hypergraphs, and using n-node arcs can be a way to simplify the graph structure that has to be manipulated in the model.

Another aspect to take into consideration when modelling with interaction graphs is their dynamic nature. During a simulation, some entities can be added to the model, others can disappear, and individual relationships can be established or removed. This means that the interaction graphs are dynamic, with evolving numbers of nodes and arcs and changing graph topologies. Attached to the graph are semantics that specify what happens between the linked entities when they do interact: the kind of information they exchange, the actions one performs on the other, and the effects produced by the interaction on the entities and on the arcs involved. In many types of environmental models, attaching behaviour to an interaction graph is not straightforward. Sometimes the graph structure is implicit (e.g. cellular automata based on tessellations) and only the behaviour is specified; the programming work is then reduced, but the specification of the behaviour is seriously constrained by the implicit graph structure. In other cases, where the graph structure is more versatile, a greater power of expression is obtained, but the lack of adapted tools makes the programming work difficult. To get the best of both solutions, it is necessary to have ways to manipulate the graph structure and attach the behaviour semantics directly to that structure, using one and the same appropriate modelling concept (an executable sketch of these notions is given at the end of this section).

2.2 Roles and re-usability

It is rare for an environmental model to be original in all its parts. The most common situation is that some parts of the model are similar to other, already existing models. Re-usability has been a key concern in software development, and in modelling tools as well.
In the case of behaviours attached to relation graphs, two situations can be considered:

Re-usability of a relation graph topology: it can be interesting to have ready-made relation graph structures, such as the 3-neighbour situations found in triangulated irregular networks, the 4- or 8-neighbour situations found in grids, or star- and circle-shaped relationships, to name a few. Based on the well-known characteristics of such structures, one could imagine a modelling tool that provides optimised implementations of them for use in different models.

Re-usability of an attached behaviour: in this case we wish to be able to reuse, in different modelling situations, the definition of how entities interact with each other when they do. To make the behaviour definition adaptable to a different context, the interaction should be specified not in terms of the entities relating with each other but in terms of the roles they play. It then becomes possible to attach a behaviour definition to a different relationship graph whose entities are able to play similar roles. It also means that a behaviour defined once can be instantiated several times, on different graphs, even within one and the same model.

Finally, it can be noted that by designing a modelling tool with the re-usability concerns described above, it becomes possible to build sub-model libraries (named primitives in Ocelet) and make them available to a community of modellers.

2.3 Modelling your point of view

At least two cases can be identified where the notion of point of view can take the form of semantics attached to a graph. First, when specialists from several different fields work on the same environmental model, they may share the same entities but need to describe the interactions between these entities differently, according to their own expert view. The nodes of the graph could be shared, but the arcs and the behaviour attached to those arcs would reflect their different points of view on the model. Second, it happens that different entities of a model have different points of view on their environment, and then have to interact accordingly with that environment (see the example in section 4). Here again the nodes of a graph could be shared, but the arcs and the behaviour attached to those arcs could be specific to every point of view.
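To summarise the ideas of this section in executable form, here is a minimal sketch (ours; written in Python, since Ocelet itself is only introduced in the next section). Arcs are tuples of nodes, so n-node hypergraph arcs are allowed; the structure can change during a run; and the behaviour attached to the arcs is invoked uniformly on every arc:

```python
class InteractionGraph:
    """A dynamic (hyper)graph with behaviour attached to its arcs."""
    def __init__(self, behaviour):
        self.behaviour = behaviour   # what an activated arc does
        self.arcs = []               # each arc is a tuple of nodes

    def connect(self, *nodes):       # n-node arcs are allowed
        self.arcs.append(nodes)

    def disconnect(self, *nodes):    # the structure is dynamic
        self.arcs.remove(nodes)

    def activate(self):              # run the behaviour on every arc
        for arc in self.arcs:
            self.behaviour(*arc)

# An implicit hypergraph: one arc links a parcel to all of its borders.
touches = InteractionGraph(
    lambda parcel, *borders: print(parcel, "touches", len(borders), "borders"))
touches.connect("parcel_1", "border_a", "border_b", "border_c")
touches.activate()
```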
3 The Ocelet Language

The main concepts of the Ocelet domain specific language are presented here. The concept of Relation in Ocelet is then explained in more detail, showing how it addresses most of the concerns discussed in section 2. Models written in Ocelet are translated into a general purpose language through a code generation phase before a simulation can be run. The code generation aspect, the associated development platform, and the use of Ocelet with an ontology language are also briefly presented.

3.1 Key concepts of the language

Three main concepts are at the core of the language. They are named *Entity*, *Relation* and *Scenario*:

- **Entity**: Entities are the basic elements that can be linked together to build a model. An entity may contain other entities, in which case it is called a composite entity; entities that do not contain other entities are atomic entities. Entities have properties that can be used to reflect their state. Entities also provide *Services*, which are published functions that can be called locally or remotely. The concept of entity is close to, and inspired by, the definition of Components in Service Oriented Architectures (Szyperski [1998]).
- **Relation**: A relation is a connection between two or more entities that provide and require compatible services. It defines the nature of the interactions between these entities and provides services for the activation of those interactions. This concept is detailed in paragraph 3.2.
- **Scenario**: A scenario is a sequence of actions composed of service calls or relation expressions within a model or composite entity. A scenario is activated for a period of time; it therefore expresses most of the temporal behaviour of a model or a composite entity.

In addition to these concepts, we have to mention a special category of atomic entities named *Datafacer*. A Datafacer is an atomic entity specialised in data access; it provides different mechanisms for data persistence. Its implementation code can be written in a programming language other than Ocelet, in order to optimise data access performance for every type of data source that a model can integrate. Other concepts, such as primary types (number, boolean, ...) and test and control instructions, are also available in Ocelet, but they do not differ from those of other programming languages such as C or Java and therefore require no specific description here. It is also important to mention that even though Ocelet is not strictly an object-oriented language, the elements (Entities and Relations) of a model have to be defined first and then instantiated within a Scenario, allowing the creation of as many individual copies as necessary.

3.2 Relations are interaction graphs in Ocelet

The Relation concept as defined in Ocelet is an interaction graph very close to what was discussed in section 2: it contains the information of *who* is in interaction and also of *how* they interact. As Relations have semantics attached to the arcs of their graph, they are constrained by the type of entities that can be linked. The definition of a Relation has to specify the roles played by the different entities involved, for example:

```
relation RelationName[roleA, roleB] {...}
```

The statement above defines a Relation of the most common kind: every arc of the graph links two nodes. The nodes will be entities, one entity playing role A and the other role B. Once defined, the Relation must be instantiated, and the entity types playing role A and role B must be stated for that instance:

```
myInstance = RelationName[EntityA, EntityB];
```

The fact that Relations are defined using roles makes them reusable in different contexts: a Relation carefully designed with genericity in mind can be used and adapted for several different models. To establish connections and actually build the graph, the predefined `connect()` and `disconnect()` services are available. For example,

```
myInstance.connect(lake, river);
```

implies that lake is an instance of EntityA, river is an instance of EntityB, and an arc will be added to the Relation graph between them. Ocelet allows Relations holding hypergraphs to be defined directly, by specifying more than two roles in the declaration statement, for example: relation RelationName[roleA, roleB, roleC, roleD] {...}. The *how* part is defined in the form of services that the modeller can write to precisely describe what happens when the entities interact.
The services are written in the declaration of the Relation, as in:

    relation RelationName[roleA, roleB] {
      service foo() {
        roleA.doSmthg();
        roleB.setVal(roleA.getVal());
      }
    }

The definition above implies that the entities playing roleA for that Relation must provide the two services doSmthg() and getVal(), while the entities playing roleB must provide the service setVal(); getVal() and setVal() must also return and accept compatible types. These constraints are verified when the Relation is instantiated. One important point to note is that only one call to the foo() service is necessary to activate all the arcs of the relation graph (a sketch of this role-based mechanism is given at the end of section 4).

3.3 Code generation and development platform

Models written in Ocelet are not compiled directly but are first translated into a general purpose programming language. We use Java as the target language, and the Ocelet development environment is integrated into the Eclipse platform in the form of Eclipse plug-ins. The generated code is based on components as defined by Szyperski [1998], to better separate the functional aspects (the code related to what the model is about) from the non-functional ones (the component discovery and communication mechanisms), as well as on the Service Oriented Computing (SOC) paradigm as described by Papazoglou and Georgakopoulos [2003]. SOC is a paradigm that uses services as the fundamental elements for developing applications. The main purpose of this approach is to introduce the minimum of dependencies between software bricks, to promote their re-usability and their dynamic discovery and combination at run time. For every Ocelet element, description files are generated that describe the services provided and required by that element. According to the description files, the component generator produces non-functional code that manages external communications by sending or receiving messages synchronously or asynchronously. The assembly of components for a given application is not necessarily known at the start of a simulation and may change dynamically over time. Such a component framework provides an extensibility mechanism allowing a clear separation between the business logic and context-aware service interactions. Code generation allows modellers using Ocelet to take advantage of this dynamic execution environment without having to deal with implementation details.

3.4 Mapping Ocelet's concepts to an ontology language

The limited set of key concepts present in Ocelet was selected to ease the work of modellers, but also to permit a mapping to an ontology language. We have chosen OWL2, an ontology language based on Description Logics (Baader et al. [2003]) and recommended by the W3C's Semantic Web working group. The mapping is relatively straightforward and enables automatic transformation from one language to the other. Hence modellers have the possibility either of specifying their models directly in OWL2, using an editor like Protégé, or of specifying the model directly in Ocelet, using an Eclipse plug-in. Such an OWL2 serialisation of an Ocelet model provides several advantages. The main one consists in benefiting from state-of-the-art reasoning tools for standard inference services, i.e. detecting and repairing ontology inconsistencies and classifying the set of entities of a model. A second advantage is an efficient storage and query solution for model instances.
This aspect is particularly important considering that modellers will simulate temporal situations in any possible order. This feature will enable modellers to analyse the data stored in these data management systems directly with a query language, or through applications using the APIs of these query languages.

4 RELATIONS ILLUSTRATED

The development of the Ocelet DSL was based on the analysis of several very different modelling situations studied by our partners in different fields, such as the ecology of mangroves in French Guiana, the epidemiology of Rift Valley Fever in Senegal, agricultural land dynamics in France and West Africa, and forest landscape dynamics in South India. For the purpose of this paper, however, and in order to illustrate the use of Relations in Ocelet in particular, we use a simple and didactic example of a modelling situation.

4.1 Neighbourhood from a tree point of view

In this example, the objective is to model the progressive colonisation of a given landscape by trees. A first version of the model contains two pieces of land crossed by a river. The trees growing on one side (Land 1) spread their seeds around their close neighbourhood (Fig. 1(a)). In the model, we define a relation named DropSeeds to connect every tree to the land or river entities that are close enough to receive its seeds (Fig. 1(b)), and to specify what happens when seeds are dropped. The seeds falling into the river are considered lost, and in the present case, as the river is too large, the trees cannot spread their seeds to the other side of the river.

Figure 1: Trees dropping seeds in their close neighbourhood

The Tree entity is entirely defined using Ocelet. It has position coordinates and a service getDroppedSeedLocations() that returns the positions of the seeds it produces. The Land and River entities are also defined using Ocelet, but their shapes are defined by data sources. Those data sources are accessed by an appropriate Datafacer. In particular, the Datafacer has a service that can provide the distance between a shape and a given location. It does not matter whether internally the shapes are stored in vector or grid format, or whether they reside in a file or a database: it is the purpose of Datafacers to hide the underlying data implementations, and they are expected to be optimised for the kind of data source they deal with. The DropSeeds Relation is defined as follows:

```ocelet
relation DropSeeds[seedEmitter, seedReceiver] {
  service drop() {
    group[Position] seedsPos = seedEmitter.getDroppedSeedLocations();
    for (pos in seedsPos) {
      seedReceiver.acceptSeedAt(pos);
    }
  }
}
```

It can be noted that the DropSeeds Relation is defined not using the Tree and Land entities, but using the roles they can play for that Relation: seedEmitter and seedReceiver. At initialisation, the Relation is instantiated with the statement dropseed1 = DropSeeds[Tree, Land]; which specifies that for dropseed1 the seedEmitter role will be played by Tree entities and the seedReceiver role by Land entities. Then, tree-land connections are established using calls to dropseed1.connect(). The resulting graph held by the dropseed1 instance of the Relation will be similar to the example shown in Fig. 1(b). During the simulation, a scenario calls the dropseed1.drop() service when necessary.
One such call is enough for the code of that service to be executed on every arc belonging to the dropseed1 Relation graph. That service takes a list of seed positions from the seedEmitter and proposes to the seedReceiver to accept those seeds if they are located within its area. The seed location tests are performed by the seedReceiver, through a Datafacer in the case of Land entities.

4.2 Neighbourhood from a bird point of view

Figure 2: Adding a bird's neighbourhood point of view allows trees to spread across the river

Now, let us introduce into the model a species of birds that eat the seeds of the trees and drop them somewhere in their living area. A new Relation named EatSeeds[seedProvider, seedEater] can be defined to specify how the birds choose the trees or seeds in a given area. Details of that new Relation are not given here, as they are in principle similar to those of the DropSeeds Relation described above. The same DropSeeds Relation can then be reused to express the bird's neighbourhood point of view. For that, a second instance of DropSeeds is created: dropseed2 = DropSeeds[Bird, Land]. As a prerequisite for playing the seedEmitter role in the Relation, the Bird entity must provide the getDroppedSeedLocations() service. For the birds, both Land 1 and Land 2 are within reach, and the dropseed2 graph reflects that situation.
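As a rough Python analogue of the mechanism illustrated in this section (ours, not generated Ocelet code; the entity classes and seed positions are purely illustrative), the same relation definition, written over the seedEmitter and seedReceiver roles, can be instantiated once for the tree point of view and once for the bird point of view:

```python
class Relation:
    """who: a set of arcs; how: a service written in terms of roles."""
    def __init__(self, service):
        self.service = service
        self.arcs = []

    def connect(self, emitter, receiver):
        self.arcs.append((emitter, receiver))

    def call(self):
        # One call activates the service on every arc of the graph.
        for emitter, receiver in self.arcs:
            self.service(emitter, receiver)

def drop(seed_emitter, seed_receiver):
    """The drop() service of DropSeeds, expressed over roles."""
    for pos in seed_emitter.get_dropped_seed_locations():
        seed_receiver.accept_seed_at(pos)

class Tree:
    def get_dropped_seed_locations(self): return [(10.0, 4.5)]

class Bird:  # can also play seedEmitter, by providing the same service
    def get_dropped_seed_locations(self): return [(85.0, 12.0)]

class Land:
    def accept_seed_at(self, pos): print("seed accepted at", pos)

dropseed1 = Relation(drop); dropseed1.connect(Tree(), Land())  # tree view
dropseed2 = Relation(drop); dropseed2.connect(Bird(), Land())  # bird view
dropseed1.call(); dropseed2.call()
```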
5 CONCLUSION AND PERSPECTIVES

Interactions between landscape elements are essential in environmental modelling. For this reason, we have taken special care, when developing our DSL-based approach, to design a versatile way of modelling them. Ocelet makes it possible to model all the different categories of relationship, including spatial and functional ones, with one and the same programming paradigm (graphs with behaviour), and offers the tools necessary to considerably simplify what would otherwise be tedious programming work. In Ocelet, a relation is defined very simply by an interaction graph that describes both who is interacting and how. The behaviour attached to a graph can be activated on all the arcs at once with only one service call. That behaviour can also easily be made reusable for application in a different modelling context. This relatively simple way of defining a relation has been found to allow modelling a large variety of situations, and reusable primitives are being built with Ocelet to ease the modelling process. We believe that the possibilities offered by integrating network analysis features into Ocelet's relations, building reusable modelling bricks, and relating them to an ontology language are promising research subjects to be investigated.

ACKNOWLEDGMENTS

This work was supported (in part) by the Agence Nationale de la Recherche (ANR) under Project No. ANR-07-BLAN-0121 (STAMP: Modelling dynamic landscapes with Spatial, Temporal And Multi-scale Primitives).

REFERENCES
{"Source-Url": "https://hal-upec-upem.archives-ouvertes.fr/hal-00794328/file/iEMSs2010.pdf", "len_cl100k_base": 6058, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 21591, "total-output-tokens": 7117, "length": "2e12", "weborganizer": {"__label__adult": 0.0003447532653808594, "__label__art_design": 0.0005207061767578125, "__label__crime_law": 0.00034880638122558594, "__label__education_jobs": 0.0009946823120117188, "__label__entertainment": 9.369850158691406e-05, "__label__fashion_beauty": 0.0001823902130126953, "__label__finance_business": 0.0002605915069580078, "__label__food_dining": 0.0004208087921142578, "__label__games": 0.0006771087646484375, "__label__hardware": 0.0010251998901367188, "__label__health": 0.000614166259765625, "__label__history": 0.0005106925964355469, "__label__home_hobbies": 0.0001628398895263672, "__label__industrial": 0.000614166259765625, "__label__literature": 0.0003647804260253906, "__label__politics": 0.0003046989440917969, "__label__religion": 0.0005259513854980469, "__label__science_tech": 0.1187744140625, "__label__social_life": 0.00014078617095947266, "__label__software": 0.0111541748046875, "__label__software_dev": 0.8603515625, "__label__sports_fitness": 0.00038504600524902344, "__label__transportation": 0.0007343292236328125, "__label__travel": 0.00028443336486816406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30912, 0.02221]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30912, 0.81447]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30912, 0.91667]], "google_gemma-3-12b-it_contains_pii": [[0, 1053, false], [1053, 4744, null], [4744, 9419, null], [9419, 13106, null], [13106, 17169, null], [17169, 21180, null], [21180, 23378, null], [23378, 27573, null], [27573, 30912, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1053, true], [1053, 4744, null], [4744, 9419, null], [9419, 13106, null], [13106, 17169, null], [17169, 21180, null], [21180, 23378, null], [23378, 27573, null], [27573, 30912, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30912, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30912, null]], "pdf_page_numbers": [[0, 1053, 1], [1053, 4744, 2], [4744, 9419, 3], [9419, 13106, 4], [13106, 17169, 5], [17169, 21180, 6], [21180, 23378, 7], [23378, 27573, 8], [27573, 30912, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30912, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
e1ebe5d3fc27f1ae8e4422fc3f4001d24de2865b
Upper and Lower Bounds on the Number of Solutions

Jean-Philippe Martin, Microsoft Research Cambridge, December 2007

Abstract: We present a fast and extensible algorithm for computing upper and lower bounds on the number of solutions to a system of equations. For a given size of variables (e.g., 32 bits), the algorithm can be run in time linear in the number of terms and variables, at the cost of looser bounds.

1 Introduction

It is well known how to compute a solution for a system of equations. In this paper we instead try to quickly determine lower and upper bounds on the number of solutions. The problem of counting solutions has a long history ([3], for example, says that Binet and Cauchy looked in 1812 into the problem of counting the number of perfect matchings in a bipartite graph). There is a wealth of papers and tools for the case of boolean variables (known as #SAT), but we are not aware of any tool for more general constraints (known as #CSP) that focuses on speed. Our goal is a fast algorithm (one that finishes in the order of seconds) that can handle complex expressions such as standard arithmetic or bit shifts (Figure 1). The answer must be correct, but we are willing to accept looser bounds in exchange for faster execution. The algorithm must be easily extensible to new operations as the need arises. Our new algorithm satisfies these requirements.

2 Fast Solution Count Bounder

The Fast Solution Count Bounder (FSCB) algorithm takes as input a series of equations in a rich subset of the Yices syntax (see Figures 3–8 for a list) and outputs:
• A lower bound on the number of solutions
• An upper bound on the number of solutions
• For each variable, a lower bound on the number of acceptable values for that variable
• For each variable, the corresponding upper bound.
A value $v$ is acceptable for a variable $x$ if there is at least one solution to the system of equations where $x = v$. For the example of Figure 1, a perfect solution count bounder would return $(16384, 16384, [128, 128], [128, 128])$. Our FSCB algorithm returns that exact value. For this example the bounds compose: the first expression has 128 solutions, the second has 128 as well, and the two together have exactly 16384 ($128^2$) solutions. FSCB leverages this composability for speed. The computation proceeds in two steps: FSCB first computes the bounds for each individual equation, and then combines them for the final answer.

2.1 Per-Equation Bounds

The first step is to compute per-equation bounds. For best results, FSCB first combines all the equations that refer to the same single variable. For example, $(/= \text{INPUT}[0]\ 2)$ and $(< \text{INPUT}[0]\ 128)$ would be combined into the single equation $(\text{and } (/= \text{INPUT}[0]\ 2)\ (< \text{INPUT}[0]\ 128))$. This does not change the semantics of the input. For each of those combined equations, FSCB then evaluates the expression for each value of the input variable to determine the exact number of solutions. For a given size of variables, the running time of this step is $O(n + m)$ where $n$ is the number of equations and $m$ is the number of variables. This step is only practical for relatively small variables (8 to 16 bits in our experience); equations that refer to larger variables can be left for the next step. FSCB then considers each multi-variable equation in turn. It bounds the number of solutions by progressively constructing an approximation that we call a summary.
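Before turning to summaries, here is a minimal illustration of the exhaustive single-variable step just described. It is a sketch under our own assumptions: equations are modeled as Python predicates rather than the paper's Yices-based input, and the variable is 8 bits wide.

    # Exhaustive per-equation counting for a single small variable.
    # Each "equation" is a Python predicate over the variable's value.
    def count_single_var_solutions(predicates, bits=8):
        combined = lambda x: all(p(x) for p in predicates)
        return sum(1 for x in range(2 ** bits) if combined(x))

    # Example mirroring the combination above:
    # (/= INPUT[0] 2) and (< INPUT[0] 128)
    eqs = [lambda x: x != 2, lambda x: x < 128]
    print(count_single_var_solutions(eqs))  # 127 acceptable values

This brute force is exact, which is why FSCB reserves it for variables small enough to enumerate.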
The summary of an expression contains the set of input variables it refers to, a lower and upper bound for the value of the expression, a lower and upper bound for the number of images (we call this the range), and two boolean flags. An expression is flagged as homogeneous only if every image has the same number of preimages (recall that if $f(x) = y$, then $x$ is a preimage of $y$). An expression is marked masked-homogeneous only if it is homogeneous and there exist $v, w$ such that the set of images is exactly $\{\,(x \,\&\, v)\,|\,w : \text{for all } x\,\}$. The symbol "&" denotes bitwise-and, and "|" is bitwise-or. For example, $(\text{bitwise-and } x\ 31)$ (for 8-bit $x$) has minimum 0, maximum 31, has exactly 32 distinct images (values 0 through 31) and is both homogeneous (every image has 8 preimages) and masked-homogeneous ($v = 31$, $w = 0$).

FSCB first replaces each leaf of the expression (i.e. a constant or variable) with the corresponding summary. The leaf summary rules are shown in Figure 2. Next, FSCB summarizes expressions of summaries. In this way, FSCB goes up the expression tree and progressively summarizes all the terms. For each supported operation, FSCB indicates how to compute the expression summary from that of the sub-expressions. To extend FSCB to support additional operations, one simply has to add new rules there. For example, consider the second equation of Figure 1. FSCB first summarizes the $(\text{bitwise-and INPUT}[1]\ 1)$ subexpression. The resulting summary has minimum value 0, maximum 1, range 2, and is both homogeneous and masked-homogeneous. FSCB would then summarize the constant 3, and next combine the two summaries using the addition rule to obtain the summary for $(\text{add } (\text{bitwise-and INPUT}[1]\ 1)\ 3)$. In this case, the minimum is 3, the maximum 4, the range is still 2, and the expression is homogeneous but not masked-homogeneous.

<table>
<thead>
<tr><th>Summary field</th><th>Variable $x$</th><th>Constant $c$</th></tr>
</thead>
<tbody>
<tr><td>$(l, h)$</td><td>$(0, 255)$</td><td>$(c, c)$</td></tr>
<tr><td>$(lr, hr)$</td><td>$(256, 256)$</td><td>$(1, 1)$</td></tr>
<tr><td>homogeneous</td><td>yes</td><td>yes</td></tr>
<tr><td>masked-homogeneous</td><td>yes</td><td>yes</td></tr>
</tbody>
</table>

Figure 2: Leaf summary rules

Pseudocode for these summary rules is shown in Figures 3–6 for the case of 8-bit values. They can easily be generalized to wider values. The pseudocode uses the notation $[l, h, lr, hr, hom, mh]$ to denote a summary with minimum $l$, maximum $h$, min-range $lr$, max-range $hr$, flagged as homogeneous if $hom$ is true and flagged as masked-homogeneous if $mh$ is true. For simplicity, the code for commutative functions assumes that if one of the arguments is constant, then it is the second one. For example, if $c$ is a constant then (add $c$ $f$) is summarized using the rule for (add $f$ $c$). The pseudocode uses a number of helper functions, described in Figure 7. These rules can, for example, correctly compute that the expression $(((x \text{ xor } y) + 5)\,\&\,13) + 7$ can take 8 possible values, between 7 and 20.

Some of the rules have tests that may appear surprising at first. As an example, we explain the rules for multiplication of $f$ and $g$ (Figure 4). The first check (line 2) determines whether the multiplication can overflow. If it does not, then the lower bound for $f * g$ is simply the product of the lower bound for $f$ and that for $g$ (line 3), and similarly for the upper bound (line 4). We then compute the number of images.
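A sketch of the summary structure and the Figure 2 leaf rules for 8-bit values (field names follow the paper's $[l, h, lr, hr, hom, mh]$ notation; the Python framing is ours, not the paper's implementation):

    from dataclasses import dataclass, field

    @dataclass
    class Summary:
        l: int            # lower bound on the expression's value
        h: int            # upper bound on the expression's value
        lr: int           # lower bound on the number of images
        hr: int           # upper bound on the number of images
        hom: bool         # every image has equally many preimages
        mh: bool          # images are exactly { (x & v) | w }
        vars: frozenset = field(default_factory=frozenset)

    def leaf_variable(name):
        # An 8-bit variable takes all 256 values, each exactly once.
        return Summary(0, 255, 256, 256, True, True, frozenset({name}))

    def leaf_constant(c):
        # A constant has a single image; every input is a preimage.
        return Summary(c, c, 1, 1, True, True, frozenset())

Operation rules such as the addition rule of Figure 3 would then combine two Summary values into a new one, exactly as in the walkthrough of $(\text{add } (\text{bitwise-and INPUT}[1]\ 1)\ 3)$ above.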
There are two cases where the result might be just a constant (i.e. with range 1): (i) the functions have variables in common (consider e.g. $(x \,\&\, 1) * (\lnot x \,\&\, 1)$), or (ii) one of the functions could be the constant zero. If either is true, then we set $lr$ to 1 (line 8). Otherwise we know that $f * g$ has at least as many images as both $f$ and $g$. Since there is no overflow, we know that multiplying a homogeneous function by a constant yields a homogeneous function (line 11): the set of images will change, but the images still each have the same number of preimages. The masked-homogeneous property (line 12) is maintained if we multiply $f$ by a power of two, since this is equivalent to a bitwise shift. The new $v, w$ are simply multiplied by this same factor: $((x \,\&\, v) \,|\, w) * 2^c = ((x * 2^c) \,\&\, (v * 2^c)) \,|\, (w * 2^c)$.

Finally, in the case of overflows we set $l, h, lr, hr, mh$ conservatively, but we know the result is homogeneous if $f$ is homogeneous and $g$ is an odd constant. The reason is that all inputs that yield distinct images through $f$ also yield distinct images through $f * g$. Here is a short proof. Let $max = 2^{bitwidth}$. Suppose that $g$ is an odd constant $c$ and there are two distinct values $x = f(x_0)$ and $y = f(y_0)$. Without loss of generality, $x < y$ and $y = x + d$ ($d < max$). We want to show that $x * c$ and $y * c$ are distinct (modulo $max$). Since $y * c \bmod max = (x * c + d * c) \bmod max$, the two values can only be equal if $(d * c) \bmod max = 0$. Consider the factorization of the left-hand side: the factor 2 appears at most $bitwidth - 1$ times in $d$ and not at all in $c$. Therefore, $c * d$ is not a multiple of $max$.

A final set of rules is used to determine the number of accepted inputs when two summaries (or a summary and a constant) are compared. For example, in $(f = 3)$, the number of accepted inputs is between $f.ld$ and $f.hd$ if $f.l \leq 3 \leq f.h$ ($ld$ and $hd$ are defined in Figure 7), and zero otherwise. The rules for the minimal number of accepted inputs are shown in Figure 8. To get the lower bound on the number of answers, we conservatively assume that as many as possible of the values of the summary do not satisfy the equation. For example, when considering $(f > 12)$, we consider the smallest $f$ that matches the summary. If the summary indicates, for example, 8 values between 7 and 20, then the rule will consider that these values are 7–14 (2 accepted inputs). When instead computing the upper bound in this example, FSCB considers that the values are 13–20 (8 accepted inputs). None of the rules includes any loop or recursion, so bounding an equation takes $O(t)$, where $t$ is the number of terms in the equation.
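The homogeneity claim for the multiplication overflow case above can be checked exhaustively for 8 bits. This is purely an illustrative check of ours, not part of FSCB:

    # Multiplication by an odd constant modulo 256 is a bijection,
    # so it preserves homogeneity: distinct values stay distinct.
    for c in range(1, 256, 2):              # every odd constant
        images = {(x * c) % 256 for x in range(256)}
        assert len(images) == 256, c
    print("every odd constant induces a permutation mod 256")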
2.2 Combining Bounds

The second step is to combine per-equation bounds. Given an equation $a$, the procedure from the previous section determines the lower and upper bounds on the number of accepted inputs, denoted $a.la$ and $a.ha$. We use $a.vars$ to denote the set of variables in $a$. Given two such bounds, we can compute the bounds on a system that has the two corresponding equations. For example, if equation $a(x) = c$ admits 128 solutions and equation $b(y) = d$ admits the same number, we can deduce that $(a(x) = c \land b(y) = d)$ admits $128^2$ pairs of inputs. If the two equations $a$ and $b$ refer to the same variables, then the upper bound for the number of accepted inputs is the minimum of $a.ha$ and $b.ha$. Figure 9 shows how we merge per-equation lower bounds into bounds that apply to both equations together. We merge these bounds pairwise until we have a single (lower and upper) bound for the entire system of equations. Done that way, merging all the per-equation bounds has time complexity $O(n)$. However, we can get a higher-quality result by first merging the equations that refer to similar variables: consider a graph where each node represents an equation, with an edge between equations that have a variable in common. We compute the connected components of this graph and merge those equations first, before merging the bounds of the connected components. This approach results in better bounds at the cost of a longer runtime. To compute the bounds on the number of acceptable inputs for a single variable $x$, one can simply use the same procedure to merge all the equations that refer to that variable. The upper bound on the total number of accepted inputs is also an upper bound on the number of values of $x$ that are accepted. The lower bound is computed according to Figure 10. Computing the bounds for a variable has a running time of $O(n)$.
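A small Python rendering of this merging step (our own framing of the Figure 9 rules shown later; a bound is a (lower, upper) pair with an attached variable set, and variables are assumed 8 bits wide):

    def merge(a, b):
        (ala, aha, avars), (bla, bha, bvars) = a, b
        if avars.isdisjoint(bvars):
            # Independent equations: solution counts multiply.
            return (ala * bla, aha * bha, avars | bvars)
        ab = avars | bvars
        # Lift each lower bound to the joint input space, then apply an
        # inclusion-exclusion style lower bound (clamped at zero here).
        i = ala * 2 ** (8 * len(ab - avars))
        j = bla * 2 ** (8 * len(ab - bvars))
        return (max(0, i + j - 2 ** (8 * len(ab))), min(i, j), ab)

    # Two equations over disjoint variables, 128 solutions each:
    print(merge((128, 128, {"in0"}), (128, 128, {"in1"})))  # 16384, 16384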
3 Related Work

Counting the number of solutions for boolean conditions is known as #SAT. It has been studied for some time and has applications in artificial intelligence [5], among others. While it is always possible to transform finite-field arithmetic constraints (such as the ones we consider) into a series of boolean conditions, analyzing these constraints directly may allow faster computations. There is some research on finding the number of solutions to algebraic constraints. Pesant, for example [4], shows specific techniques for a number of special cases. He suggests, for example, that the number of solutions for $f < g$ can be computed from tables that contain all the images of $f$ and $g$ in increasing order. The related work we have found focuses on finding good bounds, rather than finding bounds quickly. We have unfortunately not found any solution-counting program that would accept inputs in the format we used, so comparing performance is difficult. To get a ballpark idea, we note that the MBound algorithm [2] (the fastest #SAT algorithm we are aware of) can analyze 20'000 clauses with 600 boolean variables in under 3 minutes (their later SampleCount algorithm [1] gives better bounds but takes longer). Our algorithm analyzed a bigger problem in less time: 6'500 32-bit clauses (equivalent to 208'000 boolean clauses) with 4'800 8-bit variables, finishing in under 20 seconds. We expect that optimizing our algorithm could further reduce the execution time. The speed difference does not mean that our algorithm is superior, however, because our different focus means that our bounds are almost certainly inferior to those that MBound would have found. We are also reporting numbers from two different problems, so they are not directly comparable. Even though we cannot run MBound on the same input directly, we could in principle translate our clauses into boolean ones and then compare the two algorithms directly. Unfortunately, we are not aware of any such format converter.

4 Conclusion

The FSCB algorithm can quickly compute lower and upper bounds on the number of solutions to a given system of equations. If variables have constant size and the number of terms per equation is constant, FSCB can run in $O(n + m)$ where $n$ is the number of equations and $m$ is the number of variables. This high speed is obtained through an algorithm that requires only two passes through the equations: one to merge the equations that refer to the same variables, and one to compute per-equation bounds and combine them. This speed comes at the cost of relatively loose bounds. The bounds can be improved by combining the per-equation bounds in a better order. We chose this slower approach for our implementation and could bound a system of over 6'500 32-bit equations in under 20 seconds. We expect FSCB to be primarily of interest when computation speed is crucial, and possibly as a heuristic to guide another algorithm. Another benefit is that it can process clauses that use a combination of algebra (e.g. addition, multiplication) and bit manipulation (e.g. bitwise-or or shift-left), and that it can easily be extended to process even more. FSCB could be further improved by storing more information in the summaries, refining the summary rules, or finding better ways of combining the per-condition bounds.

(add f g):
    if (f.h + g.h > 255) then
        l := 0
        h := 255
        lr := max(f.lr/2, g.lr/2)
    else
        l := f.l + g.l
        h := f.h + g.h
        lr := max(f.lr, g.lr)
    if (f and g have variables in common) then lr := 1
    hr := min(f.hr * g.hr, 256)
    mh := (is-permutation(f) & is-constant(g)) || (is-constant(f) & is-permutation(g))
    if (mh) then lr := hr
    hom := (f.hom & is-constant(g)) || (is-constant(f) & g.hom)
        || ((f, g have no variable in common) & is-permutation(f) & is-permutation(g))
        || mh
    return [l, h, lr, hr, hom, mh]

(subtract f g):
    if (f.l - g.h < 0) then
        l := 0
        h := 255
        lr := max(f.lr/2, g.lr/2)
    else
        l := f.l - g.h
        h := f.h - g.l
        lr := max(f.lr, g.lr)
    if (f and g have variables in common) then lr := 1
    hr := min(f.hr * g.hr, 256)
    mh := (is-permutation(f) & is-constant(g)) || (is-constant(f) & is-permutation(g))
    if (mh) then lr := hr
    hom := (f.hom & is-constant(g)) || (is-constant(f) & g.hom)
        || ((f, g have no variable in common) & is-permutation(f) & is-permutation(g))
        || mh
    return [l, h, lr, hr, hom, mh]

Figure 3: Summary rules

(multiply f g):
    if (f.h * g.h < 256) then
        l := f.l * g.l
        h := f.h * g.h
        if ((f and g have a variable in common)
            or (f.lr = 1 & f.l = 0)
            or (g.lr = 1 & g.l = 0))
        then lr := 1
        else lr := max(f.lr, g.lr)
        hr := min(f.hr * g.hr, 256)
        hom := (is-constant(g) & f.hom)
        mh := (f.mh & is-constant(g) & g.l is a power of two)
        return [l, h, lr, hr, hom, mh]
    hom := (f.hom & is-constant(g) & g is odd)
    return [0, 255, 1, 256, hom, false]

(bitwise-and f g):
    h := min(f.h, g.h)
    l := 0
    max-newrange := 2^max-number-of-bits-set(g)
    min-newrange := 2^min-number-of-bits-set(g)
    max-d := 256 / min-newrange
    if (is-constant(g)) then
        hr := min(f.hr, max-newrange)
    else
        hr := min(f.hr * g.hr, 256)
    if (f and g have no variable in common) then
        lr := f.lr / max-d
    else
        lr := 1
    mh := (f.mh & is-constant(g)) || (g.mh & is-constant(f))
    return [l, h, lr, hr, mh, mh]

Figure 4: Summary rules (2)

(bitwise-or f g):
    l := max(f.l, g.l)
    h := max-or(f.h, g.h)
    ldiv := 2^min-number-of-bits-set(g)
    hdiv := 2^max-number-of-bits-set(g)
    hnr := 256 / ldiv
    lnr := 256 / hdiv
    if (is-constant(g)) then
        hr := min(f.hr, hnr)
    else
        hr := min(f.hr * g.hr, 256)
    if (f and g have a variable in common) then
        lr := 1
    else
        lr := f.lr / hdiv
    mh := (f.mh && is-constant(g)) || (g.mh && is-constant(f))
    return [l, h, lr, hr, mh, mh]

(bitwise-xor f g):
    if (is-constant(g)) then
        l := min-xor(f, g.l)
        h := max-or(f.h, g.h)
        return [l, h, f.lr, f.hr, f.hom, f.mh]
    if (f and g have no variable in common) then
        lr := max(f.lr, g.lr)
    else
        lr := 1
    hr := min(f.hr, g.hr, 256)
    hom := (f and g have no variable in common) && (
           (g.mh && (is-permutation(f) || is-constant(f)))
        || (is-permutation(g) && (f.mh || is-constant(f))))
    return [0, 255, lr, hr, hom, false]

Figure 5: Summary rules (3)

(shift-left f g):
    if (!is-constant(g)) then return [0, 255, 1, 256, false, false]
    if (g.l >= 8) then return [0, 0, 1, 1, true, true]
    h := f.h << g.l
    if (f.h * 2^g.l > 255) then
        l := 0
        hom := false
    else
        l := f.l << g.l
        hom := f.hom
    mh := f.mh && hom
    if (g.l < 8) then
        d := 2^g.l & 255
        nr := 256 / d
        lr := f.lr / d
    else
        nr := 1
        lr := 1
    hr := min(f.hr, nr)
    return [l, h, lr, hr, hom, mh]

(shift-right f g):
    if (!is-constant(g)) then return [0, 255, 1, 256, false, false]
    if (2^g.l > f.h) then return [0, 0, 1, 1, true, true]
    h := f.h >> g.l
    l := f.l >> g.l
    hom := mh := f.mh && (2^g.l <= f.l)
    if (g.l < 8) then
        d := 2^g.l & 255
        nr := 256 / d
        lr := f.lr / d
    else
        nr := 1
        lr := 1
    hr := min(f.hr, nr)
    return [l, h, lr, hr, hom, mh]

Figure 6: Summary rules (4)

is-constant(f):
    return (f.l == f.h)

is-permutation(f):
    return (f.lr == 256 && |f.vars| == 1)

max-number-of-bits-set(f):
    if is-constant(f) then return number-of-bits-set(f.l)
    pick smallest x s.t. 2^x > f.h
    return x

min-number-of-bits-set(f):
    if is-constant(f) then return number-of-bits-set(f.l)
    if (f.l > 0) then return 1
    return 0

input-bits(f, g):
    return 8 * |f.vars union g.vars|

input-count(f, g):
    return 2^input-bits(f, g)

max-or(x, y):
    h := max(x, y)
    l := min(x, y)
    pick smallest z s.t. 2^z > l
    return h | (2^z - 1)

min-xor(f, c):
    if (c < f.l) then
        pick smallest ac s.t. 2^ac > c
        return f.l & ~(2^ac - 1)
    if (f.h < c) then
        pick smallest ah s.t. 2^ah > f.h
        return c & ~(2^ah - 1)
    return 0

f.ld:
    if (f.hom) then return input-count(f) / f.hr
    else return 1

f.hd:
    if (f.hom) then return input-count(f) / f.lr
    else return input-count(f) - f.lr + 1

Figure 7: Helper functions for the summary rules

(equals f g):
    if (f.h < g.l) || (g.h < f.l) then expression is unsatisfiable
    if (both constant and equal) then return 0
    if (f and g have a variable in common) then return input-bits(f, g)
    leq := max(f.l, g.l)
    heq := min(f.h, g.h)
    minlhit := f.lr - max(f.h - heq, leq - f.l)
    minlhit := min(minlhit, heq - leq + 1)
    minrhit := g.lr - max(g.h - heq, leq - g.l)
    minrhit := min(minrhit, heq - leq + 1)
    inter := max(1, minlhit + minrhit - (heq - leq + 1))
    ic := input-count(f, g)
    return max(1, min(inter * f.ld * g.ld, ic))

(not-equal f g):
    if (f.h < g.l) || (g.h < f.l) then return 0
    if (both constant and equal) then expression is unsatisfiable
    if (f and g have a variable in common) then return input-bits(f, g)
    leq := max(f.l, g.l)
    heq := min(f.h, g.h)
    accepted := input-count(f, g) - (heq - leq + 1) * f.hd * g.hd
    return max(1, min(accepted, input-count(f, g)))

(unsigned-less-than f g):
    if (f.h < g.l) then return 0
    if (f.l >= g.h) then expression is unsatisfiable
    wcRange := g.l - (f.h + 1 - f.lr)
    if (wcRange >= 1) then
        accepted := min(wcRange * f.ld, input-count(f))
    else if (f, g have input byte in common) then
        accepted := 1
    else
        accepted := f.ld
    return accepted * 2^(8 * |variables in g but not f|)

(unsigned-greater-than f g):
    if (f.l > g.h) then return 0
    if (f.h <= g.l) then expression is unsatisfiable
    wcRange := f.l + f.lr - 1 - g.h
    if (wcRange >= 1) then
        accepted := min(wcRange * f.ld, input-count(f))
    else if (f, g have input byte in common) then
        accepted := 1
    else
        accepted := f.ld
    return accepted * 2^(8 * |variables in g but not f|)

Figure 8: Lower bounds from summaries

merge(a, b):
    if (a.vars disjoint from b.vars) then
        return [a.la * b.la, a.ha * b.ha]
    else
        ab := a.vars union b.vars
        i := a.la * 2^(8 * |ab - a.vars|)
        j := b.la * 2^(8 * |ab - b.vars|)
        la := i + j - 2^(8 * |ab|)
        ha := min(i, j)
        return [la, ha]

Figure 9: Merging bounds

per-variable(bounds a):
    ha := min(256, a.ha)
    if (a.la == 0) then la := 0
    else la := max(1, ceil(a.la / 2^(8 * (|a.vars| - 1))))
    return [la, ha]

Figure 10: Computing per-variable bounds

References
{"Source-Url": "https://www.microsoft.com/en-us/research/wp-content/uploads/2007/12/tr-2007-164.pdf", "len_cl100k_base": 6532, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 30314, "total-output-tokens": 7733, "length": "2e12", "weborganizer": {"__label__adult": 0.0004096031188964844, "__label__art_design": 0.0003826618194580078, "__label__crime_law": 0.0007371902465820312, "__label__education_jobs": 0.0010051727294921875, "__label__entertainment": 0.00012683868408203125, "__label__fashion_beauty": 0.00022029876708984375, "__label__finance_business": 0.0007853507995605469, "__label__food_dining": 0.0005331039428710938, "__label__games": 0.0008616447448730469, "__label__hardware": 0.001834869384765625, "__label__health": 0.0010585784912109375, "__label__history": 0.0004301071166992187, "__label__home_hobbies": 0.00014603137969970703, "__label__industrial": 0.0011167526245117188, "__label__literature": 0.00038695335388183594, "__label__politics": 0.0005273818969726562, "__label__religion": 0.0007052421569824219, "__label__science_tech": 0.387939453125, "__label__social_life": 0.0001239776611328125, "__label__software": 0.012115478515625, "__label__software_dev": 0.5869140625, "__label__sports_fitness": 0.0003712177276611328, "__label__transportation": 0.0008487701416015625, "__label__travel": 0.0002589225769042969}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22980, 0.03382]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22980, 0.53771]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22980, 0.79327]], "google_gemma-3-12b-it_contains_pii": [[0, 1423, false], [1423, 3195, null], [3195, 5628, null], [5628, 8852, null], [8852, 11583, null], [11583, 13954, null], [13954, 15060, null], [15060, 16141, null], [16141, 17618, null], [17618, 18553, null], [18553, 19325, null], [19325, 20277, null], [20277, 21886, null], [21886, 22980, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1423, true], [1423, 3195, null], [3195, 5628, null], [5628, 8852, null], [8852, 11583, null], [11583, 13954, null], [13954, 15060, null], [15060, 16141, null], [16141, 17618, null], [17618, 18553, null], [18553, 19325, null], [19325, 20277, null], [20277, 21886, null], [21886, 22980, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22980, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22980, null]], "pdf_page_numbers": [[0, 1423, 1], [1423, 3195, 2], [3195, 5628, 3], [5628, 8852, 4], [8852, 11583, 5], [11583, 13954, 6], [13954, 15060, 7], [15060, 16141, 8], [16141, 17618, 9], [17618, 18553, 10], [18553, 19325, 11], [19325, 20277, 12], [20277, 21886, 13], 
[21886, 22980, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22980, 0.02326]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
d1ca9c1bfeb69ad53219d01b216ad2296cd3ad27
SOFTWARE SECURITY TECHNIQUES: RISKS AND CHALLENGES

Marius Iulian MIHAILESCU¹, Stefania Loredana NITA², Marian Dorin PIRLOAGA³
¹Department of IT&C, LUMINA – The University of South-East Europe
²Department of Integrated Systems, Institute of Computers
³Military Technical Academy

Abstract: Because of the increasing number of applications that work on-line, software security has become an important aspect of the software development process. The paper presents the main mechanisms and features that have to be considered when designing and implementing a software application, such as sensitive information, program execution, and different ways of analyzing code statically and dynamically. We explain two attack techniques (analysis and tampering) that can be mounted against a program, and we demonstrate how some vulnerable access points of a software application can be exploited. Based on the two types of attacks we discuss obfuscation techniques and perturbated functions as a new approach to obfuscation and diversity.

Keywords: software security, obfuscation, perturbated functions, client-server, attacks

Introduction

Nowadays we face a real challenge regarding the security of software applications, whether within a company or on a personal computer. When we talk about security for a software application, we have to concentrate on four general questions: (1) where will the application be installed (local network, business computer, personal computer, cloud computing environment)? (2) who will have access to the application (types of users)? (3) how will the application be accessed (authentication methods)? (4) which security techniques are used, and where and how have they been implemented in the application's source code? Behind this process everything is quite complicated, and the goal of this paper is to present a framework that should be applied when an application is developed and deployed within a business or personal environment. Many companies develop software applications without a strategy for finding and assuring the right security techniques for the applications during the development process (e.g. protecting the code against the different attacks we discuss later in this paper). One such strategy is Software Security Assurance (SSA), known as the technique included in the development phase of software applications. SSA operates at a level of security that is consistent with the potential threats that could arise from the loss, inaccuracy, alteration, unavailability, or misuse of the data and resources that an application uses, controls, and protects. Naturally, adequate security is necessary in this mixed and heterogeneous environment. As we can point out, software often contains secret, confidential or sensitive information: take, for example, medical files or credit card numbers. To protect this data, encryption and authentication algorithms exist [2]. The paper discusses software obfuscation and presents some of the most common techniques used in the software development process to protect sensitive code, and more. The remainder of the paper is structured as follows: (Section 2) Obfuscation; (Section 3) Software Protection Problems; (Section 4) Attacks on Software; (Section 5) Code Transformations; (Section 6) The Proposed Framework.
Obfuscation

Computer programs are among the most complex objects developed by humans. Even understanding a small program, such as the 10-line program displayed in Fig. 1, can be extremely difficult. The complexity of programs has become the bane (and very possibly the boon) of the software industry, and taming that complexity has become a main objective of industry and academia. Starting from this, and not all that surprisingly, both theoreticians and practitioners have been trying to "harness this complexity for good" and use it somehow to protect sensitive information and, of course, computation. This is known as software obfuscation, and the discussion in this article revolves around this concept. Any cryptographic mechanism, for instance encryption or authentication, can be considered as realizing complexity-based protection, but with software obfuscation people start aiming for something more ambitious: a method for transforming arbitrary programs into something unintelligible. According to [21], research on obfuscation is in an "embryonic stage". This statement is based on the fact that there are no practical efficiency proofs; we have only theoretical proofs, which are situated very far from practice.

Software Protection Problems

We look at software protection from an engineering perspective. These techniques do not fit into the white-box model we described in Section 1. In this section we concentrate on several solutions regarding client-server techniques, methods to thwart software analysis, Collberg's obfuscating transformations, code transformations, and a special discussion of obfuscation metrics. One of the goals is to give a state of the art of software protection techniques. As a short review of the main contributions, we can state: a survey of software protection techniques, and an evaluation of techniques judged on their ability to protect against analysis and tampering attacks, both static and dynamic.

Client-Server Solutions

One of the most recent techniques for protecting critical software is to run it at the owner's side rather than the client's side. This technique is referred to as software as a service. In this case, the critical software is not distributed to untrusted hosts, but is maintained on a well-protected server. The protection of the server is provided by network, hardware, and software security. Most of the time, the code itself is not protected by any other technique. In this setup, the services are distributed and not the software itself, as we can see in Figure 1. The source code and the executable code always remain on the server side. From an attacker's point of view, the server is a black box that can only be accessed by sending requests and receiving responses. The service is now distributed over the clients and the server. There are some advantages, such as the reduced size of the server; the only extra overhead required is maintaining the communication between the client and the server. This observation also raises a warning: this communication is the main issue.
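As a toy illustration of this software-as-a-service setup (the service name and logic are invented for the example; only Python's standard xmlrpc modules are used), the critical routine can be exposed as a remote service so that its code never leaves the server:

    # Server side: the critical function stays here; clients only see
    # a black box that answers requests.
    from xmlrpc.server import SimpleXMLRPCServer

    def license_check(key):
        # hypothetical critical logic, never shipped to clients
        return key.startswith("LIC-") and len(key) == 20

    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(license_check, "license_check")
    # server.serve_forever()  # blocking call, commented out in this sketch

    # Client side (separate process):
    # from xmlrpc.client import ServerProxy
    # ServerProxy("http://localhost:8000").license_check("LIC-0123456789ABCDEF")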
At a quick glance, this model unloads the server, but in practice the client and the server require very intensive communication, so the bandwidth becomes a bottleneck.

Fig. 4. The partial client-server model splits code into a critical and noncritical part: the critical part is run at the server side; the non-critical part is run at the client side [1]

Techniques to Thwart Software Analysis

Below, we review various techniques that can offer protection against analysis. The aim of most techniques available today is to protect against reverse engineering [12], statically or dynamically [13] [14]. Some of the techniques mentioned above transform the code while the application is offline, others during the runtime process. In both cases, the bar is raised for an attacker who wishes to perform a proper analysis and, of course, a tampering attack can be delayed as well.

Fig. 3. Obfuscation Model [1]

3.1.1. Collberg's Obfuscation Method for Transformation

Object-oriented programming is applied everywhere, since it offers various advantages for reading, modifying and extending code. Programming in modules leaves various tracks in the executable, and an attacker can exploit these marks and traces in order to reconstruct the original source code [15]. As a short history: when Java bytecode became vulnerable to decompilation [16], yielding the original source code, researchers started investigating techniques to protect the original source code [17] [18]. One interesting viewpoint is the approach proposed by Collberg [19], who defines obfuscation as a transformation process that attempts to change a program into a similar one that is very hard to reverse engineer. Research based on code obfuscation applies one or more code transformation stages which make the code more resistant to analysis and tampering. There is a single constraint, which consists in preserving functionality. In this case, our code can be distributed over various untrusted hosts without the threat of it being reverse engineered (see Figure 3). According to Collberg et al. [20], we can concentrate on four main classes of code obfuscation transformations:
- lexical transformations;
- control flow transformations;
- data flow transformations;
- preventive transformations.
For more insight into these classes of code obfuscation transformations, see [1], starting at page 30.

Attacks on Software

There are two main categories of attacks on software: static attacks and dynamic attacks. The main contributions of this section are a protection scheme that augments control flow graph flattening and is stronger against static control flow analysis, three models that map our scheme onto application scenarios, and some attacks to illustrate the strength of our scheme.

Static Attacks

Static analysis refers to analyses that do not involve execution of the actual code. Compilers rely on static analysis techniques in order to optimize code, for instance constant propagation and liveness analysis. Reverse engineering applies the same kind of analysis with a malicious intent: somebody obtains a program and wants to understand what it does, how it does it, and so on.
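A minimal taste of static inspection, using Python's built-in disassembler on a toy function of our own (real attacks target native binaries, but the principle is identical):

    # Static analysis in miniature: the disassembler recovers an
    # instruction-level view of compiled code without executing it.
    import dis

    def pin_ok(pin):
        return pin == 4237   # the "secret" comparison

    dis.dis(pin_ok)          # the listing exposes the constant 4237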
A reverse engineer typically starts analyzing an object by disassembling it, and then tries to understand it bit by bit, composing parts, finding patterns, and so on. In software, a fundamentally similar process takes place. First a binary file is disassembled. As a second step, the reverse engineer may choose to decompile the disassembled code into source code. Finally, he will evaluate the source code. Note that the reverse engineer can also simply run the code he obtained. There are two techniques that can be applied to the code: disassembling and decompilation.

**Disassembling**

When reverse engineering a binary file, a first step consists of disassembling the program into a human-readable format. The disassembly step is the inverse of the assembly phase in compilers: it translates binary code into assembly instructions for a particular CPU architecture. While this is a static technique, it is not an exact science, as practical disassemblers have to rely on assumptions [1].

**Decompilation**

Decompilation is actually an optional technique that the reverse engineer can apply. If an attacker wants to understand an entire program, he may be confronted with a huge number of lines of assembly code. A decompiler essentially searches for patterns that can be translated into source code. As high-level code is richer and more compact, it is often easier to understand [1].

**How to protect against static analysis**

While encryption is regularly presented as the way to protect software statically, it actually moves the problem, much as in cryptography the secrecy of a message is moved to the secrecy of a key. In an encrypted executable file, the original program code is encrypted, and a decryption routine is added to the original program. Hence, code encryption is a form of self-modifying code [23]. In effect, the entire program is treated as data, while the decryption routine remains code. If the latter is easy to analyze, one can "break" the decryption routine and decrypt the program. Subsequent steps, such as disassembling and decompilation, then allow reverse engineering it as if it were never protected. Furthermore, not all architectures currently support self-modifying code. Some operating systems enforce a W⊕X policy as a mechanism to make the exploitation of security vulnerabilities more difficult. This means a memory page is either writable (data) or executable (code), but not both. Encrypted code must also contend with virus scanners, due to its suspicious behaviour (malware also uses self-decrypting code) or due to false-positive signature matches, i.e. bit patterns that virus scanners check for. A workaround for this limitation is the use of a virtual machine [22]. If code is compiled just in time, the virtual machine can run it. If the virtual machine is secret, and the byte code is encrypted, one has to attack the virtual machine first.

**Dynamic Attacks**

If a competitor succeeds in extracting and reusing a proprietary algorithm, the consequences may be significant. Furthermore, secret keys, confidential data, or security-related code are often not meant to be analyzed, extracted, stolen, or corrupted.
Even in the presence of legal safeguards, such as licensing and cybersecurity laws, reverse engineering remains a significant threat to software developers and security specialists. Frequently, the software is not only reverse engineered, but also tampered with, as exemplified by the proliferation of cracks for gaming software and DRM systems. In a branch jamming attack, an attacker replaces a conditional jump by an unconditional one, forcing a specific branch to be taken even when it should not be under the intended conditions. Such attacks could have a major impact on applications such as licensing, DRM, billing, and voting.

**Code Encryption**

The objective of encryption is to hide information. Originally it was applied within the context of communication, but it has become a technique to protect a wide range of critical data, either for short-term transmission or long-term storage. More recently, commercial tools for software protection have become available. These tools have to protect against attackers who can execute the software on an open architecture and therefore, albeit indirectly, have access to all the information required for execution. This section gives an overview of runtime code decryption and encryption. One can also refer to this as a particular form of self-modifying or self-generating code. Encryption guarantees the confidentiality of data. In the context of binary code, this technique mostly protects against static analysis and tampering. For instance, encryption techniques are used by polymorphic viruses and polymorphic shellcode; in this way, they can bypass intrusion detection.

**Bulk Decryption**

We refer to the technique of decrypting the entire program at once as bulk decryption. The decryption routine is usually added to the encrypted body and set as the entry point of the program. At run time this routine decrypts the body and then transfers control to it. The decryption routine can either consult an embedded key or obtain one dynamically (e.g. from user input or from the operating system). The main advantage of such a scheme is that, as long as the program is encrypted, its internals are hidden and thus protected against static analysis. Another advantage is that the encrypted body makes it hard for an attacker to statically change bits meaningfully. Changing a single bit will result in one or more bit flips in the decrypted code (depending on the error propagation of the encryption scheme) and hence one or more modified instructions, which may lead to program crashes or other unintended behaviour due to the brittleness of binary code. However, as all code is decrypted at once, an attacker can simply wait for the decryption to happen before dumping the process image to disk.

**On-Demand Decryption**

Rather than bulk decryption, where the entire program is decrypted at once, one could increase granularity and decrypt small parts at the point in time when they are actually needed. When they are no longer needed, they can optionally be re-encrypted. This technique is for instance applied by Shiva, a binary encryptor that uses obfuscation, anti-debugging techniques, and multi-layer encryption to protect x86 binaries in the ELF format.
On-demand decryption overcomes the weakness of exposing all code at once, as it offers the possibility to decrypt only the necessary parts, rather than the whole body. The drawback is an increase in overhead due to numerous calls to the decryption and encryption routines.

**Attacks and Improvement**

Our guards, which modify code depending on other code, offer several advantages over the software guards proposed by Chang and Atallah [24] and those presented by Horne et al. [26]. A review is given below:

**Confidentiality.** First, all functions are encrypted statically, either by bulk or by on-demand encryption. An attacker analyzing code statically is forced to first derive all dynamic decryption keys and then decrypt the code. Furthermore, as long as code remains encrypted in memory, it is protected against memory dump attacks. With a good scheme it is feasible to ensure that only a minimal number of code pieces are present in memory in decrypted form. Hence, an attacker dumping memory would only be able to examine the functions on the call stack. Trading off security for performance, using the hotness heuristic, selects more functions for bulk encryption, thereby making them vulnerable to dynamic analysis.

**Tamper resistance.** Together with a good dependency scheme, our guards offer protection against attempts to modify the program code. If a function is tampered with, statically or even dynamically, the program will generate corrupted code at a later stage and therefore will eventually crash due to illegal instructions, or will yield questionable results. Furthermore, if the modification produces executable code, errors will still appear in other functions. An attacker using a debugger to step-trace through the program may fail too. For instance, the Unix debugger gdb [27] uses software breakpoints. These software breakpoints modify the loaded code in memory. If the corresponding code is either hashed (to derive a decryption key) or decrypted, this will provoke faults.

**Resistance to a hardware-assisted circumvention attack.** An attack proposed by van Oorschot et al. [28] exploits differences between data reads and instruction fetches to bypass self-checksumming code. The attack consists of duplicating every memory page: one page contains the original code, while another contains modified code. A modified bit intercepts each data read and redirects it to the page containing the original code, while the code fetched for execution is the modified one. However, later work by Giffin et al. [25] shows that self-modifying code can detect such an attack and therefore protect against it. As our work concentrates on self-encrypting code, a type of self-modifying code, the detection mechanism of Giffin et al. also applies to our technique.
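A toy sketch of on-demand decryption in the sense above, with XOR under a fixed key standing in for a real cipher and Python bytecode standing in for native code; it illustrates only the control flow, not a secure scheme:

    import types

    KEY = 0x5A                # toy key; real guards derive keys at run time

    def xor_bytes(data):
        return bytes(b ^ KEY for b in data)

    def on_demand(fn):
        # Keep fn's bytecode "encrypted"; decrypt it only for the call.
        code = fn.__code__
        encrypted = xor_bytes(code.co_code)
        def wrapper(*args, **kwargs):
            plain = code.replace(co_code=xor_bytes(encrypted))   # decrypt
            return types.FunctionType(plain, fn.__globals__)(*args, **kwargs)
        return wrapper

    @on_demand
    def critical(x):
        return x * 31 + 7

    print(critical(3))   # 100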
**Code Transformations**

Commercial obfuscation programs often only scramble identifier names and remove redundant information, such as debug information, from the code. This is quite trivial, but obfuscation offers many more possibilities. A good obfuscation consists of one or more program transformations that change a program's control and data flow in a way that makes it harder to analyze and reverse engineer. However, the main restriction on these transformations is preserving the functionality of the original program. Hence, obfuscation is a collection of numerous techniques that are useful for program transformation, confusion or randomization. Furthermore, most of these code transformations are not one-way, and it is difficult to decide where to use which transformations. Therefore, several parameters measure the quality of a transformation suitable for code obfuscation:
- The fundamental restriction remains preserving the program functionality.
- The main goal of code transformations is maximal obfuscation of the original program.
- A transformation needs as much resistance to automated attacks as possible.
- A transformation should be as stealthy as possible, for both static and dynamic analysis techniques.
- Increase in code size and execution time should be minimized.
Although these techniques do not guarantee waterproof security, a combination of several transformation techniques can lead to adequate practical protection against reverse engineering and tampering attacks. A small example of such a transformation is sketched below.
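This is a tiny control-flow transformation of our own making: an opaque predicate, i.e. an expression that always evaluates to true but is not obviously constant to a static analyzer, here exploiting the fact that $x^2 \bmod 4$ is never 2 for any integer $x$:

    def opaque_true(x):
        return (x * x) % 4 != 2             # always True for integers

    def check_key(key):
        if opaque_true(len(key)):
            return key.startswith("LIC-")   # the real logic
        return hash(key) % 7 == 3           # dead branch, never executed

The bogus branch inflates the control flow graph that an analyzer must consider, at negligible runtime cost, which matches the quality criteria listed above.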
**The Proposed Framework**

In this section we propose a framework that should be applied in two stages of the software development process: the first stage is analysis and the second one is design. If the framework is applied successfully over these stages, then the implementation stage will be much easier and the risk of creating security holes and breaches will be minimized. The framework is somewhat time consuming, but it is worth it: it lets us keep track of everything that could expose a security hole in the application. The next step is to apply this framework automatically. To achieve this, we will implement it as an add-on or plugin for NetBeans IDE and Microsoft Visual Studio 2015. It will be available for download from NuGet at the end of 2016.

CONCLUSIONS

In the end, we would like to mention that our research for this paper was a real challenge, especially when we tried to cover the most important aspects of software security techniques and to point out the main risks and advantages. The main goal of the paper was achieved, but there are many other things that need to be mentioned, and a single article is not enough. We have proposed a framework which should be followed when a new piece of software is designed and ready for the implementation phase.

BIBLIOGRAPHY

[1].
Jan Cappaert, Code Obfuscation Techniques for Software Protection, dissertation presented in partial fulfillment of the requirements for the degree of Doctor in Engineering, Arenberg Doctoral School of Science, Engineering & Technology, Faculty of Engineering, Department of Electrical Engineering (ESAT), Katholieke Universiteit Leuven, 2012.
{"Source-Url": "https://www.anmb.ro/buletinstiintific/buletine/2016_Issue1/FCS/458-465.pdf", "len_cl100k_base": 5430, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 22800, "total-output-tokens": 7303, "length": "2e12", "weborganizer": {"__label__adult": 0.0003955364227294922, "__label__art_design": 0.00025725364685058594, "__label__crime_law": 0.000942230224609375, "__label__education_jobs": 0.0004100799560546875, "__label__entertainment": 4.9591064453125e-05, "__label__fashion_beauty": 0.00014269351959228516, "__label__finance_business": 0.00016200542449951172, "__label__food_dining": 0.0003287792205810547, "__label__games": 0.0007181167602539062, "__label__hardware": 0.001003265380859375, "__label__health": 0.0005469322204589844, "__label__history": 0.0001283884048461914, "__label__home_hobbies": 8.481740951538086e-05, "__label__industrial": 0.0003006458282470703, "__label__literature": 0.0001952648162841797, "__label__politics": 0.00019633769989013672, "__label__religion": 0.0003292560577392578, "__label__science_tech": 0.01332855224609375, "__label__social_life": 6.830692291259766e-05, "__label__software": 0.0082550048828125, "__label__software_dev": 0.9716796875, "__label__sports_fitness": 0.00025272369384765625, "__label__transportation": 0.0003266334533691406, "__label__travel": 0.00013649463653564453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32379, 0.01611]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32379, 0.35759]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32379, 0.91501]], "google_gemma-3-12b-it_contains_pii": [[0, 4438, false], [4438, 7212, null], [7212, 10835, null], [10835, 15919, null], [15919, 21131, null], [21131, 27227, null], [27227, 31934, null], [31934, 32379, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4438, true], [4438, 7212, null], [7212, 10835, null], [10835, 15919, null], [15919, 21131, null], [21131, 27227, null], [27227, 31934, null], [31934, 32379, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32379, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32379, null]], "pdf_page_numbers": [[0, 4438, 1], [4438, 7212, 2], [7212, 10835, 3], [10835, 15919, 4], [15919, 21131, 5], [21131, 27227, 6], [27227, 31934, 7], [31934, 32379, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32379, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
006e87f310b4e0f991a514b83c812a0aae48abf9
**Topics:** Mapping reductions; more undecidable languages; undecidability by Rice's Theorem; $\mathcal{RE}$-completeness; reductions by computation histories. (Sipser's book, Chapter 5, Sections 5.1 and 5.3.)

**Mapping Reductions**

**Definition:** Let $A$ and $B$ be two languages. We say that there is a *mapping reduction* from $A$ to $B$, and denote $$A \leq_m B,$$ if there is a *computable function* $$f : \Sigma^* \rightarrow \Sigma^*$$ such that, for every $w$, $$w \in A \iff f(w) \in B.$$ The function $f$ is called the *reduction* from $A$ to $B$. A mapping reduction converts questions about membership in $A$ into questions about membership in $B$. Notice that $A \leq_m B$ implies $\overline{A} \leq_m \overline{B}$.

**Mapping Reductions: Reminders**

Theorem 1: If $A \leq_m B$ and $B$ is decidable, then $A$ is decidable.

Theorem 2: If $A \leq_m B$ and $B$ is recursively enumerable, then $A$ is recursively enumerable.

Corollary 1: If $A \leq_m B$ and $A$ is undecidable, then $B$ is undecidable.

Corollary 2: If $A \leq_m B$ and $A$ is not in $\mathcal{RE}$, then $B$ is not in $\mathcal{RE}$.

Corollary 3: If $A \leq_m B$ and $A$ is not in $\text{co}\mathcal{RE}$, then $B$ is not in $\text{co}\mathcal{RE}$.

**Mapping Reductions in General**

Mapping reductions are applicable to wide areas of mathematics, not only computing. For example, consider the following two sets:

1. The set of equations of the form $Ax^2 + By + C = 0$, with integer coefficients, that have a root consisting of positive integers.
2. The set of knots that can be untied (without tearing or breaking the rope) leaving at most $\ell$ loops.

Even though these two sets are of a very different nature, they are reducible to each other (by mapping reductions). In this course we concentrate mainly on computing-related problems, but reductions are relevant in much wider scopes.

**Bucket of Undecidable Problems**

The same techniques prove undecidability of:

- Does a TM accept a **decidable** language?
- Does a TM accept a **regular** language?
- Does a TM accept a **context-free** language?
- Does a TM accept a **finite** language?
- Does a TM accept a language that contains **all prime numbers**?
- Does a TM accept a language that contains **all quartets of positive integers** $(x, y, z, n)$ with $n > 2$ satisfying $x^n + y^n = z^n$?

**Rice's Theorem**

By now, some of you may have become cynical and embittered.

- Like, been there, done that, bought the T-shirt.
- It looks like any non-trivial property of TMs is undecidable.

That is correct.

**Rice's Theorem – Restatement**

Theorem: If $C$ is a proper, non-empty subset of the set of enumerable languages, then it is undecidable whether, for a given encoding of a TM, $\langle M \rangle$, the language $L(M)$ is in $C$. (See problem 5.22 in Sipser's book.)

**Rice's Theorem**

**Theorem:** Let $C$ be a proper, non-empty subset of the set of enumerable languages. Denote by $L_C$ the set of all TM encodings $\langle M \rangle$ such that $L(M)$ is in $C$. Then $L_C$ is undecidable.

Proof by reduction from $A_{TM}$. Given $M$ and $w$, we will construct $M_0$ such that:

- If $M$ accepts $w$, then $\langle M_0 \rangle \in L_C$.
- If $M$ does not accept $w$, then $\langle M_0 \rangle \notin L_C$.

Without loss of generality, $\emptyset \not\in C$ (otherwise, look at $\overline{C}$, which is also proper and non-empty). Since $C$ is not empty, there exists some language $L \in C$. Let $M_L$ be a TM accepting this language (recall that $C$ contains only recursively enumerable languages). Continued below.
**Rice's Theorem (Proof, Continued)**

Given $\langle M, w \rangle$, construct $M_0$ such that:

- If $M$ accepts $w$, then $L(M_0) = L \in C$.
- If $M$ does not accept $w$, then $L(M_0) = \emptyset \notin C$.

$M_0$ on input $y$:

1. Run $M$ on $w$.
2. If $M$ accepts $w$, run $M_L$ on $y$:
   a. if $M_L$ accepts, accept; and
   b. if $M_L$ rejects, reject.

Claim: The transformation $\langle M, w \rangle \rightarrow \langle M_0 \rangle$ is a mapping reduction from $A_{TM}$ to $L_C$.

Proof: The machine $M_0$ is simply a concatenation of two known TMs: the universal machine and $M_L$. Therefore the transformation $\langle M, w \rangle \rightarrow \langle M_0 \rangle$ is a computable function, defined for all strings in $\Sigma^*$. (But what do we actually do with strings not of the form $\langle M, w \rangle$?)

**Rice's Proof (Concluded)**

- If $\langle M, w \rangle \in A_{TM}$, then $M_0$ gets to step 2 and runs $M_L$ on $y$. In this case, $L(M_0) = L$, so $L(M_0) \in C$.
- On the other hand, if $\langle M, w \rangle \not\in A_{TM}$, then $M_0$ never gets to step 2. In this case, $L(M_0) = \emptyset$, so $L(M_0) \not\in C$.
- This establishes the fact that $\langle M, w \rangle \in A_{TM}$ iff $\langle M_0 \rangle \in L_C$.

So we have $A_{TM} \leq_m L_C$, thus $L_C$ is undecidable. ♣

**Rice's Theorem (Reflections)**

Rice's theorem can be used to show undecidability of properties like:

- "does $L(M)$ contain infinitely many primes"
- "does $L(M)$ contain an arithmetic progression of length 15"
- "is $L(M)$ empty"

Decidability of properties related to the encoding itself cannot be inferred from Rice. For example, "does $\langle M \rangle$ have an even number of states" is decidable. Properties like "does $M$ reach state $q_6$ on the empty input string" are undecidable, but this does not follow from Rice's theorem. Rice also does not say anything about membership in $\mathcal{RE}$ of languages like "is $L(M)$ finite".

**Reductions via Controlled Executions (1)**

Consider the language $L_{\text{infinite}} = \{\langle M \rangle \mid L(M) \text{ is infinite}\}$. By Rice's Theorem, this language is not in $\mathcal{R}$. We want to show that $L_{\text{infinite}} \notin \mathcal{RE}$.

**Idea:** Reduction from $\overline{H_{\text{TM}}}$. So we are after a reduction $f : \langle M, w \rangle \mapsto \langle M_0 \rangle$ such that:

- If $M$ halts on $w$, then $L(M_0)$ is finite.
- If $M$ does not halt on $w$, then $L(M_0)$ is infinite.

This looks a bit tricky…

**Reductions via Controlled Executions (2)**

We are after a reduction $f : \langle M, w \rangle \mapsto \langle M_0 \rangle$ such that:

- If $M$ halts on $w$, then $L(M_0)$ is finite.
- If $M$ does not halt on $w$, then $L(M_0)$ is infinite.

Given $\langle M, w \rangle$, construct the TM $M_0$ as follows. $M_0$ on input $y$:

- Runs the universal machine $U$ on $\langle M, w \rangle$ for $|y|$ steps.
- If $U$ did not halt in that many steps, $M_0$ accepts $y$.
- If $U$ did halt in that many steps, $M_0$ rejects $y$.

$f(\langle M, w \rangle) = \langle M_0 \rangle$. (Remark: $M_0$ halts on all inputs.)

**Reductions via Controlled Executions (3)**

$f(\langle M, w \rangle) = \langle M_0 \rangle$. Let us examine $L(M_0)$.
- If $M$ does not halt on $w$, then $M_0$ accepts all $y$, so $L(M_0) = \Sigma^*$, and thus $\langle M_0 \rangle \in L_{\text{infinite}}$.
- If $M$ does halt on $w$, after $k$ simulation steps, then $M_0$ accepts only those $y$ of length smaller than $k$, so $L(M_0)$ is finite, and thus $\langle M_0 \rangle \notin L_{\text{infinite}}$.

We have shown that $\overline{H_{\text{TM}}} \leq_m L_{\text{infinite}}$. Since $\overline{H_{\text{TM}}} \notin \mathcal{RE}$, this implies $L_{\text{infinite}} \notin \mathcal{RE}$. ♠

**$\mathcal{RE}$-Completeness**

**Question:** Is there a language $L$ that is hardest in the class $\mathcal{RE}$ of enumerable languages (languages accepted by some TM)?

**Answer:** Well, you have to define what you mean by "hardest language".

**Definition:** A language $L_0 \subseteq \Sigma^*$ is called $\mathcal{RE}$-complete if the following holds:

- $L_0 \in \mathcal{RE}$ (membership).
- For every $L \in \mathcal{RE}$, $L \leq_m L_0$ (hardness).

The second item means that for every enumerable $L$ there is a mapping reduction $f_L$ from $L$ to $L_0$. The reduction $f_L$ depends on $L$ and will typically differ from one language to another.

**Question:** Having defined a reasonable notion, we should make sure it is not vacuous, namely verify that there is at least one language satisfying it.

**Theorem:** The language $A_{TM}$ is $\mathcal{RE}$-complete.

**Proof:**

- The universal machine $U$ accepts the language $A_{TM}$, so $A_{TM} \in \mathcal{RE}$.
- Suppose $L$ is in $\mathcal{RE}$, and let $M$ be a TM accepting it. Then $f_L(w) = \langle M, w \rangle$ is a mapping reduction from $L$ to $A_{TM}$ (why?).

**Reduction via Computation Histories**

An important technique for proving undecidability.

- Useful for testing the existence of some objects.
- For example, it is the basis for the proof of undecidability of Hilbert's tenth problem, where the "object" is an integral root of a polynomial.
- Other examples: Does a linear bounded TM accept the empty language? Does a context-free grammar generate $\Sigma^*$?

**Reminder: Configurations**

The configuration $1011\,q_7\,0111$ means:

- the state is $q_7$
- the LHS of the tape is $1011$
- the RHS of the tape is $0111$
- the head is on the first symbol of the RHS (here $0$)

**Configurations**

- The configuration $ua\,q_i\,bv$ yields $u\,q_j\,acv$ if $\delta(q_i, b) = (q_j, c, L)$.
- Of course, $ua\,q_i\,bv$ yields $uac\,q_j\,v$ if $\delta(q_i, b) = (q_j, c, R)$.
- Special case (left end of tape): $q_i\,bv$ yields $q_j\,cv$ if $\delta(q_i, b) = (q_j, c, L)$.

**Computation Histories**

Let $M$ be a TM and $w$ an input string.

- An **accepting** computation history for $M$ on $w$ is a sequence $C_1, C_2, \ldots, C_\ell$, where
  - $C_1$ is the starting configuration of $M$ on $w$,
  - $C_\ell$ is an accepting configuration of $M$,
  - each $C_i$ yields $C_{i+1}$ according to the transition function.
- A **rejecting** computation history for $M$ on $w$ is the same, except that $C_\ell$ is a rejecting configuration of $M$.

Remarks:

- Computation histories are finite sequences.
- If $M$ does not halt on $w$, no accepting or rejecting computation history exists.
- The notion is useful for both deterministic (one history) and non-deterministic (many histories) TMs.

**A CFG Question**

[Figure: a parse tree deriving the sentence "a boy sees": SENTENCE splits into NOUN-PHRASE and VERB ("sees"); NOUN-PHRASE splits into ARTICLE ("a") and NOUN ("boy").]
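Before turning to CFG emptiness, note that the one-step "yields" relation defined above is mechanical enough to implement directly. The following is a minimal sketch (an illustration, not part of the slides), assuming a configuration $u\,q\,v$ is encoded as a Python triple and that the blank symbol and left-end behavior follow the conventions above:

```python
BLANK = "_"  # blank tape symbol; an encoding choice, not fixed by the slides

def step(config, delta):
    """Return the configuration that `config` yields, or None if the TM halts.

    A configuration u q v is the triple (u, q, v): tape contents to the left
    of the head, current state, and tape contents from the scanned cell on.
    `delta` maps (state, symbol) -> (new_state, written_symbol, "L" or "R").
    """
    u, q, v = config
    b = v[0] if v else BLANK            # scanned symbol; blanks past the end
    if (q, b) not in delta:
        return None                     # no applicable move: halting configuration
    p, c, move = delta[(q, b)]
    rest = v[1:]
    if move == "R":                     # u a q b v  yields  u a c p v
        return (u + c, p, rest)
    if not u:                           # left end of tape: q b v  yields  p c v
        return ("", p, c + rest)
    return (u[:-1], p, u[-1] + c + rest)  # u a q b v  yields  u p a c v

# Example: delta(q7, 0) = (q3, 1, L) turns 1011 q7 0111 into 101 q3 11111.
assert step(("1011", "q7", "0111"), {("q7", "0"): ("q3", "1", "L")}) == ("101", "q3", "11111")
```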
**Emptiness of CFGs**

We have already seen an algorithm to check whether a context-free grammar generates the empty language. On input $\langle G \rangle$, where $G$ is a CFG:

1. Mark all terminal symbols in $G$.
2. Repeat until no new variables become marked: mark any variable $A$ where $A \rightarrow U_1 U_2 \ldots U_k$ is a production and each $U_i$ has already been marked.
3. If the start symbol is marked, accept; otherwise, reject.

**Using Computation Histories for CFGs**

So the language $\text{EMPTY}_{\text{CFG}}$ is decidable.

**Question:** What about the complementary question: does a CFG generate all strings?

$$\text{All}_{\text{CFG}} = \{ \langle G \rangle \mid G \text{ is a CFG and } L(G) = \Sigma^* \}$$

**Does a CFG Generate All Strings?**

Theorem: $\text{All}_{\text{CFG}}$ is undecidable.

Proof by reduction from $A_{\text{TM}}$ to $\overline{\text{All}_{\text{CFG}}}$:

- Given $\langle M, w \rangle$, construct an encoding of a CFG, $\langle G \rangle$.
- $G$ generates all strings that are *not* accepting computation histories for $M$ on $w$:
  - if $M$ does not accept $w$, $G$ generates all strings;
  - if $M$ does accept $w$, $G$ does not generate the accepting computation history.

An accepting computation history appears as $\#C_1\#C_2\#\ldots\#C_\ell\#$, where

- $C_1$ is the starting configuration of $M$ on $w$,
- $C_\ell$ is an accepting configuration of $M$,
- each $C_i$ yields $C_{i+1}$ by the transition function of $M$.

A string is not an accepting computation history if it fails one or more of these conditions.

Instead of the CFG $G$, we construct a PDA $D$ (recall the equivalence). $D$ non-deterministically "guesses" which condition is violated, and then verifies the guessed violation:

- Is there some $C_i$ that is not a configuration of $M$ (number of $q$ symbols $\neq 1$)?
- Is $C_1$ not the starting configuration of $M$ on $w$?
- Is $C_\ell$ not an accepting configuration of $M$?
- Does some $C_i$ not yield $C_{i+1}$ by the transition function of $M$?

The last condition is the tricky one to check. Idea:

- Scan the input. Non-deterministically decide that a "violating configuration" $C_i$ was reached.
- Push $C_i$ onto the stack until $\#$.
- Scan $C_{i+1}$ and pop matching symbols of $C_i$.
- Check that $C_i$ and $C_{i+1}$ match everywhere, except around the head position, where the difference is dictated by the transition function of $M$.

Problem: when $D$ pops $C_i$ from the stack, $C_i$ comes out in reverse order. Ignoring the local changes around the head position, what we are trying to identify is the language $\{x \# y \mid x \neq y\}$. While this can be done in principle by a non-deterministic PDA (see problem 2.26 in Sipser), there is a simpler way.

So far, we used a "straight" notion of accepting computation histories:

$$\#C_1\#C_2\#C_3\#C_4\#\cdots\#C_\ell\#$$

But in this modern age, why not employ an alternative notion of accepting computation history, one that will make the life of our PDA much easier?

**Solution:** Write the accepting computation history so that every other configuration is in reverse order:

$$\#C_1\#C_2^{\mathcal{R}}\#C_3\#C_4^{\mathcal{R}}\#\cdots\#C_\ell\#$$

This way, the copy of $C_i$ popped from the stack (in reverse order) lines up directly with the following configuration. This takes care of the difficulty in the proof.
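Before wrapping up, a concrete aside: the marking procedure for $\text{EMPTY}_{\text{CFG}}$ at the start of this discussion is only a few lines of code. Here is a minimal sketch (an illustration, not from the slides), assuming the grammar is given as a dict mapping each variable to its list of right-hand sides, with any symbol lacking productions treated as a terminal:

```python
def cfg_generates_something(start, productions):
    """Decide EMPTY_CFG: return True iff the grammar derives some terminal string.

    `productions` maps each variable to a list of right-hand sides; a right-hand
    side is a list of symbols, and any symbol not in `productions` is a terminal.
    """
    marked = set()          # variables known to derive a terminal string
    changed = True
    while changed:          # repeat until no new variables become marked
        changed = False
        for var, rhss in productions.items():
            if var in marked:
                continue
            for rhs in rhss:
                # every symbol is a terminal or an already-marked variable
                if all(sym not in productions or sym in marked for sym in rhs):
                    marked.add(var)
                    changed = True
                    break
    return start in marked

# Example: S -> A B | b ;  A -> A A ;  B -> b.  A derives nothing, but S -> b does.
grammar = {"S": [["A", "B"], ["b"]], "A": [["A", "A"]], "B": [["b"]]}
assert cfg_generates_something("S", grammar)
```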
**Wrapping Things Up**

Given $\langle M, w \rangle$, we constructed (algorithmically) a PDA, $D$, which rejects a string $z$ if and only if $z$ equals an accepting computation history of $M$ on $w$, written in the "alternating format". Therefore $L(D)$ is either $\Sigma^*$ or $\Sigma^* \setminus \{z\}$. This $D$ has an equivalent (and efficiently described) CFG, $G$, namely $L(D) = L(G)$. So $L(G)$ is either $\Sigma^*$ or $\Sigma^* \setminus \{z\}$.

The mapping $\langle M, w \rangle \mapsto \langle G \rangle$ is thus a reduction from $A_{TM}$ to $\overline{\text{All}_{\text{CFG}}}$. Since $A_{TM} \notin \mathcal{R}$, we get $\overline{\text{All}_{\text{CFG}}} \notin \mathcal{R}$. As $\mathcal{R}$ is closed under complement, we conclude that $\text{All}_{\text{CFG}} \notin \mathcal{R}$.

**Linear Bounded Automata**

- A restricted form of TM.
- Cannot move off the portion of the tape containing the input.
- Rejects attempts to move the head beyond the input.
- The size of the input determines the size of the memory.

Question: Why *linear*? Answer: Using a tape alphabet larger than the input alphabet increases memory only by a constant factor.

Believe it or not, LBAs are quite powerful. The deciders for

- $A_{\text{DFA}}$ (does a DFA accept?),
- $A_{\text{CFG}}$ (is a string in a CFG?),
- $\text{EMPTY}_{\text{DFA}}$ (is a DFA trivial?),
- $\text{EMPTY}_{\text{CFG}}$ (is a CFG empty?)

are all LBAs. Every CFL can be decided by an LBA. It is not easy to find a natural, decidable language that cannot be decided by an LBA.

**A Language**

Define $$A_{LBA} = \{ \langle M, w \rangle \mid M \text{ is an LBA that accepts } w \}.$$

Question: Is $A_{LBA}$ decidable?

Lemma: Let $M$ be an LBA with $q$ states and $g$ symbols in its tape alphabet. On an input of size $n$, $M$ has exactly $qng^n$ distinct configurations, because a configuration involves:

- the control state ($q$ possibilities),
- the head position ($n$ possibilities),
- the tape contents ($g^n$ possibilities).

**Theorem:** $A_{LBA}$ is decidable.

Idea:

- Simulate $M$ on $w$.
- But what do we do if $M$ loops? We must detect looping and reject.
- $M$ loops if and only if it repeats a configuration. (Why? And is this also true of "regular" TMs?)
- By the pigeonhole principle, if our LBA $M$ runs long enough, it must repeat a configuration!

Proof. On input $\langle M, w \rangle$, where $M$ is an LBA and $w \in \Sigma^*$:

1. Simulate $M$ on $w$, while maintaining a counter, incremented by 1 per simulated step of $M$.
2. Keep simulating $M$ for $qng^n$ steps, or until it halts (whichever comes first).
3. If $M$ has halted, accept $w$ if it was accepted by $M$, and reject $w$ if it was rejected by $M$.
4. Reject $w$ if the counter limit is reached ($M$ has not halted).

**More LBAs**

Here is a related problem:

$$\text{EMPTY}_{\text{LBA}} = \{ \langle M \rangle \mid M \text{ is an LBA and } L(M) = \emptyset \}$$

Question: Is $\text{EMPTY}_{\text{LBA}}$ decidable? Surprisingly, LBAs do have undecidable problems too!

**Theorem:** $\text{EMPTY}_{\text{LBA}}$ is undecidable.

Proof by reduction from $A_{\text{TM}}$, using computation histories. If $\text{EMPTY}_{\text{LBA}}$ were decidable, then $A_{\text{TM}}$ would also be decidable.

Question: Suppose that $\text{EMPTY}_{\text{LBA}}$ is decidable.
How can we use this supposition to decide $A_{\text{TM}}$? Let $R$ be a decider for the language $\text{EMPTY}_{\text{LBA}}$.

Given $M$ and $w$, we will construct an LBA, $B$:

- $L(B)$ will contain exactly all accepting computation histories for $M$ on $w$;
- $M$ accepts $w$ iff $L(B) \neq \emptyset$;
- we will use $R$ to decide whether $L(B) = \emptyset$;
- then we can decide whether $M$ accepts $w$.

It is not enough to show that $B$ exists. We must show that a TM can construct $\langle B \rangle$ from $\langle M, w \rangle$. Assume an accepting computation history is presented as a string $$\#C_1\#C_2\#C_3\#\cdots\#C_\ell\#$$ with the descriptions of configurations separated by $\#$ delimiters.

**The LBA**

The LBA $B$ works as follows. On input $x$, the LBA $B$:

- breaks $x$ according to the $\#$ delimiters,
- identifies the strings $C_1, C_2, \ldots, C_\ell$,
- then checks that the following conditions hold:
  - each $C_i$ is a configuration of $M$,
  - $C_1$ is the start configuration of $M$ on $w$,
  - every $C_{i+1}$ follows from $C_i$ according to $M$,
  - $C_\ell$ is an accepting configuration.

Checking the conditions:

- Checking that each $C_i$ is a configuration of $M$ is easy: all it means is that $C_i$ includes exactly one $q$ symbol.
- Checking that $C_1$ is the start configuration on $w$, namely $q_0 w_1 w_2 \cdots w_n$, is easy, because the string $w$ was "wired into" $B$.
- Checking that $C_\ell$ is an accepting configuration is easy, because $C_\ell$ must include the accepting state $q_a$.
- The only hard part is checking that each $C_{i+1}$ follows from $C_i$ by $M$'s transition function.

**The Hard Part**

Checking that, for all $i$, $C_{i+1}$ follows from $C_i$ by $M$'s transition function:

- $C_i$ and $C_{i+1}$ are almost identical, except for the positions under the head and adjacent to it.
- These positions should be updated according to the transition function.
- Do this verification by zig-zagging between corresponding positions of $C_i$ and $C_{i+1}$, using "dots" on the tape to mark the current position.
- All this can be done within the space allocated by the input $x$.

The LBA $B$ accepts the string $x$ if and only if $x$ equals an accepting computation history of $M$ on $w$. Therefore $L(B)$ is either empty or a singleton $\{x\}$. We construct $B$ in order to feed it to the claimed decider $R$ of $\text{EMPTY}_{\text{LBA}}$ (which we assume to exist). Once this decider returns its answer, we invert the answer to decide whether $M$ accepts $w$. (Important!)

**The Proof**

Suppose TM $R$ decides $\text{EMPTY}_{\text{LBA}}$. Define a TM $S$ that decides $A_{TM}$. On input $\langle M, w \rangle$:

1. Construct the LBA $B$ from $M$ and $w$ as described above.
2. Run $R$ on $\langle B \rangle$.
3. If $R$ rejects, accept; if $R$ accepts, reject.

If $R$ accepts $\langle B \rangle$:

- $M$ has no accepting computation history on $w$,
- so $M$ does not accept $w$,
- so $S$ rejects $\langle M, w \rangle$.

If $R$ rejects $\langle B \rangle$:

- the language of $B$ is non-empty,
- the only string $B$ can accept is an accepting computation history of $M$ on $w$,
- thus $M$ accepts $w$,
- so $S$ accepts $\langle M, w \rangle$.

To conclude, $S$ decides $A_{TM}$, a contradiction. ♣

(Slides modified by Benny Chor, based on original slides by Maurice Herlihy, Brown University.)
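To make the $qng^n$ counting argument behind the $A_{LBA}$ decider concrete, here is a minimal sketch under a toy encoding of deterministic LBAs (our own illustration; the slides do not fix an encoding). As a simplification of the "reject attempts to move beyond the input" convention, moves that would take the head off the input are simply ignored here:

```python
def decides_A_LBA(states, tape_alphabet, delta, start, accept, reject, w):
    """Decide whether a deterministic LBA accepts the (non-empty) input w.

    `delta` maps (state, symbol) -> (new_state, written_symbol, "L" or "R")
    and is assumed total on all non-halting states.
    """
    q, g, n = len(states), len(tape_alphabet), len(w)
    bound = q * n * g ** n          # total number of distinct configurations
    tape, head, state = list(w), 0, start
    for _ in range(bound):          # running longer must repeat a configuration
        if state == accept:
            return True
        if state == reject:
            return False
        state, tape[head], move = delta[(state, tape[head])]
        if move == "R" and head < n - 1:
            head += 1
        elif move == "L" and head > 0:
            head -= 1               # moves off the input are ignored (simplification)
    return False                    # bound exceeded: M loops on w, so reject

# Toy LBA over {0,1} with states {s, a, r}: accept iff the first symbol is 1.
delta = {("s", "1"): ("a", "1", "R"), ("s", "0"): ("r", "0", "R")}
assert decides_A_LBA({"s", "a", "r"}, {"0", "1"}, delta, "s", "a", "r", "10")
```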
{"Source-Url": "http://www.cs.tau.ac.il/~bchor/CM07/Compute9.pdf", "len_cl100k_base": 6563, "olmocr-version": "0.1.50", "pdf-total-pages": 54, "total-fallback-pages": 0, "total-input-tokens": 96121, "total-output-tokens": 8809, "length": "2e12", "weborganizer": {"__label__adult": 0.0005335807800292969, "__label__art_design": 0.0004892349243164062, "__label__crime_law": 0.0005741119384765625, "__label__education_jobs": 0.0027675628662109375, "__label__entertainment": 0.00017011165618896484, "__label__fashion_beauty": 0.00027871131896972656, "__label__finance_business": 0.00028014183044433594, "__label__food_dining": 0.0007891654968261719, "__label__games": 0.0013427734375, "__label__hardware": 0.001667022705078125, "__label__health": 0.0015954971313476562, "__label__history": 0.0005788803100585938, "__label__home_hobbies": 0.00025773048400878906, "__label__industrial": 0.00099945068359375, "__label__literature": 0.0011692047119140625, "__label__politics": 0.0004758834838867187, "__label__religion": 0.00099945068359375, "__label__science_tech": 0.31884765625, "__label__social_life": 0.0002114772796630859, "__label__software": 0.00714874267578125, "__label__software_dev": 0.65673828125, "__label__sports_fitness": 0.0006265640258789062, "__label__transportation": 0.0011739730834960938, "__label__travel": 0.000270843505859375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21003, 0.01043]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21003, 0.64609]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21003, 0.78791]], "google_gemma-3-12b-it_contains_pii": [[0, 188, false], [188, 541, null], [541, 696, null], [696, 896, null], [896, 1190, null], [1190, 1833, null], [1833, 2277, null], [2277, 2484, null], [2484, 2732, null], [2732, 3174, null], [3174, 3494, null], [3494, 3962, null], [3962, 4461, null], [4461, 5040, null], [5040, 5771, null], [5771, 6248, null], [6248, 6867, null], [6867, 7566, null], [7566, 8000, null], [8000, 8424, null], [8424, 8906, null], [8906, 9287, null], [9287, 9438, null], [9438, 9705, null], [9705, 10172, null], [10172, 10402, null], [10402, 10570, null], [10570, 10966, null], [10966, 11232, null], [11232, 11708, null], [11708, 12085, null], [12085, 12622, null], [12622, 13032, null], [13032, 13411, null], [13411, 14076, null], [14076, 14845, null], [14845, 15035, null], [15035, 15183, null], [15183, 15530, null], [15530, 15670, null], [15670, 15964, null], [15964, 16304, null], [16304, 16785, null], [16785, 17044, null], [17044, 17264, null], [17264, 17781, null], [17781, 18069, null], [18069, 18438, null], [18438, 18930, null], [18930, 19422, null], [19422, 19925, null], [19925, 20321, null], [20321, 20748, null], [20748, 21003, null]], "google_gemma-3-12b-it_is_public_document": [[0, 188, true], [188, 541, null], [541, 696, null], [696, 896, null], [896, 1190, null], [1190, 1833, null], [1833, 2277, null], [2277, 2484, null], [2484, 2732, null], [2732, 3174, null], [3174, 3494, null], [3494, 3962, null], [3962, 4461, null], [4461, 5040, null], [5040, 5771, null], [5771, 6248, null], [6248, 6867, null], [6867, 7566, null], [7566, 8000, null], [8000, 8424, null], [8424, 8906, null], [8906, 9287, null], [9287, 9438, null], [9438, 9705, null], [9705, 10172, null], [10172, 10402, null], [10402, 10570, null], [10570, 10966, null], [10966, 11232, null], [11232, 11708, null], [11708, 12085, null], [12085, 12622, null], [12622, 13032, null], 
[13032, 13411, null], [13411, 14076, null], [14076, 14845, null], [14845, 15035, null], [15035, 15183, null], [15183, 15530, null], [15530, 15670, null], [15670, 15964, null], [15964, 16304, null], [16304, 16785, null], [16785, 17044, null], [17044, 17264, null], [17264, 17781, null], [17781, 18069, null], [18069, 18438, null], [18438, 18930, null], [18930, 19422, null], [19422, 19925, null], [19925, 20321, null], [20321, 20748, null], [20748, 21003, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21003, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21003, null]], "pdf_page_numbers": [[0, 188, 1], [188, 541, 2], [541, 696, 3], [696, 896, 4], [896, 1190, 5], [1190, 1833, 6], [1833, 2277, 7], [2277, 2484, 8], [2484, 2732, 9], [2732, 3174, 10], [3174, 3494, 11], [3494, 3962, 12], [3962, 4461, 13], [4461, 5040, 14], [5040, 5771, 15], [5771, 6248, 16], [6248, 6867, 17], [6867, 7566, 18], [7566, 8000, 19], [8000, 8424, 20], [8424, 8906, 21], [8906, 9287, 22], [9287, 9438, 23], [9438, 9705, 24], [9705, 10172, 25], [10172, 10402, 26], [10402, 10570, 27], [10570, 10966, 28], [10966, 11232, 29], [11232, 11708, 30], [11708, 12085, 31], [12085, 12622, 32], [12622, 13032, 33], [13032, 13411, 34], [13411, 14076, 35], [14076, 14845, 36], [14845, 15035, 37], [15035, 15183, 38], [15183, 15530, 39], [15530, 15670, 40], [15670, 15964, 41], [15964, 16304, 42], [16304, 16785, 43], [16785, 17044, 44], [17044, 17264, 45], [17264, 17781, 46], [17781, 18069, 47], [18069, 18438, 48], [18438, 18930, 49], [18930, 19422, 50], [19422, 19925, 51], [19925, 20321, 52], [20321, 20748, 53], [20748, 21003, 54]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21003, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
6340726052fbc4337998a77212a9d2ce89864f8b
ABSTRACT

Blockchains store a massive amount of heterogeneous data, which will only grow in time. When searching for data on the Ethereum platform, one is required either to access the records (blocks) directly by using a unique identifier, or to sequentially search several records to find the desired information. Therefore, we propose the Ethereum Query Language (EQL), a query language that allows users to retrieve information from the blockchain by writing SQL-like queries. The queries provide a rich syntax to specify data elements and to search for information scattered through several records. We claim that EQL makes it easier to search, acquire, format, and present information from the blockchain.

CCS CONCEPTS

• Information systems → Query languages; Information retrieval query processing; Database query processing; Computing platforms; Digital cash;

KEYWORDS

Ethereum, Blockchain, Query Language, SQL

1 INTRODUCTION

Blockchain was initially used as a distributed ledger allowing monetary interactions without the need for a central trusted authority [2, 8, 10–12]. We prefer the more formal definition that a blockchain is a globally shared transactional database managed by a peer-to-peer network [7]. Each peer in that network stores a complete copy of the blockchain database. The records are database transactions that append new information to the current state of the blockchain. Transactions are packed into blocks, and the blocks are linked in a sequence forming a chain; thus the name blockchain.

Ethereum [5] is a popular blockchain platform that is capable of executing Turing-complete programs, called Smart Contracts [10]. Ethereum also has its own cryptocurrency for monetary interactions, called Ether [5].

Ethereum and other blockchains store a massive amount of heterogeneous data. Ethereum is estimated to have approximately 300GB of data [1]. Retrieving information from this massive data is not an easy task. Moreover, the Ethereum platform only allows direct access to its first-class data elements, which include blocks, transactions, accounts, and contracts [5]. Therefore, if we search for a particular piece of information inside a data element, we need a unique identifier (i.e., either the number or the hash) to access the block containing such information. The alternative is to directly access one block and then sequentially visit its parents to search for the data. Moreover, the information returned by the Ethereum platform when we access its blocks is encoded in a JSON-like structure that we need to interpret to acquire a specific item. Therefore, the Ethereum platform provides neither a semantic way to search for information nor an easy form to present such information.

For example, let us consider a blockchain application that manages a custom cryptocurrency controlled by a single person (i.e., an owner).1 The owner needs to look for suspicious and possibly malicious behavior in the history of one specific account. Currently, the owner has two options to gather that account's data: (i) directly access each block with the specific information, which would require the owner to know beforehand the blocks' hashes or numbers; or (ii) access the most recent block and sequentially search every parent block for any information related to the account. After the owner acquires the information, he still needs to extract it from the stored representation and reformat it for a better visualization.
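To make the cost of option (ii) concrete, below is a minimal sketch of such a sequential scan using the web3.py client. The node URL, the block range, and the target address are placeholders, and error handling is omitted; the point is that every block access is a separate remote call:

```python
from web3 import Web3

# Placeholder endpoint: any Ethereum JSON-RPC node would do.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))

def transactions_involving(target, start_block, end_block):
    """Sequentially scan a block range, collecting transactions that touch `target`.

    Each w3.eth.get_block call is a remote request, so scanning a long range
    is slow -- precisely the pain point a query language is meant to remove.
    """
    hits = []
    for number in range(start_block, end_block + 1):
        block = w3.eth.get_block(number, full_transactions=True)
        for tx in block.transactions:
            # `to` is None for contract-creation transactions.
            if tx["from"] == target or tx["to"] == target:
                hits.append(tx)
    return hits

suspicious = transactions_involving(
    "0x0000000000000000000000000000000000000001",  # hypothetical account address
    start_block=0,
    end_block=1000,
)
```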
In a classical database, when we need to search for particular information, we usually write a query to fetch, present, filter, and format that information. Database query languages like SQL provide a rich syntax to describe the data we want to acquire from the database. Since a blockchain can be considered a database, it would be better if we could use a similar way to fetch information inside the blockchain.

---

1 There is an example in the Solidity documentation called Subcurrency [7]. We present a modified version of this same example in Section 2.2.

In this paper, we propose the Ethereum Query Language (EQL), a query language that enables its users to acquire information from the blockchain by writing SQL-like queries. Our goal is to provide an easier way to fetch, format, and present information extracted from the blockchain database.

The remainder of this paper is organized as follows. Section 2 presents basic concepts on the Structured Query Language and Ethereum smart contracts. Section 3 describes the problems and challenges of searching and retrieving information from the blockchain. Section 4 presents EQL along with its syntax, examples, internal structure (indexes), and limitations. In Section 5, we show a preliminary evaluation describing the performance of EQL queries. In Section 6, we present related work on blockchain. Finally, Section 7 concludes the paper and outlines future work ideas.

2 BACKGROUND

In this section, we present basic concepts on the Structured Query Language (Section 2.1) and Ethereum smart contracts (Section 2.2).

2.1 Structured Query Language

Structured Query Language (SQL) is considered the standard relational database language [4, 14]. Even though SQL is well established as a query language, it can also perform other operations, such as manipulating data or specifying integrity constraints. The commercial success of relational databases is greatly due to SQL being a well-adopted standard [4].

In SQL, the "SELECT" statement is used to query data (i.e., retrieve information) [14]. Basically, a "SELECT" query consists of the clauses select, from, where, group by, and order by. The select clause specifies what to show from the query results; the from clause defines which tables to gather the data from; the where clause defines a condition the information must satisfy to be returned as a result; the group by clause groups the gathered data based on a condition; and finally, the order by clause specifies how the query orders its results. We based our proposed query language on SQL due to its popularity and broad user base. More specifically, our queries use an adapted form of the "SELECT" statement.

2.2 Ethereum Smart Contracts

Smart contracts are programs that are executed on the blockchain platform [8, 10]. A smart contract is like a class: it contains attributes and functions, and it can inherit from other contracts [7]. The correct execution of smart contracts, as well as their resulting states, is ensured by a consensus protocol [10].

Solidity [7] is the primary language to specify smart contracts on the Ethereum blockchain. Since the Ethereum platform is Turing-complete, contracts can define arbitrary rules using the Solidity language. Solidity is a high-level language based on JavaScript, C++, and Python [7]. Contracts written in Solidity are compiled into a specific bytecode that runs on the Ethereum Virtual Machine (EVM). A compiled contract can be deployed onto the Ethereum blockchain by executing a special transaction that allocates an address to it [2].
This address is a 160-bit unique identifier that references the contract. Once deployed, a contract can be executed remotely by client applications.

Listing 1 shows a Solidity contract example (adapted from the Subcurrency example in the Solidity documentation [7]) that manages a simple cryptocurrency controlled by a single person. The contract stores the balances of its accounts using a "mapping" (line 7) that works like a hash table by mapping blockchain addresses to unsigned integers. The constructor (lines 13-15) stores the address of whoever deployed the contract (i.e., the owner of this contract instance). Only the owner can call the function "mint" (lines 18-21), which creates new coins for a specific account. The function "send" (lines 24-30) allows the caller to transfer his coins to another account, provided that the caller has sufficient funds for such an operation.

```solidity
pragma solidity ^0.4.20;

contract CustomCoin {
    address private owner;

    /* The keyword "public" makes it readable from outside */
    mapping (address => uint) public balances;

    /* Events allow light clients to react on changes efficiently */
    event Sent(address from, address to, uint amount);

    /* Constructor: only executed when the contract is deployed */
    function CustomCoin() public {
        owner = msg.sender;
    }

    /* Creates new coins */
    function mint(address receiver, uint amount) public {
        if (msg.sender != owner) return;
        balances[receiver] += amount;
    }

    /* Allows the caller to send coins to another user/account */
    function send(address receiver, uint amount) public {
        if (balances[msg.sender] < amount) revert(); // abort transaction
        balances[msg.sender] -= amount;
        balances[receiver] += amount;
        Sent(msg.sender, receiver, amount);
    }
}
```

Listing 1: Solidity Simple Cryptocurrency Contract, Adapted Example

3 PROBLEM

In this section, we describe in more detail the problems and challenges of acquiring data from a blockchain. Although our research is focused on Ethereum, these problems are also present on other blockchain platforms.

3.1 Massive Data

Blockchain databases already possess a massive amount of data. For example, Ethereum is estimated to have approximately 300GB of data [1]. Moreover, Ethereum processed, on average, 876K transactions per day in December of 2017 (Figure 1). Since data in the blockchain cannot be deleted, the number of recorded transactions will only grow in time. In this context, older information can get overwhelmed by new transactions. Indeed, the common expression "looking for a needle in a haystack" could be updated to "looking for a hash in a blockchain".

3.2 Heterogeneous Data

Blockchain not only stores a great amount of data but also manages a mixture of first-class elements such as transactions, blocks, accounts, and smart contracts. Data elements are represented as transactions, which are bundled into a storage element (blocks). Smart contracts are programming elements that manage data and functions, and accounts are elements that represent users. All of the first-class elements (blocks, transactions, etc.) are different but interrelated by hashes. Even though Ethereum allows access to any of its first-class elements, the heterogeneity of the elements (each one with a different meaning and high-level representation) complicates the acquisition of information.

3.3 Direct Access

In general, blockchains only allow direct access to their elements by using a unique identifier.
This identifier is a hash that is generated for every first-class element stored in the blockchain [3]. The Ethereum platform, in particular, can also use a unique number (related to the order of the element) to access blocks and transactions [6]. In this scenario, a user only needs to provide a hash to the API to invoke the specific fetch method and acquire the desired element (e.g., block, transaction, contract). From the API point of view, this is a good solution because it is fast, simple, and has very little overhead. However, direct access by hash can lead to the following issues from the user's point of view:

- **Hash Storage.** Since the user needs the hash to fetch the information, he/she is also required to store the hash somewhere if he/she wants to acquire the same information later. Otherwise, the user will lose the key to access the desired information in the future. This problem grows when the user needs to "remember" multiple hashes for later use. In such a case, a secondary private database is required to store all the hashes, with related meta-data, to retrieve the information from the blockchain in the future.
- **Sequential Access.** If the user loses the hash, he/she can still find the relevant data by performing a sequential access over the blockchain. In Ethereum, it is possible to start at the most recent block and sequentially access the block's parents. Since Ethereum blocks can also be fetched by their number (and not only their hash), a user could also access a sequence of blocks by just incrementing or decrementing the block number. Sequential access is also useful when a user needs to search for information scattered around multiple blocks. However, searching for information in a sequential way is not efficient. Moreover, it is noteworthy that each block access is a remote procedure call (RPC), which may hinder the performance of any sequential search.

3.4 Data Opaqueness

In order to give users flexibility, Ethereum stores arbitrary information (e.g., contracts, transactions) with a generic representation. Therefore, the stored data is opaque: there is no meta-data describing the information, nor a simple way to know what was recorded. This opaqueness is useful and even necessary from the Ethereum standpoint because it reduces the data representation size and allows the storage of arbitrary structures and behaviors. On the other hand, the opaqueness overcomplicates searching for information, since a user needs to access generic representations without any knowledge of their content.

4 ETHEREUM QUERY LANGUAGE

In this section, we describe the Ethereum Query Language (EQL) version 0.8, a query language designed to acquire information from the blockchain. EQL is publicly available on GitHub. We highlight the following benefits of using EQL to fetch information from the blockchain: (i) describing structural and semantic filters to query for information; (ii) reformatting and transforming the acquired data; (iii) ordering the query results; and (iv) limiting the number of results returned.

4.1 Syntax

The EQL syntax is based on the SQL language. The idea is to allow users to write queries as close to SQL as possible, to facilitate the adoption of EQL, since SQL is a very popular language [4, 14]. Listing 2 shows the main elements of the EQL syntax. The syntax is described using EBNF (Extended Backus-Naur Form). We did not format the terminals identifier and number in double quotes, to highlight that they are not literals.
We omitted the Expression rule so as not to clutter the specification, but it follows a structure similar to SQL expressions.

Listing 2: EQL grammar in EBNF format

```plaintext
<SelectStatement> ::= <SelectClause> [ <WhereClause> ] [ <OrderByClause> ] [ <LimitClause> ]
<SelectClause>    ::= "select" <Expression> [ "as" identifier ] <FromClause>
<FromClause>      ::= "from" <SourceBind> [ "," <SourceBind> ]
<WhereClause>     ::= "where" <Expression>
<OrderByClause>   ::= "order by" <Expression> [ "asc" | "desc" ]
<LimitClause>     ::= "limit" number
<Expression>      ::= ...
<SourceBind>      ::= identifier "as" identifier
```

As we can see from Listing 2, the syntax of EQL queries is very similar to the "SELECT" statement of the SQL language. The EQL "SELECT" statement consists of the following clauses: select, from, where, order by, and limit. Only the select and from clauses are required; the others are optional. We also highlight that EQL, similarly to SQL, is case-insensitive. Unlike SQL, the EQL "SELECT" statement does not have a group by clause.

4.2 Examples

We present a few examples of queries written in EQL. In the current version of EQL, we are able to query blocks and transactions. Querying for information inside contracts (i.e., contract attributes or functions) is not yet supported by our implementation.

Listing 3 shows an example that queries over blocks. In this query, the from clause (line 2) determines that the data come from all Ethereum blocks (ethereum.blocks), using the alias "block". The where clause (line 3) keeps only the blocks created between 2016-01-01 and now that contain more than 10 transactions. The order by clause (line 4) arranges the results by how many transactions are inside each block. Finally, the limit clause (line 5) restricts the results to the first 100 blocks.

Listing 3: EQL Block Query Example

```sql
SELECT block.parent.number, block.hash, block.timestamp, block.number, block.amountOfTransactions
FROM ethereum.blocks AS block
WHERE block.timestamp BETWEEN '2016-01-01' AND now() AND block.transactions.size > 10
ORDER BY block.transactions.size
LIMIT 100;
```

Figure 2 shows the top-20 results of the query in Listing 3, formatted for a better visualization. Figure 3 shows a raw representation of the same top-20 results.

Listing 4 shows a query example that searches for transactions. The from clause (line 2) determines that the data will come from every Ethereum transaction (ethereum.transactions), using the alias "transaction". The select clause (line 1) will show as results the transaction timestamp, the hash of the account that initiated the transaction, the hash of the account that received the transaction, and the amount moved by the transaction multiplied by 0.1. We used the multiplication to show that EQL can transform the resulting data in its select clause by using simple expressions. The where clause (line 3) stipulates that only transactions with a timestamp later than 2018-01-01 are returned; the clause also restricts the results to transactions initiated by accounts with a balance greater than 100 ether. The order by clause (line 4) arranges the results in descending order by the balance of the account receiving the transaction. Finally, the limit clause (line 5) limits the results to only the first ten transactions.
### Listing 4: EQL Transaction Query Example

```sql
SELECT transaction.timestamp, transaction.from.hash, transaction.to.hash, transaction.amount*0.1
FROM ethereum.transactions AS transaction
WHERE transaction.timestamp > date('2018-01-01') AND transaction.from.balance() > 100 ether
ORDER BY transaction.to.balance() DESC
LIMIT 10;
```

Figure 4 shows the results of the query described in Listing 4, formatted for a better visualization.

### 4.3 Collections and Objects

In EQL, a collection is a semantic representation of queried data. More specifically, collections are used in the from clause, as they represent the data we are searching. The EQL language has four predefined collections: ethereum.blocks, ethereum.transactions, ethereum.accounts, and ethereum.contracts. Each of those collections represents all first-class elements of a particular type (blocks, transactions, accounts, or contracts). It is possible to create custom collections from a subset of another one. EQL also supports "views", similar to SQL.

When we retrieve information from a collection of blocks, the results are presented as block objects. As shown in Listing 3, Ethereum blocks have attributes related to their storage structure. Blocks queried by EQL have the following attributes:

- **number**: integer, the block's number.
- **hash**: hash (binary 32 bytes), the block's hash.
- **parentHash**: hash, the parent block's hash.
- **parent**: block, the parent block (EQL automatically fetches the parent information using the hash).
- **nonce**: integer, the proof-of-work nonce value.
- **timestamp**: timestamp, the Unix timestamp when the block was created.
- **size**: integer, the size of the block in bytes.
- **miner**: address (binary 20 bytes), the account that mined this block.
- **difficulty**: integer, the proof-of-work difficulty for this block.
- **totalDifficulty**: integer, the total difficulty of the chain up to this block.
- **gasLimit**: integer, the maximum gas allowed in the block.
- **gasUsed**: integer, the total gas used by all transactions in this block.
- **extraData**: binary, variable size, a general field to contain extra data for the block.
- **transactionsRoot**: hash, the root hash of the block's transaction trie.
- **transactionsHashes**: set of hashes, the hashes of the transactions in this block.
- **transactions**: set of transactions, the transactions in this block.
- **amountOfTransactions**: integer, the number of transactions in this block.
- **uncleHashes**: set of hashes, the uncles' hashes.
- **uncles**: set of blocks, the uncle blocks.

In the above attributes, the ones that are sets (e.g., transactions) can also be called as functions, passing an index as parameter. In this case, the function returns only the element at the specified index (or null if the index is out of bounds). For example, block.transactions(3) would return the third transaction in the current block.

Transaction objects have a different representation. It is noteworthy that while a block contains many transactions, a transaction object is contained inside a single block. In EQL, transaction objects have the following attributes:

- **hash**: hash, the transaction's hash.
- **nonce**: integer, the nonce of the sender account (i.e., for accounts the nonce counts the number of transactions the account has created; it is a way to avoid double spending).
- **blockHash**: hash, the containing block's hash.
- **blockNumber**: integer, the containing block's number.
- **block**: block, the block that contains this transaction.
- **transactionIndex**: integer, the transaction's index position in the block.
- **fromHash**: address, the address of the account that initiated this transaction (i.e., the sender).
- **from**: account, the sender's account.
- **toHash**: address, the address of the receiver of this transaction.
- **to**: account, the account of the receiver of this transaction.
- **value**: integer, the amount transferred, in Wei.
- **gasPrice**: integer, the gas price set by the sender, in Wei.
- **gas**: integer, gas consumed by this transaction.
- **input**: binary, variable size, the data sent along with the transaction.
- **timestamp**: timestamp, the Unix timestamp when the transaction was created.

Accounts are simple objects that have the following attributes in EQL:

- **address**: address, the account's address.
- **name**: string, the account's name.
- **balance**: integer, the amount of cryptocurrency (in Wei) in this account.

EQL has limitations when dealing with contracts (described in more detail in Section 4.5). A contract object is a special type of account. In the current version, contracts have the following attributes:

- **address**: address, the contract's address.
- **name**: string, the contract's name.
- **balance**: integer, the amount of cryptocurrency (in Wei) in this contract.
- **bytecode**: binary, variable size, the contract code.

4.4 Indexes

An index is a summarization of data stored in a structure that improves the performance of retrieval operations. The basic idea is to allow a more efficient search into the database. In our particular case, we index blockchain data (e.g., blocks, transactions) with its related hash to speed up data fetching. For EQL, we implemented a Binary Search Tree (BST) to serve as the index structure. The BST employs a two-dimensional array where the first dimension of each entry is used for storing the property value, and the second dimension is used for storing a set of hashes to the elements that correspond to this specific value. We chose this implementation because of its simplicity for selecting an interval of indexes in any comparison operation, such as "greater than" or "less than". We acknowledge that a BST has a high storage requirement. However, we wanted a simple and fast solution for our first implementation of EQL. Moreover, EQL builds its indexes automatically, without the need for user interaction.

4.5 Limitations

The current version of the EQL implementation (version 0.8) has some limitations. First, we are not able to search inside contract attributes when querying. We are still able to query the blocks and transactions that were created to record a smart contract attribute change; however, we would need to use the information in blocks and transactions to find information related to contracts. We are working to circumvent this limitation and offer contract querying in the next release of EQL.

Another limitation is also related to smart contracts: we are unable to use smart contract functions in EQL expressions. Since smart contracts store not only data but also functional behavior, it might be necessary to execute a contract function to properly query for contract information. We acknowledge that allowing users to call any contract function could lead to performance bottlenecks, security issues, and re-triggering the contract. Therefore, we plan to allow "read-only" smart contract functions in EQL expressions.

Another limitation is that we placed a maximum upper bound on the number of results returned by EQL.
Even though the limit clause is optional, our implementation will always return at most 1000 results, because of memory constraints. Although we have no plans to allow unlimited results, we are working on an "offset" definition for the query so that a user can retrieve larger results in installments.

The lack of a group by clause in EQL is another limitation. It hinders the expressiveness of the EQL language, as we cannot perform aggregation queries. We are working to add group by and aggregate functions in a future version of EQL.

---

5 In Ethereum, uncle blocks are valid blocks that were mined but rejected. The uncles are orphaned blocks that contribute to the security of the main chain. Uncle blocks do not contain transactions.

5 PRELIMINARY EVALUATION

As a preliminary form of evaluation, we tested the performance of EQL in retrieving information. We compared EQL against using a driver$^6$ to directly access the information inside the blockchain. We use direct access as a baseline for comparison, since it is not possible for EQL (or any approach) to perform better than direct access.

For the evaluation, we employed direct access to fetch 100 randomly selected blocks. Then, we used the EQL query shown in Listing 3 (Section 4.2) to fetch 100 blocks. First, we executed both retrieval operations with an empty cache. Second, we repeated the same operations to verify how both would perform when the searched information is already in the cache. It is noteworthy that the driver we used for direct access maintains a cache to improve its performance; this is not a standard feature of directly accessing Ethereum.

Table 1 shows the performance comparison between both approaches, without and with information in the cache. Table 1 measures the time (average, standard deviation, mode, maximum, and minimum) to fetch one block, in milliseconds. We were not able to calculate the standard deviation for the cached executions because the numbers were too small.

Table 1: Performance Tests to Fetch Blocks, Measured in Milliseconds

| | Avg | St.Dev | Mode | Max | Min |
|---|---:|---:|---:|---:|---:|
| Direct without cache | 5.88 | 12.36 | 1.60 | 127.2 | 0.30 |
| EQL without cache | 159.69 | 13.65 | 107.37 | 225.64 | 88.05 |
| Direct with cache | 0.04 | – | 0.03 | 0.05 | 0.03 |
| EQL with cache | 0.04 | – | 0.03 | 0.05 | 0.03 |

As we can see from Table 1, when both approaches do not use cached information, direct access is much faster than EQL. This is expected because direct access fetches blocks directly using their hashes, while EQL searches for information inside the blocks (i.e., the timestamp and number of transactions, as defined in the query in Listing 3) to see which ones to return as the query result. When the searched information is already cached, both approaches reach similar results.

6 RELATED WORK

Porru et al. [13] acknowledge the need to create and adapt tools and techniques for blockchain-oriented software (BOS). The authors define the term BOS as software that interacts with a blockchain. Basically, they discuss the software engineering issues of dealing with BOS.
The authors first present challenges in state-of-the-art BOS; second, they analyze 1184 GitHub projects using blockchain; and finally, they propose ideas for research on BOS. Even though the authors present very interesting research possibilities, they did not foresee research on query languages for blockchain.

Bartoletti et al. [1] propose a framework for blockchain analytics coded as a Scala library. Their framework works on both the Ethereum and Bitcoin platforms, and it employs a general-purpose abstraction layer to promote reuse. One great feature of the authors' framework is the ability to integrate data from sources besides the blockchain, such as a NoSQL database. The authors contrast their framework's features against five other tools, but they do not conduct a performance comparison. This work is interesting because the authors combine blockchain data with a secondary database. We have plans to incorporate into EQL the capability to join blockchain data with a secondary database in a query.

Kalodner et al. [9] implement an open-source blockchain analysis platform called BlockSci. Their platform comes with many tools and features to better help with analysis. For instance, the authors claim it is 15 to 600 times faster than other tools. They support the following blockchains: Bitcoin, Litecoin, Namecoin, and Zcash. Similarly to EQL, BlockSci also uses indexes in its implementation. Unlike EQL, BlockSci's indexes are stored in an SQLite database. Moreover, the authors claim that many analyses do not require indexes at all. This contrasts with EQL, whose indexes are essential to speed up retrieval performance. Although BlockSci provides better visualization and navigation of blockchain data, it does not provide an easy way to search or filter information.

There is much research on the security of blockchains and smart contracts. Luu et al. [10] analyze the security flaws in Ethereum smart contracts. The authors identified security problems and possible ways of attack by exploiting smart contracts. Then, the authors formalize solutions for the identified security issues. They also implement a tool that checks for problematic code in a smart contract. Their tool processed over 19K Ethereum contracts and found insecure coding practices in approximately 8K of them.

Juels et al. [8] investigate what they call criminal smart contracts (CSCs). A CSC is a contract that facilitates illicit activities and rewards those interacting with it. The authors create their own CSC as a proof of concept. They also propose countermeasures against CSCs, which could help the blockchain community prevent CSC proliferation. Some of the described criminal activities could be more easily caught by querying blockchain data.

7 CONCLUSION

The amount of data stored in a blockchain is massive, and that data is also heterogeneous and opaque. Moreover, the Ethereum platform only allows direct or sequential access to its blocks. In this context, searching for information inside the blockchain is a challenging task, because we must sequentially access a huge amount of opaque data. To help with this challenge, we proposed EQL, a query language that allows users to retrieve information by writing SQL-like queries. Our implementation of EQL automates the task of sequentially searching through generic opaque structures, and provides an easier way to specify the information we want to acquire at a higher abstraction level.
Although the current implementation (version 0.8) still shows limitations when dealing with smart contract information, we are able to fetch records related to contracts, but only in the form of blocks or transactions. We tested the performance of EQL against directly accessing blocks as a baseline comparison. As expected, EQL is much slower when the information is not cached (an average of 159 milliseconds to fetch a block) than direct access (an average of 5.88 milliseconds to fetch a block). However, when the searched information is already cached, EQL reaches results similar to direct access (an average of 0.04 milliseconds to fetch a block). Even though using EQL can take longer than direct access, the goal is to help users who cannot access their information directly and need to search for it in the blockchain. Since EQL automates the searching task, it simplifies information retrieval for Ethereum.

\(^6\) The driver used for directly accessing the blockchain is also publicly available on GitHub at https://github.com/sbragagnolo/Fog (verified 2018-03-08).

For future work, our first priority is to tackle the limitations of the implemented version of EQL, especially support for smart contract querying. Moreover, we plan to add support for group by clauses and aggregate functions as well. We are also planning to create a tool for writing queries and displaying results, modelled on SQL database tools (e.g., MySQL Workbench). Another idea for future work is a more in-depth performance evaluation, as well as a user feedback evaluation of EQL. Moreover, we plan to allow EQL to merge blockchain data with data from other sources (e.g., NoSQL databases, relational databases) when presenting results.

ACKNOWLEDGMENT

This work was supported by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council, and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020. This research was also supported by UTOCAT.

REFERENCES
{"Source-Url": "https://rmod.inria.fr/archives/papers/Braga18b-WETSEB-Query.pdf", "len_cl100k_base": 6935, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 26709, "total-output-tokens": 8407, "length": "2e12", "weborganizer": {"__label__adult": 0.0004591941833496094, "__label__art_design": 0.0004105567932128906, "__label__crime_law": 0.000629425048828125, "__label__education_jobs": 0.0006718635559082031, "__label__entertainment": 0.00011491775512695312, "__label__fashion_beauty": 0.00020897388458251953, "__label__finance_business": 0.001434326171875, "__label__food_dining": 0.00045418739318847656, "__label__games": 0.0009593963623046876, "__label__hardware": 0.0013580322265625, "__label__health": 0.000766754150390625, "__label__history": 0.0003705024719238281, "__label__home_hobbies": 0.00014150142669677734, "__label__industrial": 0.0006694793701171875, "__label__literature": 0.00037479400634765625, "__label__politics": 0.0004181861877441406, "__label__religion": 0.0005059242248535156, "__label__science_tech": 0.1470947265625, "__label__social_life": 0.00010144710540771484, "__label__software": 0.0182952880859375, "__label__software_dev": 0.8232421875, "__label__sports_fitness": 0.0002853870391845703, "__label__transportation": 0.000667572021484375, "__label__travel": 0.00021266937255859375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 35779, 0.0273]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 35779, 0.47269]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 35779, 0.88758]], "google_gemma-3-12b-it_contains_pii": [[0, 4011, false], [4011, 8879, null], [8879, 13114, null], [13114, 16918, null], [16918, 19611, null], [19611, 25004, null], [25004, 31658, null], [31658, 35779, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4011, true], [4011, 8879, null], [8879, 13114, null], [13114, 16918, null], [16918, 19611, null], [19611, 25004, null], [25004, 31658, null], [31658, 35779, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 35779, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 35779, null]], "pdf_page_numbers": [[0, 4011, 1], [4011, 8879, 2], [8879, 13114, 3], [13114, 16918, 4], [16918, 19611, 5], [19611, 25004, 6], [25004, 31658, 7], [31658, 35779, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 35779, 0.0283]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
aeeb58ca0c15863cd19ab1eb6f21a5662b43b836
Improving Performance of DOM in Semi-structured Data Extraction Using WEIDJ Model

Ily Amalina Ahmad Sabri, Mustafa Man
School of Informatics and Applied Mathematics, Universiti Malaysia Terengganu, Terengganu, Malaysia

ABSTRACT

Web data extraction is the process of extracting the information a user requires from a web page. The information consists of semi-structured data, not data in a structured format, and the extraction operates on web documents in HTML format. Nowadays, most people use web data extractors because the extraction involves large amounts of information, which makes manual information extraction time consuming and complicated. We present in this paper the WEIDJ approach to extracting images from the web, whose goal is to harvest images as objects from template-based HTML pages. WEIDJ (Web Extraction of Images using DOM (Document Object Model) and JSON (JavaScript Object Notation)) applies DOM theory to build the structure and uses JSON as the programming environment. The extraction process takes as input both a web address and the structure to be extracted. WEIDJ then splits the DOM tree into small subtrees and applies a search algorithm over visual blocks on each web page to find images. Our approach works at three levels of extraction: a single web page, multiple web pages, and the whole web site. Extensive experiments on several biodiversity web pages were conducted to compare the extraction times of DOM, JSON, and WEIDJ for single web pages. The experimental results show that, with our WEIDJ model, image extraction can be done quickly and effectively.

Keywords: DOM (Document Object Model); JSON (JavaScript Object Notation); Information Extraction; Semi-structured Data.

1. INTRODUCTION

Data integration is considered one of the hot issues to be solved, especially integrating unstructured data of multiple types and formats stored in different locations [1]. The integration of structured and unstructured data concerns many organizations because of the valuable information and knowledge it can yield [2]. In data integration, imposing a single global schema on all users can seriously interfere with their individual work, as it violates the autonomy of information receivers. This autonomy implies that the use of integrated information must be non-intrusive [3]: users should not be forced to adapt to any standard concerning the structure and meaning of the data they request. The desired kind of data integration can thus be characterized by the optimal fitness of the supplied information for a certain purpose, concerning organization, presentation, and semantics. Stated differently, the integrated information that is provided should have all the qualities required by a particular user in a specific task. Data resides in different forms, ranging from unstructured data (USD) in file systems to highly structured data in relational database systems. We need to consider three types of data: structured data (SD), unstructured data (USD), and semi-structured data (SSD), as shown in Figure 1 [4]. Data extraction is part of the data integration process: it extracts useful information based on the user's requirements. Extraction of information from the web is called "Web Data Extraction".
It allows users to analyse semi-structured data from different views and arrange it in a structured form such as a tabular format. Extracting and analysing information from web pages is an exciting research area in the data extraction field. The Internet has made the World Wide Web (WWW) an ocean of information that users can collect and analyse; the web is the largest platform and a large pool of information sources for people. The information usually available on the web includes advertisements, navigation, contact information, and decoration. Much of the miscellaneous information found on a web page is unrelated to the valuable information; it is called noise. Furthermore, a web page may contain multiple topics that are not related to each other. Most web applications developed with recent progress in computing technologies also contain multimedia data such as images, graphics, and audio, whose volume has increased over the past several years [5]. Recently, detecting or extracting specific information or web page content has become a user priority [6]. Multimedia content should be just as easy to retrieve and access as alphanumeric data. Human perception is the most important consideration in making software comfortable and easy to use, and much software has been developed using various techniques. The Document Object Model (DOM) is a technique that structures web page content in detail, with each HTML tag placed in a tree structure. The majority of web pages are written in HTML rather than XML. The main objective of this research is to give users an easily understood view of the structure of web page content, compared with more complicated techniques. Naturally, people understand a visual presentation better than a textual one; when a web page is presented to a user, the visual view can help the user divide the page into several parts. In previous works [7, 8], we discussed the experimental process of image extraction based on DOM and JSON. Based on those studies, DOM is faster than other approaches; however, DOM consumes a large amount of memory when the HTML structure becomes large, and this heavy memory usage affects operational speed. Thus, we present in detail the Wrapper for Image Extraction using DOM and JSON (WEIDJ) model for extracting images from a web page. Every web page has its own structure, including the main topic, related topics, additional information, advertisements, contact information, images, audio, and video files. Web pages provide a large pool of information that can be used for beneficial purposes. This paper proposes a new tool to extract images that demonstrates better performance than existing methods such as DOM and other conventional methods. In this research, we highlight the extraction problem from the user's perspective with regard to the search for multimedia information. The extraction starts with a description of the targeted web page, which is provided via a query interface. Our proposed model, WEIDJ, aims to improve the processing time for loading images and the accuracy of the extraction process; it is more efficient and lightweight than the time-consuming manual process. The remainder of this paper is organized as follows. Related works and applications are discussed in the next section, with special interest in the integration of DOM and JSON for extracting information from web pages in the WEIDJ model.
The Results section demonstrates the experimental results of extracting images using the three approaches. Finally, we offer our conclusions and plans for future work in the Conclusion.

2. RELATED WORKS

Web data extractors are applications used to extract semi-structured data from web pages. Usually the extraction process involves a web data extraction system and a web source. The system interacts with the web page and extracts semi-structured data. The extracted content may contain various multimedia elements such as audio, video, text, and images. After the extraction process, the data is stored in a temporary folder or directly in a database. Such systems have been developed to assist humans in a wide range of applications. The advantages of web data extraction systems are that they can collect meaningful data efficiently, in a structured way, and decrease human effort. There are many discussions of the scientific methods and techniques from different perspectives; designs and implementations come from various disciplines such as machine learning, natural language processing, and logic.

2.1 Document Object Model (DOM)

The Document Object Model (DOM) is a programming API for HTML and XML documents. With the DOM, one can create and build documents, and manipulate the elements and contents of HTML and XML documents (add, modify, or delete). Narawade et al. [9] developed a page-level data extraction system using the DOM tree. There are two modes of data extraction: online and offline. Online mode is applied in real-time extraction; offline mode is the opposite. The system has three stages: web page renderer, section selector, and pattern generator. It extracts content dynamically from web pages with different structures, such as blogs, forums, and articles. The DOM tree structure is applied for content extraction in order to obtain a better representation of the data format. Sangeetha [10] proposed a tool that combines Resource Description Framework (RDF)-based search and DOM-based search to extract relevant data. RDF is used along with the DOM to answer user queries precisely, and a DOM segment fusing algorithm analyses and fuses the information extracted from the web. Mehta and Narvekar [11] redesigned the basic DOM approach to content extraction to make it applicable to different web page structures such as blogs, forums, and articles. Their tool can extract information using two different search methods: a runtime-generated list and a stored URL list.

2.2 JavaScript Object Notation (JSON)

The development of web applications has become an attractive discipline in the web environment, in which the use and composition of different API technologies is very important and influential. In recent years, JSON-based web applications have been spreading across the web environment. JSON is a lightweight data-interchange format. It is self-describing and easy to understand: easy for humans to read and write, easy for machines to parse and generate, and very efficient for data extraction and query retrieval [12]. JSON, the DOM, and XML are different technologies developed to solve different problems and designed for different purposes. Table 1 contrasts these technologies.
<table>
<thead>
<tr> <th colspan="2">Table 1. Comparisons of Different Technologies</th> </tr>
<tr> <th>Technology</th> <th>Characteristics</th> </tr>
</thead>
<tbody>
<tr> <td>JavaScript Object Notation (JSON)</td> <td>A lightweight, text-based format that can be used as a data-interchange format; human readable.</td> </tr>
<tr> <td>Document Object Model (DOM)</td> <td>Used for representing and manipulating HTML and XML documents.</td> </tr>
<tr> <td>eXtensible Markup Language (XML)</td> <td>Designed to store and transport data; readable by humans and machines; more complex than JSON.</td> </tr>
</tbody>
</table>

The works [13-15] propose Ducky, a semi-automatic system focused on data extraction that uses a JSON approach to extract data from web sources and represent all the information in a structured format. In Ducky, the configuration file is defined as a well-formed JSON document and contains several parameters used in the data extraction process. Data management and extraction rules are specified in this file, which is readable and can be expressed very simply in the JSON format. Wang [16] proposed a recursive algorithm to translate XML and JSON objects into serialized forms based on a multi-tree data structure. This research is motivated by the wide use of XML and JSON in application development. In that experimental work, JSON is analysed as string arrays, which is why JSON is faster than DOM-style XML objects. JSON has been widely used in the actual development of web applications, often with similar functions but in different domains and applications. A web wrapper is a program that extracts information from web sources and translates it into relational form; wrappers can apply JSON and the DOM in their functionality.

2.3 Wrappers

Nowadays, many systems for data extraction from web pages have been developed [17]. A traditional approach is to write specific programs, called "extractors" or "wrappers", that extract the contents of web pages based on certain criteria. A survey offering a rigorous taxonomy for classifying web data extraction systems was presented by Laender et al. [18]. Alarte et al. [19] proposed a method to remove irrelevant information from web templates; a DOM tree is used to analyse the similarity between a collection of web pages detected using hyperlink analysis. Abidin et al. [20] introduced automated capture of unstructured data for structured storage, dealing with multimedia data. That work notes that unstructured data such as multimedia files, documents, spreadsheets, news, emails, memoranda, reports, and web pages are difficult to capture and store in common database storage. Even though many tools and techniques have proved successful at transforming unstructured data into valuable information, they often fail on unstructured or semi-structured data. A web wrapper is a procedure that may implement several techniques in its algorithms; the goal is to find the data required by human users by extracting unstructured or semi-structured data from web sources and transforming it into structured data for multiple purposes. Lately, the problem of extracting information from previously unseen sites has received much attention, but the only conclusive results concern unstructured or semi-structured documents.
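To make the DOM- and JSON-based extraction styles discussed above concrete, the following is a minimal sketch, using only the Python standard library, of what such a wrapper does at its core: parse an HTML document into a tag stream, collect the `src` references of `<img>` tags, and emit them as JSON. It is our own illustration, not code from any of the surveyed systems.

```python
import json
from html.parser import HTMLParser

class ImgExtractor(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

html = '<div><img src="turtle.jpg"><p>text</p><img src="logo.png"></div>'
parser = ImgExtractor()
parser.feed(html)
print(json.dumps(parser.images))  # ["turtle.jpg", "logo.png"]
```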
The theme of Web Data Extraction is covered by a number of reviews. Ferrara et al. [17] presented a survey of tools and techniques for Web Data Extraction, with the goal of providing a structured and comprehensive overview of the research area. In 2008, a relevant survey on information extraction was presented by [21]; it argues that the automatic extraction of information from unstructured sources has opened up new avenues for querying, organizing, and analyzing data by drawing upon the clean semantics of structured databases and the abundance of unstructured data. Flesca et al. [22] surveyed approaches, techniques, and tools for extracting information available on the web. In Table 2, we summarize applications that have been developed for semi-structured data extraction. However, little research has been done on data extraction for multimedia data such as images, audio, and video.

<table>
<thead>
<tr> <th colspan="3">Table 2. Applications for Semi-structured Data Extraction</th> </tr>
<tr> <th>Author</th> <th>Mode</th> <th>Semi-structured data (Text / Audio / Video / Image)</th> </tr>
</thead>
<tbody>
<tr> <td>Raza &amp; Gulwani [23]</td> <td>Online</td> <td>/</td> </tr>
<tr> <td>Narawade et al. [9]</td> <td>Online / Offline</td> <td>/</td> </tr>
<tr> <td>Song, Sun, &amp; Liao [25]</td> <td>Online</td> <td>/ / /</td> </tr>
<tr> <td>Bhardwaj &amp; Mangat [26]</td> <td>Online</td> <td>/ / /</td> </tr>
<tr> <td>Kadam &amp; Pakle [27]</td> <td>Online</td> <td>/</td> </tr>
<tr> <td>López et al. [28]</td> <td>Online</td> <td>/ / /</td> </tr>
<tr> <td>Abidin, Idris, &amp; Husain [20]</td> <td>Online</td> <td>/ / /</td> </tr>
</tbody>
</table>

The Document Object Model (DOM) can be applied directly to find the required information in HTML documents. Abidin et al. [20] first construct the DOM tree structure; then unnecessary nodes, such as script and style nodes, are filtered out. A classification step then searches for the classes of multimedia data: a media item is recognized when the parser finds the keyword "src=" in the data structure, and finally the multimedia data can be extracted. However, this approach requires a large amount of processing time when extracting from web pages with large HTML structures. Moreover, the extraction process extracts all images without considering repeated files, as shown in Figure 2. The WEIDJ model is proposed to overcome these limitations of the DOM model in extracting images.

3. WEIDJ MODEL

This research proposes an information integration model for the extraction of semi-structured data such as images, video, audio, and text using the Document Object Model (DOM) and JavaScript Object Notation (JSON). Based on the proposed integration model, a mediator tool, called a wrapper, is developed experimentally to extract semi-structured data from heterogeneous sources such as web pages. Experiments are conducted on the Setiu Wetlands web site and a biodiversity web page dataset as a testbed. The aim of this work is to find an extraction approach that can identify and extract images. In this paper, we propose the WEIDJ model to extract images from a web page, as shown in Figure 3. It also mines image information and focuses on arranging the extracted data in a tabular format. The tool aims at speed and performance of image extraction. Many applications extract information and then arrange it into a structured format [29, 30]; mining information records in data regions plays an important role.
It is easier to extract data from data regions because they contain useful data such as images, text, audio, and video; a technique is therefore needed for mining data regions. This model uses the DOM tree to mine data regions in a web page.

Figure 3. WEIDJ extraction model

3.1 DOM Tree Construction

Information extraction from a web page initially requires the web address, or uniform resource locator (URL), of the page. When the user inputs a single URL, a single page is the extraction target; when the user inputs multiple URLs, the extraction targets page-wide information. This paper focuses only on the record-level extraction task, and the approach focuses on image extraction from the surface web. Most web sites are developed in HTML rather than XML. The DOM can define and manipulate HTML documents as a tree structure [9]: it defines the logical structure of documents and the way they can be accessed and manipulated, and is also known as a node tree. Everything in a web page is made of HTML tags and elements, text nodes, and other nodes. In this research, the DOM is applied to HTML documents to turn a web page into a tree structure. Although objects in the DOM tree can be manipulated with certain methods, in this case we transform the structure of the web page into a tree in order to recognize data regions. This process is important because data regions contain useful information that is retrieved based on HTML elements. Every web page is developed using HTML elements, and each HTML element can hold several data records. This module discovers all elements in the web document, such as <html>, <table>, <div>, <img>, <form>, and <video>; all tags, from the root node down to its child nodes, are considered, so data regions can be recognized from the HTML elements.

3.2 Data Region and Classification

Multimedia data such as images, video clips, animations, graphics, and audio have increased rapidly over the past several years. Users have begun to expect multimedia content to be easy to access: they want to find relevant images that appear in web pages, see video clips related to the articles they read, and listen to audio. It is important to provide integrated access to the diverse types of multimedia semi-structured data stored in disparate data sources, and many web data extractors today deal with multimedia data. Data classification matters because it categorizes data according to the required need; the classes of different objects are identified during classification, and classifying data patterns is important for data extraction from the web page. Figure 4 illustrates the process of data extraction and classification. Four classes have been identified: image, text, audio, and video. Multimedia data is identified when the parser finds the keyword "src=" in the data structure during the extraction process; this keyword is the source reference used to locate the media data. Once the location of the required source is determined, the parser identifies its data type. Table 3 shows the media data sources contained in HTML documents.

Figure 4. Data classes
Table 3. Data Sources of Media

<table>
<thead>
<tr> <th>Type</th> <th>"src=" source reference</th> </tr>
</thead>
<tbody>
<tr> <td>Text</td> <td>&lt;a&gt; &lt;p&gt; &lt;br&gt; &lt;font&gt; &lt;size&gt;</td> </tr>
<tr> <td>Image</td> <td>"src=*.jpg gif png bmp"</td> </tr>
<tr> <td>Audio</td> <td>"src=*.wav mp3 raw midi"</td> </tr>
<tr> <td>Video</td> <td>"src=*.flv wmx mp4 avi"</td> </tr>
</tbody>
</table>

3.3 Content Structure Development Process using Visual Segmentation

A visual segmentation is developed using each leaf node as an object. Visual segmentation is proposed because it is easier to understand the structure of a web page visually than from detailed text. This segmentation is important for checking whether each block contains required information. When conducting the image extraction experiments with the DOM and JSON methods, we found that not all images could be extracted; as a solution, each block must be checked for images. Figure 5 shows an example of the visual segmentation of the layout structure of www.wwf.org.my.

Figure 5. Visual segmentation of layout structure

At this level, we aim to find all suitable visual blocks contained in the current web page. In principle, every node in the DOM tree can be presented as a visual block, but nodes such as <TABLE> and <P> are not suitable to be represented as single visual blocks, because they are commonly used for organizational purposes. Several rules are considered when extracting visual blocks:

- Tag cues such as <hr>, usually displayed as a horizontal rule in a visual browser, cause the section to be partitioned when a DOM node contains them.
- If a DOM node has a background colour different from that of one of its child nodes, it is not divided.
- Separators can be used as indicators to divide different sections within a page.

This visual block segmentation is applied to check every single multimedia element, so that all required information can be retrieved.

3.4 JavaScript Object Notation (JSON)

JSON is a syntax for storing and exchanging data that originates from JavaScript object notation. Its advantage is that it is an open-standard format that uses human-readable text to transmit data objects. Yusof and Man [12, 31] state that JSON is the best choice for storage and fast querying of information; outputs range from simple to complex and highly nested structures. Figure 6 shows how the JSON data set treats a column: $json_url_path is used as a constructor to tell the JSON data set to include the nested structures of the JSON object. In this particular example, the user inputs the URL as the JSON path; we specify the path using the 'src' value, which simply finds the image information in the nested image structure.

```
# load json - web api for image extraction
$json -> $json_url_path
$json -> $extract_content()
loop through records
    search (json nodes: key -> value)
    # save selected images
    $record -> save();
```

Figure 6. Basic steps of WEIDJ's JSON programming

Figure 7 shows the WEIDJ algorithm proposed in our research. The algorithm applies the DOM to organize the HTML document into a hierarchical structure, then builds visual segmentation blocks for the HTML page to check for available image elements in each block; this check ensures that all required images can be extracted without failure. A sketch of the classification step from Section 3.2 follows.
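As a concrete illustration of the classification step, the sketch below maps a discovered "src=" reference to a media class using the extension lists of Table 3; the function name and structure are our own, not taken from the WEIDJ implementation.

```python
import os

# Extension lists taken from Table 3; everything else defaults to text.
MEDIA_TYPES = {
    "image": {".jpg", ".gif", ".png", ".bmp"},
    "audio": {".wav", ".mp3", ".raw", ".midi"},
    "video": {".flv", ".wmx", ".mp4", ".avi"},
}

def classify_media(src: str) -> str:
    """Classify a src= reference by its file extension."""
    ext = os.path.splitext(src.lower())[1]
    for media_type, extensions in MEDIA_TYPES.items():
        if ext in extensions:
            return media_type
    return "text"

print(classify_media("photos/turtle.JPG"))  # image
print(classify_media("clips/match.mp4"))    # video
```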
The JSON environment approach is applied when extracting the images. Additional rules, such as filtering repeated filenames and removing noisy images such as logos and buttons, are considered to make sure the extracted images are valuable information. Finally, the images and their details are displayed in tabular format before the user stores them in a multimedia database.

Figure 7. The WEIDJ algorithm (Algorithm 1: Extraction of Images)

Figure 8. YouTube channel

Figure 9. Biodiversity Explorer web page

Bhardwaj and Mangat [26] state that tag elements whose size is greater than 120,000 pixels (for example, tags of 300x400 or 400x300) have a height value suitable for extraction. In contrast, in our experimental testing on a YouTube channel (Figure 8), we found that the size of each video thumbnail is set to 196x110 and the size of each icon is set to 88x88. This can be used as a guideline that tag sizes below 120,000 pixels must also be considered for data extraction. For our image extraction rules, image tags smaller than 50x50 are not extracted and are considered noisy information, and a rule for avoiding the extraction of repeated image files is also added. Images larger than 50x50 must be considered for extraction because some web pages set valuable images to sizes as small as 70x70; in the Biodiversity Explorer web page shown in Figure 9, for example, images are set to 70x70.
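The size and duplicate rules just described can be expressed compactly. The sketch below is our own rendering of those rules: the 50x50 threshold and the filename de-duplication come from the text above, but the function itself does not appear in the paper.

```python
def keep_image(width: int, height: int, filename: str, seen: set) -> bool:
    """Apply WEIDJ's noise rules: drop images smaller than 50x50
    (likely icons or buttons) and skip filenames already extracted."""
    if width < 50 or height < 50:
        return False
    if filename in seen:
        return False
    seen.add(filename)
    return True

seen: set = set()
print(keep_image(70, 70, "species.jpg", seen))  # True: valuable 70x70 image
print(keep_image(40, 40, "icon.png", seen))     # False: below 50x50, noise
print(keep_image(70, 70, "species.jpg", seen))  # False: repeated filename
```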
4. RESULTS

The main motivation for this paper is to extract images, mine image details such as image links and image sizes, and store the selected images in a single multimedia database. In an ideal scenario, a person who wants to save images can extract them manually, saving each image one by one. But how can images be extracted and mined manually when they come in large volumes? Another solution must be developed to extract images automatically and reduce the time consumed. An important part of an extraction system is its database of records, because records that have been extracted and saved can be used for beneficial purposes such as documentation and report analysis. A data extraction engine needs to be able to extract all the required data from a web page. We need to define the uniform resource locator (URL) of the web page where the target data is located; this is the initial step of extracting data from a specific web page. Figure 10 shows the Setiu Wetlands web page of WWF-Malaysia. WWF stands for the World Wide Fund for Nature; it was formerly known as the World Wildlife Fund but adopted its current name to show that it also works on other environmental issues, not just wildlife.

Figure 10. A snapshot of images on the WWF web page

In this experimental work, samples of WWF web pages were taken, and the content extraction experiments were performed on the sampled data using the HTML source file. This file contains the image information to be extracted. Within the brackets '{' and '}' there is a list of commands consisting of the image and image URL from the sample extractor specification file. Most of the images are .jpg files, but our wrapper is able to extract images in various formats such as .jpg, .gif, .bmp, and others. Figure 11 shows the extracted information arranged in a structured way; the syntax indicates that the information will be displayed in tabular format. The extraction process in this example is driven by a table definition: the initial command $json_url fetches the contents of the source file whose URL is given in ['url']. After the file has been fetched, the contents are mapped to specific fields: $no, $img_url, image, $size_in_bytes, and $total_time_load_page. The extraction results are represented in tabular format.

Figure 11. The extracted information in JSON format

In this paper, we have worked on the dataset called Science and Technology Resources on the Internet, "Biodiversity Web Resources", which comprises 43 online databases [32]. The experiments focus on five web pages, each used individually as input for image extraction. Table 4 shows the web addresses of the selected pages and the domains used for the extraction. The dataset is composed of a collection of web domains with different page structures, which allows us to study the performance of image extraction in different contexts.

Table 4. Domains of the Web Pages

<table>
<thead>
<tr> <th>url</th> <th>Uniform Resource Locator (URL)</th> <th>Domain</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td><a href="http://tolweb.org/tree/">http://tolweb.org/tree/</a></td> <td>Tree of Life Project (ToL)</td> </tr>
<tr> <td>3</td> <td><a href="http://ocean.si.edu/">http://ocean.si.edu/</a></td> <td>Ocean Portal: Smithsonian Institution</td> </tr>
<tr> <td>4</td> <td><a href="http://www.iucn.org/">http://www.iucn.org/</a></td> <td>International Union for Conservation of Nature</td> </tr>
<tr> <td>5</td> <td><a href="http://www.endangeredspeciesinternational.org">http://www.endangeredspeciesinternational.org</a></td> <td>Endangered Species International</td> </tr>
</tbody>
</table>

Table 5 lists, for each page, the total number of images, the number of images successfully extracted, and the time required to complete each extraction; it summarizes the results of the experiments for the three approaches, DOM, JSON, and WEIDJ. The first column contains the page number (url), which can be cross-referenced with Table 4. The "Total images" column shows the number of images on the page; the "Extracted" columns show the number of images successfully extracted; and the "Time (s)" columns give the processing time for image extraction with the DOM, JSON, and WEIDJ approaches.
Table 5. Extraction Results

<table>
<thead>
<tr> <th rowspan="2">url</th> <th rowspan="2">Total images</th> <th colspan="2">DOM</th> <th colspan="2">JSON</th> <th colspan="2">WEIDJ</th> </tr>
<tr> <th>Extracted</th> <th>Time (s)</th> <th>Extracted</th> <th>Time (s)</th> <th>Extracted</th> <th>Time (s)</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>3</td> <td>3</td> <td>1.892</td> <td>3</td> <td>1.8814</td> <td>3</td> <td>0.004</td> </tr>
<tr> <td>2</td> <td>27</td> <td>24</td> <td>21.5508</td> <td>25</td> <td>3.42</td> <td>26</td> <td>0.007</td> </tr>
<tr> <td>3</td> <td>13</td> <td>9</td> <td>13.35</td> <td>9</td> <td>9.58</td> <td>9</td> <td>0.006</td> </tr>
<tr> <td>4</td> <td>15</td> <td>13</td> <td>8.23</td> <td>11</td> <td>6.34</td> <td>11</td> <td>0.006</td> </tr>
<tr> <td>5</td> <td>22</td> <td>0</td> <td>59.63</td> <td>22</td> <td>11.7421</td> <td>22</td> <td>0.004</td> </tr>
</tbody>
</table>

An experiment was carried out to extract images using the different approaches. This experiment is important for identifying the characteristics of DOM and JSON, such as the ability to extract images and the time taken for the extraction process. Figures 12 and 13 show the performance of image extraction using the WEIDJ approach. The extraction of semi-structured image data involves five web pages with different structures; the figures show the number of images extracted and the time taken for extraction. The graphs show significant differences in image extraction: the time taken for image extraction using the WEIDJ approach is shorter than with both other approaches, DOM and JSON.

Figure 12. Image extraction using the WEIDJ approach

Figure 13. Time performance using the WEIDJ approach

Figure 14 compares the time performance of the DOM, JSON, and WEIDJ approaches. The graph plots, for each web address (represented by the url numbers of Table 4), the time required for the extraction process in seconds; it shows that WEIDJ image extraction is faster than both JSON and DOM.

5. CONCLUSION

In this work, we presented a new approach that studies how JSON and the DOM can be composed together in web extraction applications. The experimental findings show that extraction using DOM and JSON can be done efficiently, which indicates the efficiency of the extraction process. Complementary to this, we intend to combine both approaches to get the best performance. A wrapper has been developed based on the proposed model, WEIDJ. In this paper, we experimented with extracting images from single web pages using three approaches: DOM, JSON, and WEIDJ. Finally, the selected images can be saved in a single multimedia database, from which the user can later query all saved images. WEIDJ is proposed to make the extraction process as accurate and efficient as possible; in addition, the model leads to improved time and space performance.

REFERENCES
{"Source-Url": "http://www.iaescore.com/journals/index.php/IJEECS/article/download/10513/8067", "len_cl100k_base": 7203, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 32674, "total-output-tokens": 9383, "length": "2e12", "weborganizer": {"__label__adult": 0.0003614425659179687, "__label__art_design": 0.0008568763732910156, "__label__crime_law": 0.000576019287109375, "__label__education_jobs": 0.0025348663330078125, "__label__entertainment": 0.00019562244415283203, "__label__fashion_beauty": 0.0002332925796508789, "__label__finance_business": 0.0004322528839111328, "__label__food_dining": 0.0004305839538574219, "__label__games": 0.0007524490356445312, "__label__hardware": 0.0014886856079101562, "__label__health": 0.0007944107055664062, "__label__history": 0.0005521774291992188, "__label__home_hobbies": 0.0001538991928100586, "__label__industrial": 0.0005817413330078125, "__label__literature": 0.0006361007690429688, "__label__politics": 0.0002903938293457031, "__label__religion": 0.0004935264587402344, "__label__science_tech": 0.32275390625, "__label__social_life": 0.00019419193267822263, "__label__software": 0.06182861328125, "__label__software_dev": 0.60302734375, "__label__sports_fitness": 0.00019037723541259768, "__label__transportation": 0.000446319580078125, "__label__travel": 0.00023055076599121096}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39991, 0.02701]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39991, 0.50611]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39991, 0.88473]], "google_gemma-3-12b-it_contains_pii": [[0, 3150, false], [3150, 6966, null], [6966, 11591, null], [11591, 16211, null], [16211, 17561, null], [17561, 20963, null], [20963, 23694, null], [23694, 26906, null], [26906, 28812, null], [28812, 32656, null], [32656, 34376, null], [34376, 39991, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3150, true], [3150, 6966, null], [6966, 11591, null], [11591, 16211, null], [16211, 17561, null], [17561, 20963, null], [20963, 23694, null], [23694, 26906, null], [26906, 28812, null], [28812, 32656, null], [32656, 34376, null], [34376, 39991, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39991, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39991, null]], "pdf_page_numbers": [[0, 3150, 1], [3150, 6966, 2], [6966, 11591, 3], [11591, 16211, 4], [16211, 17561, 5], [17561, 20963, 6], [20963, 23694, 7], [23694, 26906, 8], [26906, 28812, 9], [28812, 32656, 10], [32656, 34376, 11], [34376, 39991, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39991, 0.26667]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
a407bbb8f73f2ae79578f77f8dc44415505527bc
Developing and operating time critical applications in clouds: the state of the art and the SWITCH approach

Zhiming Zhao(a)*, Paul Martin(a), Junchao Wang(a), Ari Taal(a), Andrew Jones(b), Ian Taylor(b), Vlado Stankovski(c), Ignacio Garcia Vega(d), George Suciu(e), Alexandre Ulisses(f), Cees de Laat(a)

(a) University of Amsterdam, Science Park 904, Amsterdam, 1098XH, the Netherlands (*contact: z.zhao@uva.nl)
(b) Cardiff University, Queen's Buildings, 5 The Parade, Cardiff CF24 3AA, United Kingdom
(c) University of Ljubljana, Slovenia
(d) Wellness Telecom SL, Spain
(e) BEIA Consult International SRL, Romania
(f) MOG Technologies SA, Portugal

Published in Procedia Computer Science (HOLACONF - Cloud Forward: From Distributed to Complete Computing). DOI: 10.1016/j.procs.2015.09.220

Abstract

Cloud environments can provide virtualized, elastic, controllable and high quality on-demand services for supporting complex distributed applications. However, the engineering methods and software tools used for developing, deploying and executing classical time critical applications do not, as yet, account for the programmability and controllability provided by clouds, and so time critical applications cannot yet benefit from the full potential of cloud technology. This paper reviews the state of the art of technologies involved in developing time critical cloud applications, and presents the approach of a recently funded EU H2020 project: the Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications (SWITCH). SWITCH aims to improve the existing development and execution model of time critical applications by introducing a novel conceptual model—the application-infrastructure co-programming and control model—in which application QoS and QoE, together with the programmability and controllability of cloud environments, is included in the complete application lifecycle.

© 2015 The Authors. Published by Elsevier B.V. Peer-review under responsibility of the Institute of Communication and Computer Systems.

Keywords: Time critical applications, Cloud, quality of user experience, infrastructure programming, self-adaptable system

1. Introduction

Many time critical applications (applications that must respond immediately to time-sensitive events over a prolonged period) often have very high business value (e.g. on-demand business collaboration platforms) or social impact (e.g. disaster early warning systems). These applications demand a high standard of Quality of Service (QoS) (e.g. tsunami emergency response time) or Quality of Experience (QoE) (e.g. smooth delivery of ultra-high definition audio and video for live events), but are very difficult to develop and operate because of their distributed nature and the high requirements they impose on the runtime environment—in particular the sophisticated optimization mechanisms needed to develop and integrate system components that must interact seamlessly with one another. Cloud environments are capable of providing virtualized, elastic, controllable and high quality on-demand services for supporting these kinds of complex distributed application. Indeed, many cloud providers already provide many of the technologies needed to develop and deploy these applications.
However, what time-critical applications still need from the cloud is the ability to control the selection and configuration of infrastructural components in response to changing requirements and environmental pressures. Unfortunately, current cloud environments lack the tools and application programming interfaces that would allow developers to exert such control over the underlying infrastructure in an intelligent, semi-autonomous manner. This paper reviews the state of the art of the technologies involved in developing time critical cloud applications, and presents the approach of a recently funded EU H2020 project: the Software Workbench for Interactive, Time-Critical and Highly self-adaptive cloud applications (SWITCH). First we analyse the requirements of time critical cloud applications via three use cases, and then review the state of the art of supporting technologies for such applications. Afterwards we discuss the design of the software workbench to be developed in SWITCH.

2. Requirements and State of the art

The development and operation of time critical applications in clouds are both very difficult, because such applications have very high requirements for system performance and service quality. In this section, we first use some examples to discuss the general requirements for developing, deploying and operating time critical applications in clouds, and then, after reviewing the technological state of the art, identify the key technical gaps that must be bridged.

2.1. Time critical applications and requirements

The development of time critical applications faces multiple challenges, which can be seen in several use cases, a few of which are described below.

**Use case 1—A collaborative real-time business communication platform.** Real-time communication plays an increasingly important role in many business applications, whether for videoconferencing, establishing a cooperative working environment for geographically dispersed colleagues, or performing remote diagnosis. However, renting very high bandwidth or private connection links is not affordable for many business users. The Web Real-Time-Communications (WebRTC)* project enables real-time communications directly in a web browser, but is limited when handling many-to-many situations—as the number of peers increases, so do the resources needed. When using a mixing architecture, this resource cost is concentrated on the server side, where all the video and audio data mixing is done. An ideal real-time business communication platform should be 1) very scalable in the number of peers, 2) very high quality in regard to its communication service, and 3) competitive in cost.

**Use case 2—An elastic disaster early warning system.** Early warning for natural disasters is an important challenge for many countries. An early warning system often collects data from real-time sensors, processes the information using tools such as predictive simulation, and provides warning services or interactive facilities that allow the public to obtain more information. The implementation of this kind of system faces several challenges, as the system must: 1) collect and process the sensor data in nearly real time, 2) detect and respond to urgent events very rapidly, 3) predict the potential increase of load on the warning system as users increase, 4) operate reliably and robustly throughout its lifetime, and 5) scale with greater deployment of sensors.
**Use case 3—A cloud studio for directing and broadcasting live events.** In a live event, like a football match or a music concert, the broadcaster or production company has to deploy a large number of personnel and many items of equipment to fully cover it. Multiple cameras are placed around the event venue to cover all the different angles that the director considers relevant. The cameras are then connected through dedicated links to an OB (outdoor broadcast) van, which contains the equipment an operator uses to control and mix the signal before sending it to the head office. Each camera is usually associated with an SDI (Serial Digital Interface) link that records content at 1.485 Gbit/s (HD-SDI 1080i and 720p), and generally more than 3 cameras are present at a typical event. Furthermore, other cameras, each with characteristics similar to the ones recording the actual event, are used for interviews with the players and the bands. By virtualising in the cloud the basic components that an OB van may contain (typically the video switches), directors and producers could interact with the different video sources remotely via the cloud environment instead of physically staying in the OB van. To realise this scenario, however, not only does the network delivering the different video streams need to be of high quality, but the connectivity between streams also needs to be fully reconfigurable, based on different broadcast program scenarios. Due to the malleable nature of clouds, the network also has to be continually monitored, with changes in state responded to immediately. In the meantime, the video material should also be archived and streamed to users who want to watch it afterwards.

Several requirements can be enumerated from the above scenarios:

1) **System-level performance requirements**: there are constraints on quality of service that must be satisfied to deliver acceptable performance, regarding for instance latency in real-time collaborative activities in use case 1, emergency response time in use case 2, and delivery quality of high definition television in use case 3.

2) **Verifiability**: the programmer must find a way to prove that the application will actually achieve the desired QoS. For example, in use case 1, the load of the potential business activities and the number of real-time data switching services to be deployed in the cloud should be verified before activities are delivered to customers.

3) **Integration complexity**: the programmer must also deal with the integration and communication aspects of the application, which is a complex task given the specialised knowledge needed of both the software and infrastructure parameters that can be controlled—for instance, both use cases 2 and 3 involve external hardware that must be able to feed data directly and reliably to application components deployed in the cloud.

4) **Use of virtualized resources**: in order to provide properties such as portability, replicability and availability, as well as reduced operational cost and flexibility of resource utilization, the application must be virtualized and made independent of physical infrastructure.
This is usually perceived as a complex task that can be achieved only by cloud programming experts and necessitates the selection of appropriate cloud platforms and programming models—for example, in use case 1 different parts of the collaboration platform are deployed in their own domains according to their own scaling requirements, in anticipation of the services demanded by clients.

5) **Configuration of the infrastructure**: implementing the QoS constraints necessitates knowledge of the underlying infrastructure, and furthermore of the configuration of components on the network. Hence there is a need for a common, reusable interface for programming the infrastructure (e.g. software-defined networks and infrastructures)—for example, in use case 3 there is a need to monitor the QoS of the streaming service and respond to changes immediately, requiring an understanding of how to respond to quality issues programmatically.

6) **Data intensive communication**: there is a need to collect and store large volumes of data, and decisions must be taken on where and how to process data in order to cope with problems of data volume, variety, velocity and veracity. There is therefore a need to define virtualized storage and communication configurations based on functional descriptions, and to characterise data throughput and latency between application components. In situations where the supply of data can change dramatically, it is also necessary to support elasticity and modify the configuration accordingly—for example in use case 2, where an event can create a surge of data from existing sensor deployments.

7) **Adaptability for quality-on-demand**: applications increasingly need to be configured to cope with quality-on-demand in the runtime environment. For example, when live events are being broadcast (as in use case 3), a follower might want the best possible video quality when watching particular content, while being satisfied with lower quality when watching something else.

8) **Adaptability to changing infrastructure**: in many use cases (including the three described above), maintaining QoS without interruption is of particularly high importance. In such cases the programmer must implement code to address changes of infrastructure, for example by designing and implementing the ability to change packet routing if QoS attributes begin to deteriorate.

9) **SLA negotiation**: closely related to the necessity of adapting to changing infrastructure, it is also necessary to dynamically enforce explicit SLAs (Service Level Agreements) with cloud providers at runtime; this is of particular concern where prolonged violations of SLAs can have a measurable business impact, as in use cases 1 and 3.

These requirements cover the entire lifecycle of time critical cloud applications: development, verification, programming, deployment, and runtime control. Programmable infrastructures, such as clouds and software defined networking, provide an elastic and flexible way of configuring and reconfiguring the infrastructure as needed (much more flexible than the traditional approach of configuring individual switches, firewalls, etc., directly). However, it is becoming increasingly apparent that the development of such time critical cloud applications presents complex requirements to programmers. We review existing technologies in three contexts: application development, infrastructure customization and deployment, and runtime QoS control.
2.2. State of the art

Support for time critical applications in cloud environments is still at a very early stage, especially for applications that should be self-adapting in order to maintain the required system performance. We review the state of the art from the three most relevant technical aspects: 1) distributed application programming, 2) advanced infrastructure (in particular programmable infrastructure), and 3) self-adaptive performance control.

Programming distributed applications often depends on the adoption of a specific computing architecture or platform; typical examples include Message Passing Interface (MPI)-based parallel computing on distributed-memory cluster architectures, service platform-based workflow applications, and cloud-based MapReduce processing. Time critical applications may likewise involve MPI or other parallel computing based components for high performance data processing; however, the distributed nature of the system components often makes a time critical application difficult to deploy as a parallel computing program based on technologies like MPI. Quality constraints are used in workflow applications for describing abstract workflows and for creating the runtime enactment, such as by Zhao et al. However, in those applications the creation of the application logic is mostly separated from the customisation of the runtime environment; in particular, a formal model is rarely used to verify time constraints. Co-programming of the application and the runtime environment requires a formal underlying model for both applications and infrastructure, continuous interaction between the programming environment and the execution platform, and the ability to verify that the specified application is consistent and executable with the requested QoS.

1) **Application and infrastructure co-specification** benefits from the use of a formal taxonomy for application and infrastructure profiles, which would assist in developing the necessary mappings from QoS constraints to application/infrastructure characteristics and to data acquired during runtime monitoring. A quality-aware model-driven engineering approach (such as described by Soley), rigorously specified (for example using OCL or a dedicated UML profile like MARTE), could be used by developers to describe applications in a way that can then be used to deploy those applications effectively on the cloud.

2) **Internal coordination** is necessary to provide live feedback between developer and execution platform. For a programming interface exposed via the Web, it is necessary to use lightweight, non-invasive technologies; for example, high-level web frameworks that provide support for RESTful APIs and allow the use of message queuing systems (e.g. based on AMQP) to manage the execution and monitoring of distributed components, in conjunction with lightweight workflow management tools.

3) **Formal reasoning and verification** is needed to maintain consistency between application and infrastructure views. There is a lack of well-defined methodology for translating between system-objective QoS attributes and human-subjective QoE attributes. The use of application profiles (e.g. modelled using MARTE) that can be translated into formal models like timed Petri nets makes it possible to formally verify the satisfiability of certain non-functional requirements at an early stage.
Advanced infrastructures enable quality guaranteed runtime environments for time critical applications, in which we can see two important foci directly related to time critical applications. The first comes from the transfer of High Performance Computing (HPC) environments to virtualized infrastructure such as the cloud, for instance the HPC cloud services in the European Grid Initiative (EGI)\textsuperscript{9}. In those environments, supercomputers and HPC and GPU clusters are deployed in a cloud environment to support tasks with very high performance requirements, but underlying most of them is an MPI-based parallel computing model. The second focus is driven by the emergence of advanced network technologies—not only advanced hardware (e.g. quality-guaranteed optical networks) but also advanced protocols for controlling network behaviour and quality of service, such as Software Defined Networking (SDN)\textsuperscript{10}. These advanced infrastructures give developers opportunities to program and customise qualified runtime environments for time critical applications. However, to do this effectively also requires 1) an effective planning model for defining the virtual runtime environment, 2) advanced network services for optimised communication, 3) agile encapsulation of application components, and 4) technologies for issuing Service Level Agreements (SLAs) and real-time negotiation. In detail:

1) **An effective planning model for defining the virtual runtime environment** involves selecting and matching resources from a resource pool against specific requirements (a sketch of such path planning appears after this list). Semantic modelling and searching technologies are commonly used. The semantic model for describing virtual infrastructure, in particular network topologies, is important in this context. Ghijsen et al.\textsuperscript{8} describe a semantic web based description language for virtual resources and networks known as the Infrastructure and Network Description Language (INDL), based on the Network Markup Language (NML). For search, semantic matching and optimisation technologies such as genetic algorithms and Ant Colony Optimisation (ACO) have been explored extensively\textsuperscript{11}.

2) **Advanced network services** give applications extra opportunities to optimise data communication. Software Defined Networking (SDN) protocols such as OpenFlow\textsuperscript{12} and the Network Service Interface (NSI)\textsuperscript{13} have attracted substantial attention from both industry and academia. Compared to purely network-level protocol optimisation, such as Multipath TCP, these SDN technologies allow applications 1) to customise network connectivity between services by defining suitable flow forwarding tables or by reserving dedicated links, 2) to virtualise the network resources for different partition schemas by tuning the network slice for a given set of computing and storage nodes, and 3) to control the network quality of service by either advance reservation of links or dynamic control of the packet flows. However, the inclusion of these new features in data delivery services is still at a very early stage.

3) **Agile encapsulation of application components** is required to deploy non-monolithic applications effectively on clouds. A number of virtualisation solutions exist, but containers have recently gained an increased profile as a more lightweight and easily extensible mechanism that can be used in lieu of full virtualisation when a single standard operating system kernel can be employed\textsuperscript{14}. Container technologies such as Kubernetes or Rancher, deployed on a dedicated container operating system such as CoreOS or RancherOS, can be employed if supported by adequate scheduling mechanisms to ensure high availability of components\textsuperscript{15}, though more sophisticated monitoring facilities than are currently provided with these technologies may be needed.

4) **Service Level Agreement (SLA) issuing and real-time negotiation technologies** depend heavily on the complexity of the mapping between application requirements and the available resources, and on the matching among quality requirements at different service layers. Most mapping approaches are based on graph mapping using key quality parameters such as execution time; however, the limited association between the application and the infrastructure during application development makes the search over large resource graphs very time consuming. In this context, the main approach currently taken to improve the search procedure is to include different types of heuristics and optimisation technologies, for instance parallelising the search for matching resources and applications\textsuperscript{16}, pre-processing the resource information by clustering it based on the SLA request, and multi-objective optimisation for finding alternative solutions\textsuperscript{17}.
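As an illustration of the planning problem in point 1), the following sketch uses the networkx library with an invented topology and invented latency figures: a candidate virtual infrastructure is modelled as an annotated graph, and a path is selected that minimises latency between two application components.

```python
import networkx as nx

# Hypothetical resource pool: nodes are VMs/switches, edges carry link QoS.
g = nx.Graph()
g.add_edge("vm-ingest", "sw-1", latency_ms=2.0)
g.add_edge("sw-1", "sw-2", latency_ms=5.0)
g.add_edge("sw-1", "sw-3", latency_ms=1.5)
g.add_edge("sw-3", "sw-2", latency_ms=1.5)
g.add_edge("sw-2", "vm-stream", latency_ms=2.0)

# Plan: choose the minimum-latency path between two application components.
path = nx.shortest_path(g, "vm-ingest", "vm-stream", weight="latency_ms")
total = nx.shortest_path_length(g, "vm-ingest", "vm-stream",
                                weight="latency_ms")
print(path, f"total latency {total} ms")

# A planner would reject the plan if it violated the QoS constraint.
BUDGET_MS = 10.0   # invented latency budget
assert total <= BUDGET_MS, "no feasible path within the latency budget"
```

A real planner must of course search over combined compute, storage and network assignments rather than a single path, which is where the semantic matching and heuristic optimisation techniques cited above come in.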
Self-adaptable software architecture and performance control has attracted substantial attention amongst software engineering researchers during the past decade. Self-configuration permits services to reconfigure themselves in response to changes in their environment. The elasticity of clouds makes them a natural context for self-adaptive applications, with the ability to scale both vertically (conscripting more powerful resources) and horizontally (parallelising computation across more nodes). Despite this, current cloud providers have yet to solve all the problems associated with automating scaling strategies and providing runtime configuration features for virtualised applications. The self-adaptation problem is often modelled as a looped procedure of monitor-analyse-plan-take action\textsuperscript{18}.
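The loop is easy to state in code. The following minimal, generic sketch is invented for illustration (it is not taken from any of the systems cited below); its trivial "plan" step chooses between rerouting traffic and scaling out:

```python
import random
import time

LATENCY_LIMIT_MS = 150.0   # invented QoS threshold

def monitor() -> float:
    """Stand-in for a real monitoring probe; returns current latency."""
    return random.uniform(100.0, 200.0)

def analyse(latency: float) -> bool:
    """Detect a constraint violation."""
    return latency > LATENCY_LIMIT_MS

def plan(latency: float) -> str:
    """Trivial policy: mild violations reroute, severe ones scale out."""
    return "reroute" if latency < 1.2 * LATENCY_LIMIT_MS else "scale_out"

def act(action: str) -> None:
    """Stand-in for invoking the infrastructure/application controller."""
    print(f"executing adaptation: {action}")

for _ in range(5):                 # a real controller would loop indefinitely
    latency = monitor()
    if analyse(latency):
        act(plan(latency))
    time.sleep(0.1)                # monitoring interval
```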
1) **Monitoring the system performance** has been extensively studied in the context of distributed systems\textsuperscript{19}. Application quality can be monitored directly by the application itself or via the monitoring of the runtime environment. In distributed applications, such as workflow management systems, QoS monitoring is often associated with logging of the application's execution. A workflow provenance system employs a semantic model to annotate logged events and allows users to query the events based on specific criteria and to reconstruct the execution sequence of the application\textsuperscript{20}. Monitoring of QoS aspects of the infrastructure, such as network quality, has been recognised as an important service, especially in cloud environments; MonPaaS\textsuperscript{21} is an example. These basic monitoring services provide useful support for obtaining quality of service information, but they mainly focus on specific quality attributes of the infrastructure or the application. Semantically harmonising different kinds of monitoring information, and in particular harmonising this information with the application logic, still remains a challenge for self-adaptive systems.

2) **The analysis and planning mechanism** can be implemented using a centralised or a distributed paradigm. SanGA\textsuperscript{22} and ReviveNet\textsuperscript{23} are two examples of the centralised paradigm; the advantages of such an approach are low communication overhead and easy implementation. In the distributed paradigm, the activities in the looped procedure are distributed. Advantages over the centralised approach include the absence of a single point of failure and greater flexibility in monitoring and controlling distributed components, although the communication overhead may have a negative impact. Nallur and Bahsoon\textsuperscript{24} mention time related issues as a basis for self-adaptability (as a cost for selecting services in an auction procedure), but do not directly address the time critical attributes highlighted above.

3) **The self-adaptive mechanism** has been implemented mainly using a so-called architecture style\textsuperscript{25}, in which the internal structure of the application architecture is explicitly modelled and the application can manipulate that structure at runtime based on the results of certain decision-making procedures. Esfahani et al.\textsuperscript{26} proposed a different approach based on application features, using a machine learning mechanism to implement adaptability. These early works focus either only on application control or only on the quality of the service from the provider's point of view; they do not fully explore the adaptability of systems that contain programmable infrastructure.

All these aspects (distributed application programming, advanced infrastructure, and self-adaptive performance control) need to be addressed to realise the deployment of time critical applications in cloud environments, and all have been subject to considerable prior research; but they are all still far from solved problems, especially given recent developments in programmable infrastructures and cloud platforms.

2.3. **Technical gaps**

From the above review, we can identify a number of technical gaps that need to be addressed:

1) Current time-critical application programming models lack consideration of the controllability of the infrastructure; they thus do not exploit the potential benefits offered by the programmable infrastructure provided by cloud environments. In particular, there is a lack of effective description mechanisms for specifying application logic, system quality constraints, and infrastructure controllability. Moreover, tools providing effective support for application-infrastructure co-programming have not yet been produced.

2) There have been many studies on the application of various optimisation mechanisms in selecting resources. However, there is currently no semantically well-modelled mapping between application-level quality of user experience and infrastructure-level QoS attributes. This renders the modelling and negotiation of a Service Level Agreement between applications and resource providers difficult at best.
Although optimisation of network protocols for data communication has been extensively studied in the network and communication domain, the inclusion of network controllability (such as SDN technologies) in applications' data delivery services is still at a very early stage.

3) Although self-adaptive software has already been studied in software engineering and autonomous systems research, there is a lack of effective mechanisms to semantically couple the various kinds of monitoring with the application logic, and to make use of this information in order to adapt system-level performance by controlling both application and infrastructure. The existing self-adaptation intelligence focuses either on controlling the application architecture or on the service quality of the runtime infrastructure; there is a lack of co-controlling mechanisms addressing both aspects.

3. The SWITCH approach

We propose a software workbench, namely the Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications (SWITCH). The overall objective of the SWITCH project is to address the entire life-cycle of time-critical, self-adaptive cloud applications by developing new middleware and front-end tools that enable users to specify their time-critical requirements for an application using an interactive user interface, then deploy their applications and adapt the infrastructure to changing requirements either automatically (using the specified requirements) or by human intervention if desired.

3.1. The basic idea

The SWITCH project addresses the technical gaps described in section 2.3 by providing an interactive and flexible software workbench that, by using discovery tools at the networking level and QoS requirements from the application level, provides the tools necessary to control the lifecycle for rapid development, deployment, management and dynamic reconfiguration of complex distributed time critical cloud applications. In particular, SWITCH provides integrated support for defining, optimising and controlling time critical constraints while programming, testing, deploying and executing the applications. Using a fully responsive web based interface and backend components for coordinating the data flows across the networking infrastructure, the SWITCH workbench can define dynamic application-level mappings for the time critical control rules and strategies to be employed on an application-by-application basis.

At the core of the SWITCH environment is a new development and execution model—an application-infrastructure co-programming and control model—that will be developed for time-critical cloud applications. This model brings together application composition, execution environment customisation, and runtime control, which are normally handled by separate processes, into one optimisation loop based on time critical requirements. In this model: 1) the application logic will be programmed with full regard to QoS/QoE constraints, together with the programmability and controllability of the cloud environment, such that both the application and the virtual runtime environment for executing it can be optimised at the design phase; 2) the virtual runtime environment can be customised for time critical application requirements, and can be provisioned in the cloud with a time critical application oriented Service Level Agreement (SLA); and 3) the application can autonomously adapt its own behaviour and that of the virtual runtime environment when performance drops during runtime.
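A toy illustration of one design-time check that such a co-programming model enables (entirely invented, and far simpler than formal performance reasoning) is that per-component latency budgets declared alongside the application logic must compose into the end-to-end constraint:

```python
# Hypothetical co-specification: application pipeline plus QoS budgets.
spec = {
    "constraint_ms": 200.0,          # end-to-end latency constraint
    "pipeline": [                    # components in data-flow order
        {"name": "capture", "budget_ms": 40.0},
        {"name": "encode",  "budget_ms": 80.0},
        {"name": "switch",  "budget_ms": 30.0},
        {"name": "deliver", "budget_ms": 40.0},
    ],
}

def verify(spec: dict) -> None:
    """Design-time check: component budgets must fit the e2e constraint."""
    total = sum(c["budget_ms"] for c in spec["pipeline"])
    if total > spec["constraint_ms"]:
        raise ValueError(
            f"budgets sum to {total} ms, exceeding {spec['constraint_ms']} ms"
        )
    print(f"OK: {total} ms within the {spec['constraint_ms']} ms constraint")

verify(spec)   # passes: 190 ms <= 200 ms
```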
The SWITCH environment employs formal performance reasoning mechanisms to guide each step in the development, and the tools are delivered to users via three subsystems, which are shown in Figure 1.

3.2. SWITCH interactive development environment

The SWITCH Interactive Development Environment (SIDE) subsystem provides interfaces for all user- and programmer-facing tools, by exposing a collection of graphical interfaces and APIs that tie SWITCH's services to a Web-based environment. SIDE will be engineered using fully responsive HTML5 on the front end, providing interactivity and end-user device portability, and the back-end Web services (hooks) and APIs will be constructed using Python and tools such as Django, Flask, or similar.

3.3. Dynamic real-time infrastructure planning

The Dynamic Real-time Infrastructure Planner (DRIP) subsystem prepares the execution of the applications developed in the SIDE subsystem by 1) semantic modelling and linking of different QoS/QoE attributes, 2) defining an optimal virtual runtime environment, 3) creating a Service Level Agreement with the resource provider, and 4) deploying the platform required by the application.

3.4. Autonomous system adaptation platform

The Autonomous System Adaptation Platform (ASAP) 1) monitors the status of the application and the runtime environment, 2) examines the actual performance of the required quality attributes, 3) autonomously manipulates the application and runtime environment to maintain optimal system level performance against the time critical constraints, and 4) learns from its own decision history to improve its intelligence in making future decisions for autonomous reconfiguration of the application.

3.5. Basic scenario

Figure 1 shows a basic scenario of SWITCH. The application developer begins by composing the application logic and defining the QoS constraints (e.g., maximum latency for state visualisation or maximum sensor event handling delay) that apply to it (step 1). The developer can also provide an abstract network overlay to define the runtime environment (step 2). These activities can be optimised and aided using a knowledge base of successful patterns of applications and infrastructure, which employs a formal reasoning component (step 3). The results are passed from SIDE to DRIP; the developer can also specify requirements such as specific resource providers to be used and the total cost budget for application execution (step 4). DRIP plans the concrete virtual runtime environment of computing, storage and network elements by reasoning about the provided application-level QoS constraints (step 5). DRIP then generates SLAs with the resource provider(s) (step 6), and the resource provider(s) provision the virtual environment accordingly (step 7). After that, DRIP customises the virtual environment and deploys the necessary services for the application (step 8), and unbundles and executes the application (step 9). At runtime, SIDE allows the user to query and visualise the run-time status of the application and run-time environment (step 10a); receive notifications of system status (step 10b); directly manipulate the system execution (step 10c); and expose real-time monitoring information from the database, which is generated by ASAP. The ASAP subsystem can monitor the runtime status of the application and its environment (step 11a), pass this information (via the backend database) to SIDE, diagnose system performance and decide on the control actions needed to restore performance where necessary (step 11b), take action to maintain system performance (step 11c), and learn from history in order to improve the subsequent effectiveness of decision making (step 11d).
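To make the division of labour among the three subsystems concrete, the following schematic Python sketch (stub classes with invented method names, not the actual SWITCH interfaces) traces steps 1 to 9 of the basic scenario:

```python
class SIDE:
    """Developer-facing front end (steps 1-4)."""
    def compose(self, logic: dict, qos: dict) -> dict:
        return {"logic": logic, "qos": qos}                # steps 1-2
    def validate(self, spec: dict) -> dict:
        print("formal reasoner: constraints consistent")   # step 3
        return spec

class DRIP:
    """Infrastructure planner (steps 5-9)."""
    def plan(self, spec: dict) -> dict:
        return {"vms": 3, "links": 2}                      # step 5
    def negotiate_sla(self, plan: dict) -> str:
        return "sla-accepted"                              # steps 6-7
    def deploy(self, spec: dict, plan: dict) -> None:
        print("environment customised, app executing")     # steps 8-9

side, drip = SIDE(), DRIP()
spec = side.validate(side.compose({"app": "video-switch"},
                                  {"latency_ms": 200}))
plan = drip.plan(spec)                                     # step 4 handover
assert drip.negotiate_sla(plan) == "sla-accepted"
drip.deploy(spec, plan)
# ASAP would then run a monitor-analyse-plan-act loop over the deployed
# application (steps 10-11), along the lines sketched in section 2.2.
```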
4. A use case

The functionality of SWITCH can be better illustrated by referring back to one of the use cases of section 2.1. The SWITCH environment can contribute to the use case of the cloud studio for directing and broadcasting live events with all three of its subsystems:

1) The SIDE subsystem can provide the cloud studio developer with an intuitive, interactive interface to 1) describe the application logic among the video camera sources, streaming services, archiving, video switch service and possible video processing services, 2) define quality requirements at the system level or for each individual process, 3) describe an abstract virtual runtime environment for switching videos, and 4) describe the cost and quality requirements for the runtime environment in the cloud and select potential cloud providers. The formal reasoner included in SIDE can then validate the quality constraints the developer has put in the description.

2) The DRIP subsystem will create a concrete virtual runtime environment based on the input received from SIDE, create a Service Level Agreement (SLA) to negotiate with the resource provider, and deploy the virtual video switch services to the virtual runtime infrastructure after it is provisioned.

3) The ASAP subsystem will monitor key quality attributes such as video quality and packet loss during streaming, and dynamically tune the network quality by adding new flows (network paths) or rescheduling the network traffic.

With the support of SWITCH, the development and deployment effort associated with a cloud based production service for live events can be dramatically reduced. Instead of transporting large amounts of equipment and many crew members to the event venue, the broadcaster will only need to rent sufficient network bandwidth to connect the video content to the virtual video switch in the cloud using the SWITCH environment. This use case concentrates on the broadcast scenario, but the technology has much broader business potential, for instance for enterprises to broadcast their own live events, or even for individuals to start up their own personalised TV stations.

5. Summary

In this paper, we introduced the basic idea and approach of a newly funded EU H2020 project called SWITCH. The project started in February 2015 and will last three years. The software of SWITCH will be open source. The consortium draws half of its members from industry and the other half from academic institutions; the development of the SWITCH software will be tested and validated using industrial-strength use cases. Moreover, the industrial partners will actively explore the future market value of the developments.

5.1. Relevant projects and innovations

Related in scope to SWITCH, several other projects have tackled similar problems in cloud service development, provisioning and QoS control, including MOSAIC\textsuperscript{27,28}, MODAClouds\textsuperscript{29}, NetIDE\textsuperscript{30}, DICE\textsuperscript{31}, SeaClouds\textsuperscript{32} and U-QASAR\textsuperscript{33}.
From the software engineering point of view, the FP7 project MODAClouds (MOdel-Driven Approach for design and execution of applications on multiple Clouds) focuses on methodology and tools for designing multi-cloud systems with guaranteed QoS. MODAClouds follows an MDE approach in which applications are defined at three different levels (Cloud-enabled Computation Independent Model, Cloud Provider Independent Model and Cloud Provider-Specific Model) using a meta-model called MODACloudML, allowing developers to specify both functional and non-functional requirements. For validation, the CPIM and CPSM models can be transformed into Palladio Component Models (PCM) so that the software engineer can run quality prediction tools over those models to verify at an early stage whether the non-functional requirements can be satisfied. The project shares a similar mission with SWITCH, namely finding a new approach for managing the lifecycle of cloud applications; however, SWITCH has a specific focus on the programmability of infrastructures, using techniques such as SDN.

The FP7 project NetIDE ("An integrated development environment for portable network applications") developed an Eclipse-like IDE and associated tools for developing network applications based on SDN. NetIDE follows a Model Driven Architecture (MDA)-like approach, where the network applications are defined using a platform-independent model written in a domain-specific language called IRF (NetIDE Interchange Representation Format). The project aims at developing network control plane programs, defining mechanisms to abstract SDN programming independently of the underlying SDN flavour. NetIDE contributes valuable input for SWITCH's development; however, the SWITCH project has a different focus, namely combining infrastructure programming (e.g., at the SDN level) with the verification of time critical quality constraints at the application level.

The FP7 project MOSAIC ("Open Source API and Platform for multiple clouds") focused on provisioning and deploying applications on multiple clouds. Its open source platform offered APIs for developing cloud applications with abstractions of IaaS (Infrastructure-as-a-Service) services that enable the migration of these applications from one cloud to another, and implemented a semantic engine for discovering cloud API components, resources and services driven by functional and application domain concepts, cloud patterns and inference rules. In SWITCH, semantic components will be used to describe not only application service components but also infrastructure properties, such as network topology and QoS. In contrast to MOSAIC, SWITCH highlights the time critical application oriented programming and control model.

Several other projects have also addressed application QoS control in clouds. The EU H2020 DICE ("Developing Data-Intensive Cloud Applications with Iterative Quality Enhancements") project aims to continuously enhance data-intensive cloud applications, with the goal of optimising their service level by making quality-aware MDE accessible to developers of Big Data applications. DICE tools rely on UML meta-models annotated with information about data, data processing and data transfers. The DICE QA tool chain covers simulation, verification and architectural optimisation. These tools are coupled with feedback analysis methods to help the developer iteratively improve the application design, based on monitoring data obtained from test or production environments.
The EU FP7 U-QASAR ("Universal Quality Assurance & Control Services for Internet Applications with Volatile Requirements and Contexts") project created a flexible quality assurance, control and measurement methodology to measure the quality of Internet-related software development projects and their resulting products. The development of SWITCH will carefully review the achievements of these projects when modelling and controlling quality attributes of services and user experiences for time critical applications. SWITCH focuses not only on QoS control and optimisation intelligence at specific stages, but more importantly on putting them in the context of a user-centred software workbench spanning the entire lifecycle of application development, provisioning and operation.

Aside from cloud-specific projects, there are other projects, specifically in the research infrastructure space, that could benefit from SWITCH. The European Strategy Forum on Research Infrastructures (ESFRI) has produced a roadmap for European research infrastructures in various scientific domains; in particular, the ENVRIPLUS project\textsuperscript{55} seeks to provide common services to research infrastructure projects in the Environment cluster. Many research tasks can benefit from the exploitation of cloud technologies, and many of these tasks could be deemed time critical—especially tasks involving the continuous and timely processing of data from large-scale sensor deployments (similar to our use case 2, the disaster early warning system described in section 2.1). The SWITCH workbench could provide a tool for developers of research applications in much the same way as it provides a tool for commercial applications, especially for 'long-tail' researchers who are not members of larger research collectives and do not have privileged access to high-performance computing resources.

5.2. Agenda

The SWITCH environment builds on proven technologies and is therefore capable of being developed to a fully operational standard. The workbench will be developed in six phases over three years: technology review and requirement analysis, system design, technology development, tool integration, validation by use cases, and release. Development will initially be conducted within private clouds, but project results will be pushed into public and federated clouds in order to identify gaps between the current orthodoxy of public cloud provision and the requirements for supporting real-time applications on clouds.

Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreements No 643963 (SWITCH project) and No 654182 (ENVRIPLUS project).
{"Source-Url": "https://pure.uva.nl/ws/files/10838307/Developing_and_operating_time_critical_applications_in_clouds.pdf", "len_cl100k_base": 7972, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 34328, "total-output-tokens": 10813, "length": "2e12", "weborganizer": {"__label__adult": 0.00030803680419921875, "__label__art_design": 0.0004012584686279297, "__label__crime_law": 0.00030541419982910156, "__label__education_jobs": 0.0007982254028320312, "__label__entertainment": 9.673833847045898e-05, "__label__fashion_beauty": 0.0001666545867919922, "__label__finance_business": 0.0005970001220703125, "__label__food_dining": 0.0002956390380859375, "__label__games": 0.0005092620849609375, "__label__hardware": 0.0014791488647460938, "__label__health": 0.0005822181701660156, "__label__history": 0.0003116130828857422, "__label__home_hobbies": 9.644031524658204e-05, "__label__industrial": 0.00042891502380371094, "__label__literature": 0.00028228759765625, "__label__politics": 0.00024056434631347656, "__label__religion": 0.0003883838653564453, "__label__science_tech": 0.08612060546875, "__label__social_life": 9.429454803466796e-05, "__label__software": 0.0182037353515625, "__label__software_dev": 0.88720703125, "__label__sports_fitness": 0.00022494792938232425, "__label__transportation": 0.0005779266357421875, "__label__travel": 0.00021338462829589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 50542, 0.03393]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 50542, 0.23299]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 50542, 0.8854]], "google_gemma-3-12b-it_contains_pii": [[0, 637, false], [637, 3053, null], [3053, 7776, null], [7776, 12768, null], [12768, 17564, null], [17564, 22968, null], [22968, 27918, null], [27918, 32063, null], [32063, 33732, null], [33732, 38023, null], [38023, 43405, null], [43405, 47976, null], [47976, 50542, null]], "google_gemma-3-12b-it_is_public_document": [[0, 637, true], [637, 3053, null], [3053, 7776, null], [7776, 12768, null], [12768, 17564, null], [17564, 22968, null], [22968, 27918, null], [27918, 32063, null], [32063, 33732, null], [33732, 38023, null], [38023, 43405, null], [43405, 47976, null], [47976, 50542, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 50542, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 50542, null]], "pdf_page_numbers": [[0, 637, 1], [637, 3053, 2], [3053, 7776, 3], [7776, 12768, 4], [12768, 17564, 5], [17564, 22968, 6], [22968, 27918, 7], [27918, 32063, 8], [32063, 33732, 9], [33732, 38023, 10], [38023, 43405, 11], [43405, 47976, 12], [47976, 50542, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 50542, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
354de37b868c4f3d221028af3a948145b2f5041f
How validation can help in testing business processes orchestrating web services

Damian Grela\textsuperscript{1*}, Krzysztof Sapiecha\textsuperscript{1†}, Joanna Strug\textsuperscript{1‡}

\textsuperscript{1}Department of Computer Science, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland

Abstract – Validation and testing are important in developing correct and fault-free SOA-based systems. BPEL is a high level language that makes it possible to implement business processes as an orchestration of web services. In general, testing requires many more test scenarios than validation. However, in the case of BPEL processes, which have a very simple and well structured implementation, test scenarios limited to the validation may also be efficient. The paper describes an experiment that aims at answering the question whether or not validation test scenarios are also adequate for testing an implementation of BPEL processes. The experiment employs a Software Fault Injector for BPEL Processes that is able to inject faults while the test scenarios are running. The results of the experiment seem very promising. Hence, it seems that validation tests might give strong support for testing.

## 1 Introduction

Recently, SOA (Service Oriented Architecture) [1] has become the most promising architecture for IT systems. It offers a way of composing systems from loosely coupled and interoperable services. The services are independent business functions made accessible over a network by remote suppliers. A developer of a SOA-based system should only select the most appropriate services and coordinate them into business processes that cover the specification requirements for the system. BPEL (Business Process Execution Language) [2] is a high level language that makes it possible to implement business processes as an orchestration of web services. Orchestration consists in the successive invocation of the web services by a special element of the process, called its coordinator. It leads to a very simple and structured SOA in which only the coordinator and the communication links between the coordinator and the services need to be tested. The correctness of the services may be assumed, as they are provided as ready-to-use components and should be tested by their developers before being shared.

Both validation and testing may be performed with the help of test scenarios. In [3, 4] a method of generating test scenarios for validation of a BPEL process was given. Test scenarios obtained by means of the method cover all functional requirements for the process and provide high validation accuracy [4]. This paper presents a case study that aims at answering the question to what extent such test scenarios are adequate for testing an implementation of the process. To this end, an experiment employing the Software Fault Injector for BPEL Processes (SFIBP) was carried out and the fault coverage for the test scenarios was calculated.

The paper is organised as follows. In Section 2 related work is briefly described. In Section 3 the problem is formulated. Section 4 defines fault coverage for the test scenarios. Section 5 contains a description of a case study. The paper ends with conclusions.
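As a minimal illustration of the orchestration pattern just described (a Python analogue rather than actual BPEL, with invented service names and return values), a coordinator invokes independent services in turn and combines their answers:

```python
# Stub "web services": independent, ready-to-use business functions.
def ticket_service(date: str) -> str:
    return "Yes"

def hotel_service(date: str) -> str:
    return "OK"

def coordinator(date: str) -> str:
    """BPEL-style coordinator: invokes services and combines responses."""
    responses = [ticket_service(date), hotel_service(date)]
    ok = {"Yes", "OK"}
    return "confirmed" if all(r in ok for r in responses) else "rejected"

print(coordinator("2012-06-10"))   # -> confirmed
```

Only the coordinator logic and its calls to the services need testing here; the services themselves are assumed correct.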
## 2 Related work

The problem of testing SOA-based systems is not new, but most researchers have focused on test generation [5, 6, 7, 8, 9, 10, 11, 12]. Their works fall loosely into two categories: developing efficient algorithms for the selection of adequate tests [6, 7, 8, 9] and automation of the selection process [10, 11, 12]. Y. Yuan and Y. Yan [6, 7] proposed graph-based approaches to handle the concurrent activities of BPEL processes, in addition to basic and structured activities. Their approach was extended, combined with other techniques and implemented by several other researchers [8, 9]. M. Palomo-Duarte, A. Garcia-Dominguez, and I. Medina-Bulo based their approaches on traditional white-box testing methods [10, 11, 12] and used formal methods and hybrid approaches along with ActiveBPEL [13] and the BPELUnit [14] test library for generating tests. However, none of these works study the adequacy of the generated tests for both validation and testing of BPEL processes.

The adequacy of tests can be measured with regard to some predefined metrics, or by injecting faults and observing whether they are detected or not [15]. Fault injection is a popular technique that has already been applied in the context of SOA-based systems [16, 17, 18, 19]. The technique is often used for test generation [15]. PUPPET (Pick UP Performance Evaluation Test-bed) [16] is a tool for the automatic generation of test-beds to empirically evaluate the QoS [17] features of a Web Service under development. GENESIS [18] generates executable web services from a description provided by the user and provides an environment in which the services can be tested prior to deployment in a production system. Another fault injection tool, WSInject [19], is a script-driven fault injector that is able to inject interface and communication faults. WSInject works at the SOAP level and intercepts SOAP messages. All of these approaches concern web services or the communication between a BPEL process and web services (i.e. a fault is injected when a web service is invoked). In the case of business processes, various other types of faults (e.g. replacement of input values) may appear. Therefore, SFIBP should be easily configurable to inject a rich variety of faults appearing in this very specific operational environment.

## 3 Problem statement

Validation aims to determine whether or not a software system satisfies its requirements specification [20]. A requirements specification defines, in a formal way, what the system is expected to do. Test scenarios derived from such a specification may be successfully used for validation. In [3] an effective method for the generation of test scenarios for validation of BPEL processes against specification requirements defined in SCR [21] was given. However, a specification should not contain anything that is not of interest to a user. Thus, test scenarios derived from the specification can check all specified requirements, but not necessarily implementation details that are introduced in further stages of development of the system. Therefore, the system should be tested to detect implementation errors. As the generation of tests is usually time consuming, it is of high importance to find out to what extent the validation test scenarios are useful for testing. To this end, an experiment might be performed and the implementation error coverage for the test scenarios calculated. In general, testing requires many more test scenarios than validation. However, in the case of BPEL processes, which have a very simple and well structured implementation, test scenarios limited to the validation may also be efficient.
To measure the coverage of implementation errors by the validation test set, a Software Fault Injector [22] for BPEL Processes will be applied. Implementation errors of a BPEL process will be simulated by injecting faults while the tests are running.

## 4 Faults in the SOA-based systems

In SOA-based systems faults may be caused by two reasons: 1. incorrect interaction between web-services, and 2. incorrect internal logic of the system components (web-services and/or the coordinator). Interaction faults affect communication between different web-services or between the coordinator and the web-services. Internal logic errors are introduced by human developers or production facilities when components of the system are implemented. Eight types of interaction faults and four types of internal logic errors were identified in [23]. Three of them concern systems orchestrating web services. These are the following:

1. **Misbehaving execution flow.** The fault occurs when a programmer invokes an improper web-service\(^1\) (i.e. different from the specified one). Fig. 1 gives an example of an improper web-service invocation error (a) and a fault-free version of the code (b).

Fig. 1. Improper (a) and correct (b) web service invocation.

2. **Incorrect response.** The fault is caused by incorrect processing, within the coordinator, of a correct response of a web-service (other causes, related to incorrect internal logic of a web-service as defined in [23], are not considered due to the assumption of correctness of web-services). Incorrect processing means that: - a response from a wrong output port is used (Fig. 2), - a response is assigned to a wrong variable (Fig. 3), or - a response is not assigned at all (Fig. 4).

\(^1\)The invoked web-service should exist and the invocation should be correct with regard to the specification of the web-service (otherwise such an error will be reported by the compiler).

3. **Parameter incompatibility.** It occurs when a web-service receives, as input data, incorrect arguments or arguments of incorrect types. The following four errors introduced into the implementation of a coordinator cause such a fault: - a different operation of a web-service is invoked (Fig. 5); the operation must belong to the web-service (otherwise such an error will be reported by the compiler), - a wrong input port is used (Fig. 6); the port must be consistent with the specification (otherwise such an error will be reported by the compiler), - a wrong output port is used (Fig. 6), or - a wrong value is assigned to an input port (Fig. 7).

Fig. 5. Different (a) and proper (b) operations of a web-service are invoked. Fig. 6. Wrong (a) and correct (b) input and output ports are used. Fig. 7. Wrong (a) and correct (b) values are assigned to an input port.

The effects of the faults are visible because they make the external behaviour of the coordinator differ from the expected one. The cause-effect table is shown in Fig. 8.

Fig. 8. Implementation errors, interaction and development faults and their effects.

All other faults defined in [23] are not relevant for this work. These faults are either related to the physical layer or caused by providers of web-services (incorrectness of web-services or interaction between web-services).
## 5 Case study

The goal of the case study is to evaluate the adequacy of validation test scenarios for testing BPEL processes. The test scenarios are evaluated on the basis of their fault coverage, calculated with respect to the faults generated by the SFIBP. The SFIBP generates the following three types of faults: 1. replacing web-service output parameters (OP), 2. replacing values of web-service input parameters (IP), 3. replacing the requested web-service with another one (WS). The faults generated by SFIBP give the same observable effects as those described in Section 4, but their injection does not require the implementation of the coordinator to be changed.

The fault coverage for a set of test scenarios (FC) is expressed as the percentage of detected faults among all injected faults:

\[ FC = \frac{F_D}{F_I} \cdot 100\% , \]

where $F_D$ is the number of detected faults and $F_I$ is the total number of injected faults. As the faults are artificially generated and injected, their total number is known. However, it is not possible to determine the number and types of all errors that might be the real source of the faults. Nevertheless, this is not a shortcoming of the approach, because only the coverage is of real significance.

The subsequent subsections briefly describe the SFIBP that was used in the experiment to generate and inject faults (Section 5.1), an example system and the test scenarios generated for it (Section 5.2), and the experiment and its results (Section 5.3).

### 5.1 Software Fault Injector for the BPEL Processes

SFIBP is an execution-based injector [15], which is able to inject faults into BPEL processes while test scenarios are running. The SFIBP has been implemented as a special local service that is invoked instead of the proper web-service. Such an approach helps reduce the costs of the experiment, as the faults are injected without changing the implementation of the coordinator. A configuration file produced by the SFIBP defines three parameters of the proper web-services: - identifiers of all methods provided by the web-services (ID), - names of the methods, - the number and names of the parameters of the methods. It also includes predefined values of input and output parameters, IDs of alternative web-services that are used to generate faults, and the probability that a fault will be injected. Information about the injected faults is stored in a log file.

### 5.2 Football Reservation System

The Football Reservation System (FRS) is a simple system allowing its users to book tickets for football games, hotels to stay in during the games, and plane or train tickets to arrive at the games. The system was implemented as a BPEL process orchestrating five web-services. Each of the services is accessible on a different server and the whole process of reservation is coordinated by a central coordinator (Fig. 9). Short descriptions of the web-services and their input and output parameters are given in Table 1. The types of the parameters are placed in brackets next to the parameter names. The set of test scenarios generated for the system consists of 4 test scenarios having between two and five input/output events each; the total number of events is 16. The test scenarios were generated by means of the checking path method presented in [3]. Their usage provided high validation accuracy for the system.

Fig. 9. Service orchestration for a Football Reservation process.

Table 1

<table>
<thead>
<tr>
<th>web-service ID</th>
<th>description</th>
<th>Parameters</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client</td>
<td>retrieves data from the client and sends information about the order</td>
<td>input: Date [String] output: Result [String]</td>
</tr>
<tr>
<td>TicketRS</td>
<td>checks the availability of a football ticket at the given date</td>
<td>input: Date [String] output: Result [String]</td>
</tr>
<tr>
<td>HotelBS</td>
<td>checks the availability of a hotel room at the given date</td>
<td>input: Date [String] output: Result [String]</td>
</tr>
<tr>
<td>TrainTR</td>
<td>checks the availability of a train at the given date</td>
<td>input: Date [String] output: Result [String]</td>
</tr>
<tr>
<td>PlaneTR</td>
<td>checks the availability of a plane at the given date</td>
<td>input: Date [String] output: Result [String]</td>
</tr>
</tbody>
</table>
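The following sketch mimics, in plain Python, the role the SFIBP plays around a coordinator; the proxy design and all names are invented for illustration, since the real tool works at the web-service level. The proxy randomly applies any of the three fault types (IP, OP, WS) to a service call, and fault coverage is computed exactly as in the FC formula above:

```python
import random

def ticket_service(date: str) -> str:
    return "Yes"

def hotel_service(date: str) -> str:
    return "OK"

def inject(service, alternatives, p=0.33):
    """Wrap a service so each call may suffer an IP, OP or WS fault."""
    def faulty(date: str) -> str:
        if random.random() < p:                      # IP: corrupt the input
            date = "1970-01-01"
        target = service
        if random.random() < p:                      # WS: wrong service
            target = random.choice(alternatives)
        out = target(date)
        if random.random() < p:                      # OP: corrupt the output
            out = "No"
        return out
    return faulty

faulty_ticket = inject(ticket_service, [hotel_service])
print(faulty_ticket("2012-06-10"))                   # may be Yes, OK or No

def fault_coverage(detected: int, injected: int) -> float:
    """FC = F_D / F_I * 100%."""
    return detected / injected * 100.0

print(f"FC = {fault_coverage(295, 304):.0f}%")       # 97%, cf. Table 3
```

Detection is not modelled here; in the experiment a fault counts as detected when a test scenario's observed output differs from the expected one.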
### 5.3 The experiment

The experiment consisted of: 1. implementing a fault-free BPEL process for the FRS and generating validation test scenarios, 2. configuring the SFIBP, 3. starting the SFIBP and running the BPEL process with the test scenarios, 4. comparing the outputs generated by the BPEL process with the expected ones given by the test scenarios, 5. saving the results, 6. calculating the fault coverage. Steps 3, 4 and 5 were repeated 1000 times. At each iteration, randomly generated faults were injected into the BPEL process.

Table 2 shows the settings for all web-services of the FRS. The first row of the table shows the IDs of the web-services. The next two rows show the values of output and input parameters that are used to replace the proper ones when faults are injected. The IDs of web-services that are invoked instead of the proper ones are shown in the last row. The probability that a fault will occur was set to 33% for all faults.

<table>
<thead>
<tr>
<th>Web-service</th>
<th>TicketRS</th>
<th>HotelBS</th>
<th>TrainTR</th>
<th>PlaneTR</th>
</tr>
</thead>
<tbody>
<tr>
<td>output parameter</td>
<td>„Yes”, „No”</td>
<td>„OK”, „No”</td>
<td>„Success”, „Failure”</td>
<td>„True”, „False”</td>
</tr>
<tr>
<td>alternative web-services</td>
<td>„HotelBS”, „PlaneTR”, „TrainTR”</td>
<td>„TicketRS”, „PlaneTR”, „TrainTR”</td>
<td>„TicketRS”, „HotelBS”, „PlaneTR”</td>
<td>„TicketRS”, „HotelBS”, „TrainTR”</td>
</tr>
</tbody>
</table>

The outputs generated by TicketRS, HotelBS, TrainTR and PlaneTR depend on the interval between the date of reservation and the date of the football match. If the interval is equal to or longer than the assumed one, then the respective web-service generates a positive answer; otherwise the answer is negative. The intervals were set as follows: 15 days for TicketRS, 5 days for HotelBS, 1 day for TrainTR and 30 days for PlaneTR. These rules were introduced into the implementation of the web-services. In the experiment the reservation date is the actual date (the day on which the process was invoked) and the date of the football match is the date specified by the user during the FRS invocation.

During the experiment the SFIBP could generate various combinations of the three types of faults (Section 5) or not introduce any fault. Since each of the three fault types is either injected or not, this gives $2^3 = 8$ different fault configurations for each of the web-services, and about 4000 for the whole system ($8^4 = 4096$ over the four web-services subject to injection).
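The configuration count can be checked directly; in the following small sketch the service and fault-type names are taken from the tables above, while the enumeration itself is invented for illustration:

```python
from itertools import product

FAULT_TYPES = ("IP", "OP", "WS")
SERVICES = ("TicketRS", "HotelBS", "TrainTR", "PlaneTR")

# Per service: each fault type independently on/off -> 2^3 = 8 configurations.
per_service = list(product([False, True], repeat=len(FAULT_TYPES)))
assert len(per_service) == 8

# Whole system: one configuration per service -> 8^4 = 4096 (~4000).
system = list(product(per_service, repeat=len(SERVICES)))
assert len(system) == 4096
print(f"{len(per_service)} per service, {len(system)} for the whole system")
```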
At the end of the experiment its results were analysed and the fault coverage for the test scenarios was calculated. Table 3 summarises the results, reporting for each of the web-services the total numbers of faults injected and detected, grouped by fault type.

Table 3

<table>
<thead>
<tr>
<th rowspan="2">Faults</th>
<th colspan="3">TicketRS</th>
<th colspan="3">HotelBS</th>
<th colspan="3">TrainTR</th>
<th colspan="3">PlaneTR</th>
</tr>
<tr>
<th>IP</th><th>OP</th><th>WS</th>
<th>IP</th><th>OP</th><th>WS</th>
<th>IP</th><th>OP</th><th>WS</th>
<th>IP</th><th>OP</th><th>WS</th>
</tr>
</thead>
<tbody>
<tr>
<td>injected</td>
<td>304</td><td>212</td><td>348</td>
<td>144</td><td>148</td><td>157</td>
<td>134</td><td>94</td><td>139</td>
<td>32</td><td>21</td><td>34</td>
</tr>
<tr>
<td>detected</td>
<td>295</td><td>208</td><td>348</td>
<td>140</td><td>145</td><td>157</td>
<td>132</td><td>92</td><td>139</td>
<td>31</td><td>20</td><td>34</td>
</tr>
<tr>
<td>FC</td>
<td>97%</td><td>98%</td><td>100%</td>
<td>97%</td><td>98%</td><td>100%</td>
<td>98%</td><td>98%</td><td>100%</td>
<td>97%</td><td>95%</td><td>100%</td>
</tr>
</tbody>
</table>

Due to the nature of the example, the majority of the injected faults are related to the first web-service (TicketRS) and the minority of them to the last web-service (PlaneTR). Almost all injected faults were detected by the test scenarios. The average fault coverage calculated from the results of the experiments was 98%.

## 6 Conclusions

The paper describes a statistical experiment carried out to evaluate the test scenarios generated for validation of BPEL processes in the context of testing the processes. Test generation is a time consuming activity, so the possibility of having one set of test scenarios providing accurate results for both validation and testing was worth investigating. The experiment was performed on a small example orchestrating five web-services. For this system, the SFIBP was able to generate three types of faults, giving in total about 4000 different fault configurations. For more complex systems the number of different fault configurations may be much higher than for the FRS. That is why statistical rather than exhaustive testing was performed. It illustrates a general approach to the problem.

The experimental results seem very promising. The calculated fault coverage shows that almost all injected faults (98%) were detected by the test scenarios. The results confirm the earlier assumption that, in the case of BPEL processes, validation test scenarios may be adequate also when they are used for testing. Hence, it seems that validation tests might give strong support for testing. However, the experiment was carried out only on one simple system and focused on faults that only simulate implementation errors. More experiments are needed in order to make the conclusions more general. This will be one of the main goals of our further research.
{"Source-Url": "https://journals.umcs.pl/ai/article/download/3372/2566", "len_cl100k_base": 4453, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 24761, "total-output-tokens": 6213, "length": "2e12", "weborganizer": {"__label__adult": 0.0003113746643066406, "__label__art_design": 0.0003063678741455078, "__label__crime_law": 0.0003323554992675781, "__label__education_jobs": 0.0006389617919921875, "__label__entertainment": 6.61611557006836e-05, "__label__fashion_beauty": 0.0001500844955444336, "__label__finance_business": 0.0002033710479736328, "__label__food_dining": 0.00032639503479003906, "__label__games": 0.0004911422729492188, "__label__hardware": 0.0008225440979003906, "__label__health": 0.0004835128784179687, "__label__history": 0.0001850128173828125, "__label__home_hobbies": 5.78761100769043e-05, "__label__industrial": 0.0003371238708496094, "__label__literature": 0.00028061866760253906, "__label__politics": 0.00020003318786621096, "__label__religion": 0.000377655029296875, "__label__science_tech": 0.0274658203125, "__label__social_life": 9.119510650634766e-05, "__label__software": 0.0090789794921875, "__label__software_dev": 0.95703125, "__label__sports_fitness": 0.0002269744873046875, "__label__transportation": 0.000377655029296875, "__label__travel": 0.0001729726791381836}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24036, 0.03693]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24036, 0.53862]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24036, 0.86654]], "google_gemma-3-12b-it_contains_pii": [[0, 1897, false], [1897, 5013, null], [5013, 7857, null], [7857, 8873, null], [8873, 9115, null], [9115, 9968, null], [9968, 11329, null], [11329, 13857, null], [13857, 15009, null], [15009, 17685, null], [17685, 19973, null], [19973, 23581, null], [23581, 24036, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1897, true], [1897, 5013, null], [5013, 7857, null], [7857, 8873, null], [8873, 9115, null], [9115, 9968, null], [9968, 11329, null], [11329, 13857, null], [13857, 15009, null], [15009, 17685, null], [17685, 19973, null], [19973, 23581, null], [23581, 24036, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24036, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24036, null]], "pdf_page_numbers": [[0, 1897, 1], [1897, 5013, 2], [5013, 7857, 3], [7857, 8873, 4], [8873, 9115, 5], [9115, 9968, 6], [9968, 11329, 7], [11329, 13857, 8], [13857, 15009, 9], [15009, 17685, 10], [17685, 19973, 11], [19973, 23581, 12], [23581, 24036, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24036, 
0.13669]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
dc1ebc8933c4a3091673ffe3638e58b613af07f4
[REMOVED]
{"len_cl100k_base": 7621, "olmocr-version": "0.1.49", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 29619, "total-output-tokens": 9275, "length": "2e12", "weborganizer": {"__label__adult": 0.0005397796630859375, "__label__art_design": 0.0010404586791992188, "__label__crime_law": 0.0007915496826171875, "__label__education_jobs": 0.003345489501953125, "__label__entertainment": 0.00018846988677978516, "__label__fashion_beauty": 0.0002777576446533203, "__label__finance_business": 0.0006442070007324219, "__label__food_dining": 0.0006513595581054688, "__label__games": 0.0019474029541015625, "__label__hardware": 0.0025157928466796875, "__label__health": 0.00074005126953125, "__label__history": 0.000644683837890625, "__label__home_hobbies": 0.00028014183044433594, "__label__industrial": 0.0014286041259765625, "__label__literature": 0.000927448272705078, "__label__politics": 0.0005321502685546875, "__label__religion": 0.0006022453308105469, "__label__science_tech": 0.4248046875, "__label__social_life": 0.0002372264862060547, "__label__software": 0.0231781005859375, "__label__software_dev": 0.52685546875, "__label__sports_fitness": 0.0004835128784179687, "__label__transportation": 0.007244110107421875, "__label__travel": 0.00032520294189453125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 42888, 0.01916]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 42888, 0.77985]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 42888, 0.9234]], "google_gemma-3-12b-it_contains_pii": [[0, 2825, false], [2825, 7305, null], [7305, 12545, null], [12545, 17257, null], [17257, 19982, null], [19982, 23497, null], [23497, 30224, null], [30224, 32891, null], [32891, 36906, null], [36906, 42888, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2825, true], [2825, 7305, null], [7305, 12545, null], [12545, 17257, null], [17257, 19982, null], [19982, 23497, null], [23497, 30224, null], [30224, 32891, null], [32891, 36906, null], [36906, 42888, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 42888, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 42888, null]], "pdf_page_numbers": [[0, 2825, 1], [2825, 7305, 2], [7305, 12545, 3], [12545, 17257, 4], [17257, 19982, 5], [19982, 23497, 6], [23497, 30224, 7], [30224, 32891, 8], [32891, 36906, 9], [36906, 42888, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 42888, 0.0604]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
c80d1ddefbc8e13b2eb16e084261e1c17e816740
A self-stabilizing algorithm for finding weighted centroid in trees

Halina Bielak\(^{1,*}\), Michał Pańczyk\(^{2,\dagger}\)

\(^1\)Institute of Mathematics, Maria Curie-Sklodowska University, pl. M. Curie-Sklodowskiej 1, 20-031 Lublin, Poland.
\(^2\)Institute of Computer Science, Maria Curie-Sklodowska University, pl. M. Curie-Sklodowskiej 1, 20-031 Lublin, Poland.

\(^*\)hbiel@hektor.umcs.lublin.pl, \(^\dagger\)mjpanczyk@gmail.com

Abstract – In this paper we present a modification of the Blair and Manne algorithm for finding the center of a tree network in a distributed, self-stabilizing environment. Their algorithm finds an \(\frac{n}{2}\)-separator of a tree. Our algorithm finds a *weighted centroid*, which is a direct generalization of the former for tree networks with positive weights on the nodes. The time complexity of both algorithms is \(O(n^2)\), where \(n\) is the number of nodes in the network.

1 Introduction

The notion of self-stabilizing algorithms on distributed systems was introduced by Dijkstra [1] in 1974. A survey of the topic can be found in the paper by Schneider [2], and more details in the book by Dolev [3]. The notions from graph theory not defined in this paper can be found in the book by Harary [4].

A distributed self-stabilizing system consists of a set of processes (computing nodes) and communication links between them. Every node in the system runs the same algorithm and can change the state of its local variables. These variables determine the *local state* of a node. Nodes can observe the state of variables on themselves and on their neighbour nodes. The states of all the nodes in the system determine the *global state*. Every self-stabilizing algorithm must have a defined class of global states called *legitimate states*, in which the system is stable and no action can be taken by the algorithm itself. Every other global state is called *illegitimate*, and for the algorithm to be correct there has to be some possibility to make a move whenever the state is illegitimate. The aim of a self-stabilizing algorithm is to bring the whole system to a legitimate (desirable) state after some alteration (from outside the system) of variables in the nodes, or after the system has been started.

The tree leader (median) notions were first studied by Zelinka [5] in 1967. Self-stabilizing leader election in unweighted trees was presented by Bruell et al. in [6] and improved by Blair and Manne in [7]. Recently Chepoi et al. [8] have presented a self-stabilizing algorithm for the so-called partial rectangular grids. Their algorithm uses a generalization of the algorithm from [6], which also computes the median of a tree, but in a way different from that in our paper.

In this paper we present an algorithm for electing a centroid node in weighted trees. Every node in the system has its weight. We are going to find the node whose removal would split the tree into connected components, each with a sum of weights not greater than half the weight of the whole tree. Of course, for any node there are as many such components as the degree of the node. In other words, we are looking for a weighted centroid of the tree. Fig. 1 presents a weighted tree of order 7.

2 Computational Model

We consider a self-stabilizing system modelled by a finite, undirected graph $G = (V, E)$. Let the number of vertices be $n = |V|$ and the number of edges $e = |E|$. In this paper we consider trees, so the number $e$ of edges in the graph is $n - 1$.
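To make the definition concrete before the distributed treatment, the following is a minimal centralized sketch in Python (ours, purely illustrative; the adjacency-list encoding, the function names and the example tree are assumptions, not the paper's code). It checks the weighted-centroid property directly from the definition.

```python
# Centralized check of the weighted-centroid property (illustrative only;
# the paper's algorithm computes this distributively and self-stabilizingly).
def component_weight(adj, weight, start, blocked):
    """Weight of the connected component containing `start`
    in the tree with the node `blocked` removed."""
    total, stack, seen = 0, [start], {blocked, start}
    while stack:
        u = stack.pop()
        total += weight[u]
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return total

def is_weighted_centroid(adj, weight, i):
    """True iff removing node i leaves only components of weight <= W/2."""
    W = sum(weight.values())
    return all(component_weight(adj, weight, j, i) <= W / 2 for j in adj[i])

# A small example tree (not the one from Fig. 1): a path 1 - 2 - 3
# with weights 1, 1, 3; the heavy leaf 3 is the unique weighted centroid.
adj = {1: [2], 2: [1, 3], 3: [2]}
weight = {1: 1, 2: 1, 3: 3}
print([i for i in adj if is_weighted_centroid(adj, weight, i)])   # -> [3]
```

The distributed algorithm of Section 3 computes the same component weights without any global view, using only variables visible at the neighbours.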
The set of all neighbours of the node $i$ is $N(i)$. We think of every process as a node in the tree, whereas the edges are the communication links between them. There are some variables in each node; their names and types are set during the design of an algorithm. Every node can also look up the state of the variables at its neighbours. Each node runs the same algorithm. The algorithm consists of a set of rules. A rule has the form

```
: label
If guard then assignment instructions.
```

A *guard* is a logical predicate which can refer to variables in the node itself and in its neighbours. We say that a rule is *active* if its guard evaluates to true. A node is *active* if it contains any active rule. If there is no active node in the graph, we say that the system is *stabilized*.

A self-stabilizing system also contains a *scheduler*. Its task is to choose one process from the set of active processes and to trigger an active rule in it. We call such an action a *move*. In this article the scheduler is assumed to be distributed and adversarial, so the order of activating the nodes is nondeterministic. As a consequence, when describing the pessimistic complexity of the algorithm (the number of moves), we must take the worst-case scenario of triggering actions in particular nodes.

3 Finding Centroid Node Algorithm

Blair and Manne [7] presented a fast algorithm for leader election in unweighted trees. Now we present an algorithm for finding a centroid in weighted trees. The idea is quite similar to the original algorithm without weights.

Let us have an unrooted tree. Every node \( i \) of the tree has an assigned weight \( w_i \) which is a positive natural number. A *weighted centroid* is a node whose removal splits the tree into connected components with the sum of weights of each one not greater than \( W/2 \), where \( W \) denotes the entire tree weight.

**Lemma 1.** There is at least one weighted centroid node in a weighted tree.

**Proof.** Let us assume that there is no weighted centroid node in a tree. This means that if we remove any node from the tree, there will be one connected component with weight greater than \( W/2 \). Starting at an arbitrary node we can go to the neighbour which is part of the component with weight greater than \( W/2 \). By repeating this step, at some point we will get back to the previous node we have been in, and further steps would loop infinitely between these two nodes. If we cut the edge between them, there will be two connected trees, both with weight greater than \( W/2 \). But this cannot be true since the weights should sum up to \( W \). \(\square\)

As the existence of centroid nodes has been proven, we now state an upper bound on the number of such nodes.

**Theorem 1.** The number of centroid nodes in a tree with positive weights on the nodes is either one or two. Moreover, if there are two centroid nodes, they are adjacent.

**Proof.** Let us assume that there are two weighted centroid nodes $i$, $j$ in a tree such that they are not adjacent. Then there is at least one node between them, belonging to the subtree containing the nodes from the set $B$ shown in Fig. 2. Let $B$ contain neither $i$ nor $j$. Also let $A$ and $C$ denote the sets of nodes in the subtrees (other than the subtree containing $B$) rooted at the neighbours of nodes $i$ and $j$, respectively.
Let $W$ denote the weight of the entire tree, so we can write

$$w_A + w_i + w_B + w_j + w_C = W, \tag{1}$$

where $w_A, w_B, w_C, w_i, w_j$ denote the weights of the subgraphs induced by the sets $A, B, C, \{i\}, \{j\}$, respectively. Since $i$ is a weighted centroid, we can write down:

$$w_B + w_j + w_C \leq W/2.$$

The same applies to $j$, so

$$w_A + w_i + w_B \leq W/2.$$

By adding the above two inequalities side by side we have

$$w_A + w_i + 2w_B + w_j + w_C \leq W,$$

which compared to (1) implies that $w_B = 0$. But this cannot be true since all weights in the tree are positive. This proves that two centroid nodes are always adjacent.

To end the proof, it is sufficient to show that there cannot be more than two centroid nodes. Let us assume that there are at least 3 centroid nodes. Since every two of them must be adjacent, they must form a cycle with 3 nodes as a subgraph. But this is impossible since a tree is an acyclic connected graph. \(\square\)

Now we are ready to present the algorithm searching for the weighted centroid node in weighted trees. The algorithm consists of two phases. The first one determines the weights of the components adjacent to every node of the tree; after stabilization of the first phase every node can also find out the weight of the entire tree. The second phase of the algorithm selects the centroid node relying on the information given by the first phase. Both phases of the algorithm run in parallel. However, while the first one is running, moves made by the second phase are meaningless: they only increase the number of moves made by the algorithm.

3.1 Phase one — determining weights of components

In our algorithm each node $i$ contains an array $W_i$ of weights. After stabilization of the system, the value of $W_i[j]$ for every neighbour $j$ of the node $i$ is the weight of the connected component containing node $i$ in the tree with the edge $\{i,j\}$ cut.

: R1
If $\exists j \in N(i) \ W_i[j] \neq w_i + \sum_{k \in N(i)-\{j\}} W_k[i]$
then $W_i[j] := w_i + \sum_{k \in N(i)-\{j\}} W_k[i]$.

Note that for any leaf $i$ and its neighbour $j$, $W_i[j]$ will be set to the weight $w_i$ of the node $i$.

Fig. 3. The tree from Fig. 1 with $W_i[j]$ calculated for every node $i$ and its neighbour $j$ — after stabilization of the R1 algorithm.

Below we state a lemma about the stabilization of the R1 algorithm. It is analogous to that from [7], and the idea of the proof is the same, but here we give the extension to the weighted tree network algorithm.

**Lemma 2.** Algorithm R1 stabilizes.

**Proof.** Let us have any sequence of executions $E_1, E_2, \ldots, E_k$ of the rule R1 on a sequence of consecutive nodes $n_1, n_2, \ldots, n_k, n_{k+1}$ ($E_i$ is an execution of the R1 rule on the node $n_i$) such that $E_i$, $1 < i \leq k$, is the first update of $W_{n_i}[n_{i+1}]$ that makes use of the value $W_{n_{i-1}}[n_i]$. For example, the first such dependency is the use of $W_{n_1}[n_2]$ for updating $W_{n_2}[n_3]$.

Now we show that $n_i \neq n_j$ for $i \neq j$. According to the rule R1, it is impossible to have $n_i = n_{i+1}$. To show that also $n_i \neq n_{i+2}$ for every $i$, let us consider the propagation of R1 moves along a path in the tree (see Fig. 4). For any node $i$ the value of $W_i[j]$ depends only on the values $W_h[i]$ in the neighbours $h \neq j$. On the other hand, the value $W_i[j]$ influences the value $W_j[k]$ in the node $j$ for any of its neighbours $k \neq i$.
If we take $n_i = h$ and $n_{i+2} = j$, then the equality $n_i = n_{i+2}$ would imply that there is a cycle in the graph, which is impossible in a tree. The above argument can be repeated for any \( c \geq 2 \) to reject the equality \( n_i = n_{i+c} \). Thus we have \( k < n \), where \( n \) is the number of nodes of the tree.

There exists exactly one path between any two nodes in a tree. Thus the set of all maximal paths of executions of the rule R1 contains at most \( n^2 \) elements when the orientation of such paths (the order from left to right, and from right to left) is distinguished. The only way the number of moves could be unbounded is if the same path sequence appeared more than once in the set but with different input values. To show that this is impossible, note that the first move of every maximal sequence of moves must depend only on the initial state of the system. So if \( E \) and \( E' \) are two maximal paths of execution of the rule R1 such that \( n_1 = n'_1 \), then the first move of both sequences must be the same. It now follows by induction that \( E \) and \( E' \) must consist of the same moves, since the \( i \)-th move depends uniquely on the previous \((i-1)\)-th move. \(\square\)

Fig. 4. Propagation of R1 moves along a path in a tree.

**Lemma 3.** When algorithm R1 has stabilized, \( W_i[j] \) is correctly computed for every node \( i \) and its neighbour \( j \).

**Proof.** The proof is by induction on the number of nodes in a tree. As the base case let us take a leaf. There are two possibilities: either \( W_i[j] \) in a leaf \( i \) is correct or it is not correct at the beginning of the run of the algorithm. If it is not correct, the rule R1 is active and will eventually be triggered. After this move, the rule R1 becomes inactive. If \( W_i[j] \) is correct in a leaf, it will never be changed or become incorrect. Now our induction hypothesis is that every neighbour \( k \) of a node \( i \), except for a node \( j \) (i.e. \( k \neq j \)), has computed the value \( W_k[i] \) correctly. Given that, if node \( i \) has an incorrect value \( W_i[j] \), it can compute and set the correct value of \( W_i[j] \) relying on its neighbours' information. The proof is done. \(\square\)

We will now consider the number of moves the algorithm makes to stabilize. Let \( v_i(j) \) be the number of nodes in the connected component containing the node \( i \) in the graph with the \( \{i,j\} \) edge cut, and let \( c_i(j) \) be the number of times the value \( W_i[j] \) changes during the execution of algorithm R1. Now we recall a lemma from [7] that will allow us to reason about the complexity of our algorithm.

**Lemma 4.** When algorithm R1 has stabilized, $c_i(j) \leq v_i(j)$ for every node $i$ and its neighbour $j$.

The proof of Lemma 4 is similar to that of Lemma 3, so it is omitted. The number of moves in our algorithm is the same as in the one for the network tree without weights, so the following lemma from [7] also holds.

**Lemma 5.** The rule R1 is executed at most $n(n - 1)$ times.

**Proof.** For every pair of adjacent nodes $i, j$ the property $v_i(j) + v_j(i) = n$ holds, so by Lemma 4 the total number of times that the values $W_i[j]$ and $W_j[i]$ change is at most $n$. Since the network system is a tree, the number of edges is $n - 1$, and the result follows. \(\square\)
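As an informal sanity check on Lemmas 2, 3 and 5, rule R1 can be simulated under a nondeterministic scheduler. The sketch below is our own Python rendering (the names and the random scheduler are assumptions, not the paper's code): it starts from an arbitrary state, repeatedly fires one active guard, and stops when no guard holds.

```python
import random

def simulate_R1(adj, weight, seed=0):
    """Run rule R1 from an arbitrary initial state until no guard holds.
    Returns (W, moves), where W[i][j] is the stabilized array value."""
    rng = random.Random(seed)
    # Arbitrary (possibly wrong) initial values, as self-stabilization permits.
    W = {i: {j: rng.randint(-5, 5) for j in adj[i]} for i in adj}

    def target(i, j):
        # Right-hand side of R1: w_i + sum of W_k[i] over neighbours k != j.
        return weight[i] + sum(W[k][i] for k in adj[i] if k != j)

    moves = 0
    while True:
        active = [(i, j) for i in adj for j in adj[i] if W[i][j] != target(i, j)]
        if not active:
            return W, moves
        i, j = rng.choice(active)   # nondeterministic (adversarial) scheduler
        W[i][j] = target(i, j)      # one R1 move
        moves += 1

adj = {1: [2], 2: [1, 3], 3: [2]}
weight = {1: 1, 2: 1, 3: 3}
W, moves = simulate_R1(adj, weight)
# After stabilization, W[i][j] + W[j][i] equals the total tree weight (here 5)
# for every edge {i, j}, in line with Lemmas 3 and 6.
assert all(W[i][j] + W[j][i] == 5 for i in adj for j in adj[i])
print(moves)   # stays within n(n - 1) = 6 in our runs, in line with Lemma 5
```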
3.2 Phase two — rooting the tree

Once the R1 phase has stabilized the system, every node can determine the weight of the tree. To show this, we introduce a predicate similar to that from [7], which in our algorithm states whether, from the point of view of the node $i$, it can determine the correct weight of the whole tree. The following predicate:

$$w\text{Correct}_i = \left( \forall_{j \in N(i)} \ ( W_i[j] = w_i + \sum_{k \in N(i) - \{j\}} W_k[i] ) \right)$$

is evaluated in order to run the following part of the algorithm. It is worth mentioning that for any node $i$, the predicate $w\text{Correct}_i$ is true iff the rule R1 is not active for the node. Given that the system has stabilized, every node can determine the weight of the tree. The following lemma gives the method by which the node $i$ can calculate the weight of the whole tree.

**Lemma 6.** If $w\text{Correct}_i$ is true then $W_i[j] + W_j[i] = W_i[k] + W_k[i]$ for any neighbours $j, k$ of node $i$.

**Proof.** Since $w\text{Correct}_i$ is true:

$$W_i[j] + W_j[i] = w_i + \sum_{q \in N(i) - \{j\}} W_q[i] + W_j[i] = w_i + \sum_{q \in N(i)} W_q[i] = w_i + \sum_{q \in N(i) - \{k\}} W_q[i] + W_k[i] = W_i[k] + W_k[i]. \quad \square$$

This way each node can determine whether any of its neighbours has a component weight bigger than half of the weight of the whole tree. If this occurs, the node itself cannot be the weighted centroid, and the neighbour is closer to it; in fact, that neighbour can be the centroid itself. On the other hand, if every neighbour of a node has a component weight less than half of the whole tree weight, then this node is the weighted centroid of the tree. There can also be a situation where two adjacent nodes can be considered the weighted centroid, namely when two neighbouring components have weight exactly equal to half of the whole tree weight. In such a case we arbitrarily choose the node with the greater ID.

Whenever the \( wCorrect_i \) predicate is true for the node \( i \), the node can determine a weight of the whole tree from its point of view, but this value is not necessarily correct; it is guaranteed to be correct only after the rule R1 has stabilized in the whole system. Regardless of this correctness, we denote the weight of the entire tree calculated by node \( i \) by \( W_i \).

We present below the four further rules of our algorithm. Their purpose is to maintain a pointer to the centroid: after stabilization, if a node is the centroid then it points to itself; otherwise the node points to the neighbour closer to the centroid. So each node \( i \) contains the variable \( p_i \) whose value points to the neighbour closer to the centroid, or to the node itself if it is the centroid.

: R2
If \( (wCorrect_i) \land (\forall j \in N(i)\ W_j[i] < W_i/2) \land (p_i \neq i) \)
then \( p_i := i \)

: R3
If \( (wCorrect_i) \land (\exists j \in N(i)\ W_j[i] > W_i/2) \land (p_i \neq j) \)
then \( p_i := j \)

: R4
If \( (wCorrect_i) \land (\exists j \in N(i)\ W_j[i] = W_i/2) \land (ID_i > ID_j) \land (p_i \neq i) \)
then \( p_i := i \)

: R5
If \( (wCorrect_i) \land (\exists j \in N(i)\ W_j[i] = W_i/2) \land (ID_i < ID_j) \land (p_i \neq j) \)
then \( p_i := j \)

All the above rule guards contain the \( wCorrect_i \) predicate. This makes the rules inactive if, from the point of view of the node \( i \), the weights of the components have not been calculated yet. Thus, for a node, any of the rules R2–R5 can be active if and only if the rule R1 is inactive. The meaning of the rule R2 is to indicate in the node \( i \) that the weighted centroid is unique and \( i \) is the one.
The rule R3 makes \( p_i \) point to the neighbour of \( i \) closer to the centroid if \( i \) is not the centroid itself. The rules R4 and R5 are activated only if there are two weighted centroid nodes. Then the one with the greater ID becomes elected: by itself (the rule R4) and by the other one (the rule R5).

**Lemma 7.** The algorithm R1–R5 stabilizes on every weighted tree network after at most $2n^2 - n$ moves.

**Proof.** The first phase of the algorithm (stabilization of the rule R1) takes no more than $n^2 - n$ moves. The second phase, consisting of the rules R2–R5, gives the following number of moves: every node can make one move according to the rules R2–R5 before any node triggers the rule R1, which gives $n$ moves. Furthermore, after every R1 move a node can make one move by one of the rules R2–R5, which gives an extra $n^2 - n$ moves. All in all, this gives $n^2 - n + n + n^2 - n = 2n^2 - n$ moves. □

**Lemma 8.** After stabilization of the R1–R5 algorithm, the pointer values $p_i$ for every node $i$ in the weighted tree determine a rooted tree with the weighted centroid as the root.

**Proof.** It is obvious that execution of the R2–R5 rules does not have any influence on the rule R1. Thus the rule R1 will stabilize with correct values $W_i[j]$ for every node $i$ and its neighbour $j$. Let us now assume that the entire algorithm has stabilized on the weighted tree network.

Consider the nodes that are not the weighted centroid. Any such node $i$ has exactly one neighbour $j$ for which $W_j[i] > W/2$, where $W$ is the weight of the entire tree. For these nodes $i$ the rule R3 may apply after stabilization of the rule R1, but none of the rules R2, R4, R5 is applicable. After a possible application of the rule R3, $p_i$ will point to the neighbour which is part of the connected component with weight greater than $W/2$. So these nodes point to the neighbour which is part of the subtree containing the weighted centroid.

Now let us assume that there is a unique weighted centroid. Then there exists no node $i$ with a neighbour $j$ such that $W_j[i] = W/2$. The centroid $i$ has all neighbours $j$ with the property $W_j[i] < W/2$. Then the node $i$ which is the unique centroid may apply the rule R2, pointing to itself as the centroid node, and it is impossible for $i$ to make a move according to the rules R3–R5. Given that all the other nodes may apply only the rule R3, the whole tree stabilizes with every pointer oriented towards the root.

The other case is when there are two nodes with the weighted centroid property. This means that there is a pair of adjacent nodes $p$ and $q$ such that $W_p[q] = W_q[p] = W/2$. Then no node $i$ in the tree can have $W_j[i] < W/2$ for all its neighbours. Thus the nodes with the weighted centroid property cannot apply the rules R2 or R3, and all the other nodes can apply only the rule R3. The non-centroid nodes will properly set their $p$ variables pointing towards the centroid nodes, as shown above. The weighted centroid nodes are adjacent, so one of them can apply the rule R4 and the other one can apply the rule R5 if necessary. So the one with the greater ID becomes the leader and the other one points to the leader. This proves the last case. The proof is done. □

To end up, we can state the following corollary.

**Corollary 1.** Starting from any global state of a weighted tree network system, the algorithm R1–R5 stabilizes in at most $2n^2 - n$ moves, with the values $p_i$ for every node $i$ pointing towards the just elected leader, which is one of the weighted centroid nodes.
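A corresponding sketch of the pointer phase (again ours, with an assumed encoding) applies the guards of R2–R5 to already stabilized $W$ values and shows how the greater ID breaks the tie between two centroids.

```python
def elect(adj, weight, W, ids=None):
    """Apply the stabilized guards of R2-R5: p[i] is i's pointer towards
    the elected weighted centroid (p[i] == i for the leader itself)."""
    ids = ids if ids is not None else {i: i for i in adj}
    # W_i from Lemma 6: each node's local view of the total tree weight.
    total = {i: weight[i] + sum(W[k][i] for k in adj[i]) for i in adj}
    p = {}
    for i in adj:
        heavy = [j for j in adj[i] if W[j][i] > total[i] / 2]   # guard of R3
        half = [j for j in adj[i] if W[j][i] == total[i] / 2]   # guards of R4/R5
        if heavy:
            p[i] = heavy[0]            # R3: point towards the centroid
        elif half:
            j = half[0]                # two centroids: greater ID wins
            p[i] = i if ids[i] > ids[j] else j
        else:
            p[i] = i                   # R2: i is the unique centroid
    return p

adj = {1: [2], 2: [1, 3], 3: [2]}
weight = {1: 1, 2: 1, 3: 3}
# Stabilized phase-one values for this tree (e.g. from the earlier simulation):
W = {1: {2: 1}, 2: {1: 4, 3: 2}, 3: {2: 3}}
print(elect(adj, weight, W))   # {1: 2, 2: 3, 3: 3}: node 3 is the leader
```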
The final state of the system from Fig. 1 is shown in Fig. 5. The arrows symbolize the pointers $p_i$ for every node $i$. There are two weighted centroids in the example (nodes 4 and 5). The leader is marked with a circle.

**Fig. 5.** The state of the variable $p_i$ for every node $i$ in a tree after stabilization of the algorithm R1–R5. The weighted centroid node elected by the algorithm is circled.

4 Conclusions

We have presented a self-stabilizing algorithm for finding a centroid in weighted trees. It is a generalization of the algorithm from [7] for tree networks without weights (precisely, with all node weights equal to 1). Our algorithm can be applied in networks with weights for electing a node which is the center of the network; one can think of it as the limelight of the network. In future work we would like to study the weighted centroid and the weighted median in self-stabilizing systems with topologies other than trees. Some results connected with the above subject can be found in [9].

References
{"Source-Url": "https://journals.umcs.pl/ai/article/download/3346/2540", "len_cl100k_base": 5845, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 33565, "total-output-tokens": 6758, "length": "2e12", "weborganizer": {"__label__adult": 0.0004410743713378906, "__label__art_design": 0.0005064010620117188, "__label__crime_law": 0.0007014274597167969, "__label__education_jobs": 0.0010080337524414062, "__label__entertainment": 0.0001481771469116211, "__label__fashion_beauty": 0.00023877620697021484, "__label__finance_business": 0.00045990943908691406, "__label__food_dining": 0.0005698204040527344, "__label__games": 0.0011415481567382812, "__label__hardware": 0.0022258758544921875, "__label__health": 0.0017156600952148438, "__label__history": 0.0005955696105957031, "__label__home_hobbies": 0.00025200843811035156, "__label__industrial": 0.0009212493896484376, "__label__literature": 0.00047516822814941406, "__label__politics": 0.0005002021789550781, "__label__religion": 0.0007762908935546875, "__label__science_tech": 0.4541015625, "__label__social_life": 0.00014090538024902344, "__label__software": 0.00843048095703125, "__label__software_dev": 0.5224609375, "__label__sports_fitness": 0.000461578369140625, "__label__transportation": 0.001140594482421875, "__label__travel": 0.00032067298889160156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23149, 0.02401]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23149, 0.66177]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23149, 0.9235]], "google_gemma-3-12b-it_contains_pii": [[0, 1957, false], [1957, 3871, null], [3871, 6465, null], [6465, 8551, null], [8551, 10586, null], [10586, 13032, null], [13032, 15032, null], [15032, 17789, null], [17789, 20686, null], [20686, 22325, null], [22325, 23149, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1957, true], [1957, 3871, null], [3871, 6465, null], [6465, 8551, null], [8551, 10586, null], [10586, 13032, null], [13032, 15032, null], [15032, 17789, null], [17789, 20686, null], [20686, 22325, null], [22325, 23149, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23149, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23149, null]], "pdf_page_numbers": [[0, 1957, 1], [1957, 3871, 2], [3871, 6465, 3], [6465, 8551, 4], [8551, 10586, 5], [10586, 13032, 6], [13032, 15032, 7], [15032, 17789, 8], [17789, 20686, 9], [20686, 22325, 10], [22325, 23149, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23149, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
d249714341cf121e5f1271578c1668bdef39b742
Query Optimization by Indexing in the ODRA OODBMS

Tomasz M. Kowalski\(^1\), Michał Chromiak\(^2\), Kamil Kuliberda\(^1\), Jacek Wiślicki\(^1\), Radosław Adamus\(^1\), Kazimierz Subieta\(^3\)

\(^1\) Technical University of Lodz, Stefanowskiego 18/22, 90-924 Lodz, Poland
\(^2\) Institute of Computer Science, Maria Curie Skłodowska University, pl. M. Curie-Skłodowskiej 1, 20-031 Lublin, Poland
\(^3\) Polish-Japanese Institute of Information Technology, Koszykowa 86, 02-008 Warsaw, Poland

Abstract

We present the features and sample uses of the index optimizer module which has been implemented and tested in the ODRA prototype system. The ODRA index implementation is based on linear hashing and works in the scope of a standalone database. The solution is adaptable to distributed environments in order to optimally utilize data grid computational resources. The implementation consists of transparent optimization, automatic index updating and management facilities.

1. Introduction

Indices are auxiliary (redundant) data structures stored at a server. A database administrator manages a pool of indices, generating a new one or removing an existing one depending on the current needs w.r.t. improving the overall performance of applications. Just as the index at the end of a book is used for quick page finding, a database index makes it possible to quickly retrieve the objects (or records) matching given criteria. As indices have a relatively small size (compared to the whole database), the gain in performance fully justifies some extra storage space. Because an index addresses a single search aspect, which allows for a very efficient physical organization, the gain in performance can reach even several orders of magnitude.

The general idea of indices in object-oriented databases does not differ from indexing in relational databases [1]. Many indexing methods can be adopted from relational database systems, and their applicability can even be significantly extended. There are also situations where indexing methods from RDBMSs become outdated in object-oriented databases. In particular, join operations do not require extensive optimizations, because in object databases the necessity for joins is much lower due to object identifiers and explicit pointer links.

ODRA (Object Database for Rapid Applications development) is a prototype object-oriented database management system based on the Stack Based Architecture (SBA) [2, 3]. The main goal of the ODRA project is to develop new paradigms of database application development and to introduce a new, universal declarative programming language, together with a distributed, database-oriented and object-oriented execution environment. ODRA introduces its own query language SBQL (Stack Based Query Language) that is integrated with programming capabilities and abstractions, including database abstractions: updatable views, stored procedures and transactions.

An important feature of ODRA is the optimization engine responsible for increasing the performance of query execution. The essential component of the engine is the module that optimizes queries by using indices. The main features of the index implementation include: transparent selection of appropriate indices for a given query (if available), automatic updating of indices in response to updates of the corresponding data, and administrative management of indices. The paper presents these three aspects of the index implementation in ODRA.
Section 2 contains a brief overview of the index capabilities of selected OODBMSs. Section 3 presents the overall architecture of the ODRA query optimization engine. Section 4 discusses the features of indices in ODRA. Section 5 describes the ODRA index management facilities. Section 6 exemplifies query optimization based on indices. Section 7 presents the performance gain of the proposed solution based on an example query. Section 8 concludes.

2. Query Optimizations with Indices in OODBMSs

In the case of the Versant ODBMS [4], a $B$-tree index can be used in exact or range predicate processing. No index inheritance is present in the Versant database: an index can be created on an attribute of only one class, and no class inheriting from the one with the index inherits the index. To index subclass attributes, it is necessary to explicitly set indices on each subclass. This results in the need for the database administrator to maintain index consistency.

In the ObjectStore DBMS [5] there are two ways of improving query performance by indexing, i.e., with indices and with superindices. The first solution involves building indices on a collection of objects. A superindex is a kind of index specially used for optimizing queries involving types that have many subtypes. By default, adding an index on a type results in recursively adding indices to all its subtypes; still, for queries over a large and intricate hierarchy of subtypes such regular indexing can seriously deteriorate processing. Adding a superindex to a type with many subtypes differs from a default index in one essential feature: there is only one superindex for the whole hierarchy. It eliminates the recursion; consequently, only one parent query operation occurs, in contrast to the multiple queries needed with regular indices.

There is also a possibility to create a query that uses a multistep index, which is an index on a complex navigational path that accesses multiple public data members. It optimizes queries that use the same path. For example, if a query concerns all employees who work in the Sales department, an index from WorksIn.Name to the Emp collection can be used. However, updating an index entry after data modification must be explicitly triggered by the programmer, which is a serious drawback of the ObjectStore multistep index.

The ObjectStore ODBMS automatically optimizes a query applied to a collection. If an index is added to a collection, then the database first evaluates the indexed fields and establishes a preliminary result set. Then, ObjectStore applies the non-indexed fields and methods to the elements of the preliminary result set. In ObjectStore the optimization can be done manually, by preparing a query, or automatically otherwise. This means that a query is optimized to use exactly the indices which are available on the collection being queried. The automatic optimization is convenient and effective. Moreover, it supports data independence, i.e., the database administrator is not constrained in establishing new indices or removing old ones, because application programs do not refer to them explicitly.

Let us consider the index usage in Objectivity/DB [6]. The main goal of an index is to optimize predicate scans, and this is how it is implemented in Objectivity.
The predicate used in the scan can be one of the following:

- a single optimized condition (\(=, ==, >, <, >=, <=, \sim\) – string match) that tests the first key field of the index,
- a conjunction (&&) of conditions in which the first conjunct is an optimized condition that tests the first key fields of the index (no disjunction – OR).

In the case of Objectivity/DB, creating an index for a class means also indexing object references of all classes derived from the indexed one. The index structure maintains references to persistent objects of a particular class (the so-called indexed class) and its derived classes. The indexed class is specified during creation of an index. Objectivity/DB additionally supports a concatenated index on several attributes (key fields). The order of the key fields of an index is essential for the proper operation of a predicate.

While considering indexing in OODBMSs, the way the GemStone database server handles the issue should also be noted [7, 8]. GemStone indices address path expressions. A variable name appearing at the beginning of a path is called the path prefix. Then, a path contains a sequence of links and a path suffix, e.g. Employee.worksIn.manager. For each link (for an instance variable of an object) in the path suffix one index is available, thus forming a sequence of index components. In GemStone, identity indices directly support exact match lookups, whereas equality indices and identity indices on Booleans, characters and integers directly support =, >, >=, <, <= and range lookups.

3. Query Optimization Engine Architecture

Fig. 1 shows the ODRA query optimization process in the context of the query evaluation process. The input for the optimization process is an abstract syntax tree (AST) of a query. The optimization modules are divided into optimization by rewriting and optimization by indices. The theoretical idea behind these methods is presented in several documents, see e.g. [9, 10, 2, 3].

The rewriting optimization process modifies a query during compile time with the use of information stored in the metabase, augmented with static query evaluation results. Currently ODRA supports several rewriting methods: changing the order of execution of algebraic operators; view rewrite (replacing a view invocation by the view body); removing dead sub-queries; factoring out independent sub-queries; shifting conditions as close as possible to the proper operator; methods based on the distributivity property of some query operators. Optimization by indices searches for parts of an input query that can be transparently replaced with an index call. If such an index exists (added previously by the administrator), the query is rewritten to a form where the target part is replaced with an index invocation.

Fig. 1. ODRA optimization architecture.

4. The Idea of ODRA Indexing

In general, an index can be considered a two-column table, where the first column consists of unique key values and the other one holds non-key values, which in most cases are object references. Fig. 2 shows example indices for a given object-oriented database store. Key values are used as the input for index search procedures. As a result, such a procedure returns the suitable non-key values from the same table row. Keys are usually values of specific attributes of database objects (dense indices) or represent ranges of these values (range indices).
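As a toy illustration of this two-column view (a Python sketch of ours, not ODRA code), a dense index is essentially a map from key values to collections of object references:

```python
from collections import defaultdict

# A dense index as a two-column table: unique key values -> object references.
# Here the "references" are just ids of in-memory records (illustrative).
emps = [
    {"id": 15, "name": "Nowak",    "salary": 300},
    {"id": 72, "name": "Kowalski", "salary": 700},
    {"id": 43, "name": "Nowak",    "salary": 900},
]

dense_index = defaultdict(list)
for e in emps:
    dense_index[e["name"]].append(e["id"])   # key value -> non-key values

# An exact-match predicate (name = "Nowak") becomes a single index lookup
# instead of a scan over the whole collection:
print(dense_index["Nowak"])   # [15, 43]
```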
Key values can also be calculated with the use of expressions that can contain built-in query language functions or user-defined functions (function-based indices [11]). This approach enables the administrator to create an index matching exactly the predicates occurring within frequent queries, so that their evaluation is faster and uses the minimal amount of I/O operations. In query optimization, indices are used in the context of a where operator, when the left operand is indexed by key values appearing in the selection predicates of the right operand. As an example, consider the database store structure presented in Fig. 2: if the administrator sets an index named idxDeptName on the primary name attribute of Dept objects, selection queries over departments by name can be answered from the index. For big databases, replacing the evaluation of the where clause with an index function call may bring a performance gain of even several orders of magnitude. However, to achieve this the database server should ensure index transparency and automatic index updating.

4.1. Index Transparency. In the common approach, a programmer should not embed explicit operations on indices in an application program. To make indexing transparent from the point of view of a database application programmer, the database management system should ensure two important functionalities: index-based optimization and automatic index updating.

The first functionality means that indices are used automatically during query evaluation. Therefore, the administrator of a database can freely establish new indices and remove them without changing the code of applications.

The second functionality, i.e. automatic index updating, is required due to possible changes in the database. Indices, like all redundant structures, can lose cohesion if the database is updated. An automatic mechanism should update, remove or rebuild an index in the case of database updates. This paper focuses on the first functionality, i.e. index optimization, which is the main topic of Section 6.

4.2. Index Classification. The most common classification of indices distinguishes primary and secondary ones, or dense and range ones. From the query optimizer's point of view the distinction between primary and secondary indices is less crucial, because it does not lead to significant differences in the optimizer algorithms, whereas the division into dense and range indices is essential (a range example is sketched after this list):

- A dense index is applied when a separate position in the index is created for each value of the object attribute, e.g. for an index on person objects where any name occurring in the database can be a key value,
- A range index means that index items concern values within a given range, e.g. a range index for a salary attribute is a table where each index item describes a range of salaries: \(<0 - 500)\), \(<500 - 1000)\), \(<1000 - 1500)\), etc. (Table 1). Similarly, range index items for names can take the following form: "names starting with the letter A", "names starting with the letter B", ..., "names starting with the letter Z".

Table 1. Example range index for Employees objects according to the Salary attribute

| Range | Search key-value | References to Employees |
|---|---|---|
| \(<0, 500)\) | 0 | \(i_{15}\) |
| \(<500, 1000)\) | 500 | \(i_{72}, i_{43}\) |
| \(<1000, 1500)\) | 1000 | \(i_{48}, i_{25}, i_{30}\) |
| \(<1500, 2000)\) | 1500 | \(i_{45}, i_{59}, i_{48}, i_{32}\) |
| \(\ldots\) | \(\ldots\) | \(\ldots\) |
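A range index in the spirit of Table 1 can be sketched by bucketing keys by the lower bound of their range; the bucket width of 500 and the helper names are our assumptions (the actual ODRA structure is a linear hash map with one bucket per range):

```python
from collections import defaultdict

BUCKET = 500   # illustrative range width, matching Table 1

def bucket_key(salary):
    # Lower bound of the range the salary falls into:
    # <0, 500) -> 0, <500, 1000) -> 500, and so on.
    return (salary // BUCKET) * BUCKET

# Object "references" are illustrative ids; salaries chosen to match Table 1.
emps = {15: 300, 72: 700, 43: 900, 48: 1200, 25: 1100, 30: 1400}

range_index = defaultdict(list)
for ref, salary in emps.items():
    range_index[bucket_key(salary)].append(ref)

# A range predicate (500 <= salary < 1500) touches only the matching buckets.
# A real index would additionally filter boundary buckets when the query
# range does not align with the bucket boundaries.
hits = [r for k in (500, 1000) for r in range_index[k]]
print(sorted(hits))   # [25, 30, 43, 48, 72]
```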
Indices can also be categorized according to the physical data structures used for index organization. The most important data structures for implementing indices include hash tables (e.g. linear hashing), B-trees and bitmap indices.

4.3. Features of ODRA Indices. Currently the implementation supports indices based on Linear Hashing [12], which can be easily extended to its distributed version SDDS [13] in order to optimally utilize data grid computational resources. Nevertheless, there is a wide range of different index structures that could be used for indexing in object-oriented databases, similarly to the solutions occurring in relational ones [11, 1, 14, 15]: B-trees, bitmap indices, etc.

An extended idea of an ODRA index works with multiple-key indices. In addition to the key types mentioned earlier (dense and range), an enum type was introduced to improve multiple-key indexing (among other things). Moreover, thanks to the properties of the SBQL language, i.e. orthogonality and compositionality, the implemented solution provides generic support for a variety of index definitions, including the use of complex expressions with polymorphic methods and aggregate operators.

ODRA supports local indexing, which ensures index transparency by providing a mechanism (the optimization framework) to automatically utilize an index before query runtime evaluation and thus take advantage of indices. ODRA C.R.U.D. (Create, Read, Update and Delete) is also equipped with triggers to ensure automatic index updating, so that the existing indices are consistent with the database state.

5. Index Management

All indices existing in a database are registered and managed by the ODRA index manager. The list of all indices and the auxiliary information needed by the index optimizer are stored inside a special admin module. Each index is associated with the module where it was created and its name has to be unique. Therefore, the index manager checks whether a given index exists in the list of references to metabase objects describing indices using the combination of a module name and an index name: "module_name.index_name".

5.1. Example Schema. The schema in Fig. 3 is introduced to exemplify the usage of indices. The example schema illustrates the personnel records of a company. It introduces several classes, PersonClass, StudentClass, EmpClass, EmpStudentClass, and two structure types, DeptType and AddressType. Persistent instances of the classes mentioned above can be accessed using their instance names Person, Student, Emp and finally EmpStudent. The objects called Dept have the DeptType structure with a primary attribute name and represent departments of the company. Each Person object stands for a person somehow connected with the company; its attributes provide some basic information. Additionally, each Dept and Person object includes an address subobject which specifies data according to the AddressType structure.
Instances of the EmpClass represent current employees of the company and extend the Person object attributes with the salary attribute. Emp and Dept objects are associated via pointer objects named worksIn and employs. Another class which extends the PersonClass is the StudentClass; this class introduces the scholarship attribute. The last class presented in the schema, called EmpStudentClass, inherits from EmpClass and StudentClass and is introduced to represent students who are simultaneously employees of the company. Using Person in an SBQL query results in returning all instances of the PersonClass class and its subclasses. Similarly, via Emp the programmer refers both to EmpClass and EmpStudentClass instances. Besides attributes, classes comprise methods. Taking advantage of polymorphism, some methods are overridden in derived subclasses: e.g. the getTotalIncomes() method of EmpClass returns the value of the salary attribute, but it is overridden in EmpStudentClass.

5.2. Index Types. The syntax for creating an index allows the administrator to specify general index key properties, i.e. properties concerning key values or the goal of optimization. These are specified by introducing optional type indicators: *dense*, *range* and *enum*.

The *dense* indicator implies that the optimization of queries which use the given key in a condition will be applied only for selection predicates based on the '=' or in operators. Therefore the distribution of indexed objects in the index (e.g. in a hash table) can be more random; the order of key values has no significance for indexing. The *dense* indicator is always used for reference values (regardless of the indicator set by the administrator). Moreover, it is the default type indicator for integer, string, double or reference key values.

```
add index idxEmpSalary(dense) on Emp(salary)
```

The *range* indicator implies that optimized selection predicates may be based not only on the '=' or in operators but also on the range operators: '>', '≥', '<' and '≤'. Within an index, a hash function groups objects according to key value ranges. In the current implementation, ranges are dynamically split, because each range is associated with an individual bucket of a linear hash map.

```
add index idxDeptSalary(range) on Dept(sum(employs.Emp.salary))
```

The *idxDeptSalary* index returns references to departments according to a value (or a value range) of the sum of the department employees' salaries. Its advantage is avoiding the calculation of a complex selection predicate multiple times, because it is already calculated during index creation. On the other hand, the maintenance of the *idxDeptSalary* index is very expensive and can cause serious deterioration during database updating.

The *enum* indicator is introduced in order to take advantage of keys with a countable, limited set of distinct values, i.e. keys with low value cardinality. The performance of an index can be strongly deteriorated if key values have low cardinality, e.g. a person's eye colour, marriage status (a *Boolean* value) or the year of birth. With the *enum* key type, the index internally stores all possible key values (or the value *range* for *integer* values) and uses this information to optimize the index structure. The enum key type can deal with optimizing selection predicates exactly as in the case of the range indicator, i.e. for the '=', in, '>', '≥', '<' and '≤' operators. Another important property of enum keys, occurring when an index is set on multiple keys, is that the optimizer can omit them if necessary during the optimization of queries.
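A hedged illustration of the enum idea (ours, not ODRA internals): when the value set of a key is small and known in advance, the index can dedicate one bucket per value, which also lets the optimizer enumerate all buckets when the key is left unconstrained:

```python
# An enum key over a small, known value set: one bucket per value
# (illustrative; in ODRA such buckets live inside the linear hash structure).
MARRIED_VALUES = (False, True)

enum_index = {v: [] for v in MARRIED_VALUES}
people = {10: False, 11: True, 12: True, 13: False}   # ref -> married flag
for ref, married in people.items():
    enum_index[married].append(ref)

# "married = true" touches exactly one bucket:
print(enum_index[True])                                    # [11, 12]
# An unconstrained enum key can be handled by enumerating all its buckets,
# which is what makes enum keys omittable in multiple-key indices:
print([r for v in MARRIED_VALUES for r in enum_index[v]])  # [10, 13, 11, 12]
```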
If enum is set on all index keys and the number of indexed objects is large, then index call evaluation should prove very efficient (each key value combination points to a separate object reference array called a bucket).

```
add index idxPerAge&Mar&City(enum|enum|enum) on Person(age, married, address.city)
```

Other examples of index creation commands are as follows:

```
add index idxPerZip(enum) on Person(address.zip)
```

This enum index returns Person objects queried by the zip attribute of the address subobject. It is important to note that the zip attribute is optional and therefore this index stores only the Person objects containing this attribute.

```
add index idxPerBirthYear(range) on Person(2009 - age)
```

This index returns Person objects according to the value of the expression 2009 - age. It is assumed that this index is capable of processing range queries.

```
add index idxEmpTotalIncomes on Emp(getTotalIncomes())
```

This dense index uses the Emp class method getTotalIncomes() as a key for selecting Emp objects. The method is overridden for instances of the EmpStudent class.

The only action required from the administrator in order to take advantage of indexing is the creation of proper indices, since the rest of the optimization is transparent for programmers. The next section describes the rules used by the index optimizer.

6. Query Optimization

In ODRA the use of indices is entirely transparent for application code. The programmer may or may not be aware of the existence of indices, but the code does not depend on it. The index optimizer automatically applies all possible indices during the query compilation process. Besides this possibility, a user can also use indices explicitly. This feature is introduced for testing purposes, in order to check the semantic equivalence of the introduced index optimizations, and for research into new possibilities in indexing. In the following we briefly describe the ODRA optimization engine module used for query optimization based on indices.

6.1. Index Usage Syntax. From the SBQL syntax point of view, an index invocation is simply a procedure invocation:

```
<indexname>( <key_param_1> [; <key_param_2> ...] )
```

The number of parameters is equal to the number of index keys. Each key parameter defines a desirable value of a key. An index function call returns references to the objects matching the specified criteria.

A key parameter expression can define a single value as a criterion. In that case its evaluation should return an integer, double, string, reference or Boolean value, or a reference to such a value. Below we present an example call for the sample index `idxDeptName`:

```
idxDeptName("HR" groupas $equal)
```

A single-value key parameter is passed through the value of a binder named "$equal". Binders are used to increase readability and to make it easier to introduce new types of parameters for index calls.

To specify a range as a key value criterion parameter, an expression should return a structure consisting of four parameters:

```
(< lower_limit >, < upper_limit >, < lower_closed >, < upper_closed >)
```

where:

- `<lower_limit>` and `<upper_limit>` are key values specifying the range,
- `<lower_closed>` is a Boolean value indicating whether `<lower_limit>` belongs to the criterion range,
- `<upper_closed>` is a Boolean value indicating whether `<upper_limit>` belongs to the criterion range.

An example of such an index call with a computed upper bound is ((sum(Person.(2009 - age)) / count(Person)) as avgyear).idxPerBirthYear((-2147483648, avgyear, true, false) groupas $range); it returns references to persons whose year of birth is below the average over all the persons from the database.
As in the case of single-value key parameters, parameters specifying a range are passed using the value of a binder named "$range". A key parameter can also specify a collection of single key values as a criterion. This is done when a key parameter returns a bag of key values; the binder named "$in" is used to pass such a collection. If a criterion parameter returns an empty bag, then the index call returns an empty bag too.

6.2. Transparent Index-based Optimization. The mechanism responsible for index transparency during query evaluation is called the index optimizer. Its function is to replace a part of a query with an index call in order to minimize the amount of data processed. This section describes the general rules used in solving the problem of semantic equivalence between the queries rewritten by the index optimizer and the original input queries. Most of the following rules concern optimizing range queries. Analyzing the right operand of a **where** non-algebraic operator, the index optimizer takes into consideration all selection predicates joined with conjunction (**and**) or disjunction (**or**) operators.

6.2.1. Optimization procedure. The basic index optimizer procedure works on selective queries where the left side of the **where** operator is an <object expression> indexed by one or more indices. The algorithm analyses all selection predicates joined with **and** operators and tries to find an index whose keys match the predicates. If more than one index is found, the optimizer selects the one with the best selectivity.

6.2.2. Semantic Equivalence Issue in Optimization Involving Optional Keys. Firstly, let us consider how the [0..1] key cardinality affects optimization. Using criteria with this cardinality may cause runtime errors, because selection predicates based on the '=', '>', '≥', '<' and '≤' operators require single values as the left and right operands; an unexpected number of operand values causes a runtime error. Using an index call in the optimization of such predicates would eliminate the threat of the error, and therefore the optimized query would not be semantically equivalent to the original one. In these cases the optimization is allowed only if the in operator is used in the predicate, because it does not constrain the cardinality of its right operand.

An example of an unsafe predicate evaluation that may cause a run-time error (the left side of the selection predicate has cardinality \([0..1]\) due to the optional *zip* attribute) is presented below:

Person where address.zip = 94107

To avoid the possibility of a run-time error, the "safe" *in* operator should be used:

Person where 94107 in address.zip

In the discussed case the index optimizer supports optimization when predicates are defined using the '=', '>', '≥', '<' and '≤' operators only if a proper *exists* predicate is used. An example of a safe predicate evaluation with the '=' operator can be as follows:

Person where exists(address.zip) where address.zip = 94107

Only in the case of the two previous example queries can the index optimizer apply the following transformation:

idxPerZip(94107 groupas $equal)

The minimal cardinality of a key equal to zero indicates that the index may not contain references to all the objects defined by the index <object-expression>.
In the case of a multiple-key index, if such a key is omitted in the selection predicates, it is possible that the evaluation of the *where* operator may return references to objects that are not stored inside the index. Therefore the index optimizer does not apply optimization using such an index. To sum up, keys with the minimal cardinality equal to zero are obligatory even if they are declared with the enum type indicator.

6.2.3. Keys with Plural Cardinality. Currently a maximum key cardinality greater than one is not supported by the ODRA indices. Theoretically, however, it would imply that an index call may return the same object reference more than once. To prevent such problems in the future, the index optimizer uses the `uniqueref` operator to remove redundant object references.

6.2.4. Aspects of Range Predicates Optimization. If the optimized query selection predicates specify only one limit of a range (lower or upper), then the second limit is generated automatically, i.e. the smallest or the biggest possible value for the given key. For example, the following query concerns the departments located in the city of Warsaw whose employees together earn less than the best paid employee of the whole company.

Original query:

```sql
Dept where sum(employs.Emp.salary) < max(Emp.salary) and address.city = "Warszawa"
```

Optimized query:

```sql
idxDeptSalary((-2147483648, max(Emp.salary), true, false) groupas $range) where address.city = "Warszawa"
```

If there is more than one predicate, or there are two opposite predicates describing the range on a given key, then `min`, `max`, `union` and comparison expressions are used to obtain a correct key range parameter.

Original query:

```sql
((sum(Person.(2009 - age)) / count(Person)) as avgyear).
(Person where 2009 - age >= 1970 and 2009 - age > avgyear and 2009 - age < 1980)
```

Optimized query:

```sql
((sum(Person.(2009 - age)) / count(Person)) as avgyear).
idxPerBirthYear((max(avgyear union 1970), 1980, 1970 > avgyear, false) groupas $range)
```

6.2.5. Avoiding Unnecessary Index Calls. In some cases the index optimizer can use an if-then expression to predict whether a given query returns no result (so that calling the index is unnecessary), i.e. whether the selection predicates are in contradiction. This is checked, e.g., when for a given key there exists more than one selection predicate and at least one is based on the '=' or in operator. If any of these selection predicates contradicts a predicate based on the '=' or in operator, then the query returns an empty bag:

Original query:

```sql
((sum(Person.(2009 - age)) / count(Person)) as avgyear).
(Person where 2009 - age >= avgyear and 1977 = 2009 - age)
```

Optimized query:

```sql
((sum(Person.(2009 - age)) / count(Person)) as avgyear).
if (1977 >= avgyear) then idxPerBirthYear(1977 groupas $equal)
```

This procedure is also used when the key cardinality is different from [1..1], i.e. in the case of two or more selection predicates based on the in operator.

6.2.6. Omitting Individual Index Keys. For multiple-key indices, enum keys may usually be omitted in an index call. In order to omit a key for which no selection predicates were specified, the index optimizer sets the lower and upper bounds to the smallest and largest key values:

Original query:

```sql
Person where true = married and address.city in "Wrocław"
```

Optimized query:

```sql
idxPerAge&Mar&City((-2147483648, 2147483647, true, true) groupas $range; true groupas $equal; "Wrocław" groupas $equal)
```

To omit a Boolean key in an index call, set key parameter criteria are used (false union true).
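The key-omission trick of Section 6.2.6 can be mimicked as follows; this is our sketch, and the 32-bit sentinel bounds simply mirror the -2147483648/2147483647 values visible in the rewritten queries above. When no predicate constrains an enum key, the widest possible range is substituted for it:

```python
INT_MIN, INT_MAX = -2147483648, 2147483647   # sentinel bounds, as above

def key_params(predicates, keys):
    """Build one parameter per index key; a key without a predicate gets the
    full range, mirroring the rewrite of Section 6.2.6 (enum keys only)."""
    params = []
    for key in keys:
        if key in predicates:
            params.append(("$equal", predicates[key]))
        else:
            params.append(("$range", (INT_MIN, INT_MAX, True, True)))
    return params

# Person where true = married and address.city in "Wrocław", run against the
# three-key index idxPerAge&Mar&City(age, married, address.city):
print(key_params({"married": True, "address.city": "Wrocław"},
                 ["age", "married", "address.city"]))
# [('$range', (-2147483648, 2147483647, True, True)),
#  ('$equal', True),
#  ('$equal', 'Wrocław')]
```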
6.2.7. Predicates Disjunction and Considering Inheritance. The index optimizer is also prepared to deal with queries where selection predicates are joined with **or** operators. As disjunction weakens a selection, it also makes optimization more complex. Therefore, if the application of an index is possible without considering the predicates joined with the **or** operator, the optimizer may skip the deeper analysis. Otherwise, in order to check all the possibilities for indexing, the optimizer removes the **or** operator and splits the non-algebraic **where** operator expression into two partial selection expressions. The objects returned by these two expressions can be duplicated, so it is necessary to leave only distinct object references, which is achieved using a **uniqueref** expression. Indexing reduces the amount of data processed in a query only if it can be applied to both partial expressions. This procedure is recursive if there is more than one **or** operator. Let us consider the following example of optimization:

```
Emp where age = 28 and married = true and (address.city = "Szczecin" or "Szczecin" in worksIn.Firm.address.city)
```

The query can be split by the index optimizer into the following form:

```
uniqueref((Emp where age = 28 and married = true and address.city = "Szczecin")
union (Emp where age = 28 and married = true and "Szczecin" in worksIn.Firm.address.city))
```

and, depending on the current cost model and the existing indices, the optimizer can apply the transformation:

```
uniqueref(((Emp) idxPerAge&Mar&City(28 groupas $equal; true groupas $equal; "Szczecin" groupas $equal))
union (idxEmpAge&WorkCity(28 groupas $equal; "Szczecin" groupas $equal) where married = true))
```

The selection predicates based on the age, married and address.city expressions concern EmpClass's superclass, i.e. PersonClass, and for that reason the administrator can equip the whole Person collection with the idxPerAge&Mar&City index. It can return instances that do not belong to EmpClass, thus the optimizer has to introduce a facility removing non-EmpClass instances from the index invocation result. This can be done using the SBQL coerce operator, whose syntax was taken from the typical syntactic convention known from languages such as C, C++ and Java as a cast. Consequently, the result of the idxPerAge&Mar&City index call is automatically cast to the Emp collection, because the original query concerns only employees. In the presented approach to reusing an index with inheritance, indices defined on the class which introduces the given keys are more versatile, as they can also be used for optimizing selection queries addressing subclass collections.

7. Optimization Gain

Let us discuss the following test example. If an index call is located on the right side of a non-algebraic operator, e.g. a dot, then it is likely to be evaluated more than once during the query execution. This is shown using the following example with the idxEmpTotalIncomes index.

Query 1a. For 61-year-old, married employees living in Łódź and working in Łódź or Wrocław, the query retrieves the name concatenated with the surname and the number of employees with an equal amount of total incomes.
Reference query:
```sql
((Emp where address.city = "Łódź" and worksIn.Dept.address.city in ("Łódź" union "Wrocław") and married = true and age = 61) as e).
(e.name + " " + e.surname, count(Emp where getTotalIncomes() = e.getTotalIncomes()))
```
Index-optimised query:
```sql
((Emp where address.city = "Łódź" and worksIn.Dept.address.city in ("Łódź" union "Wrocław") and married = true and age = 61) as e).
(e.name + " " + e.surname, count(idxEmpTotalIncomes(e.getTotalIncomes())))
```
In Figs 4 and 5 a logarithmic scale is used also on the y-axis. The dependency between the optimization gain and the number of persons is close to linear; the gain grows to 457 for 300000 objects.

Fig. 4. Evaluation times and optimization gain for Query 1a

Additionally, introducing another index, `idxEmpAge&WorkCity`, in order to optimise the evaluation of the first part of the query can significantly influence the performance:

Query 1b.

Reference query:
```sql
((Emp where address.city = "Łódź" and worksIn.Dept.address.city in ("Łódź" union "Wrocław") and married = true and age = 61) as e).
(e.name + " " + e.surname, count(Emp where getTotalIncomes() = e.getTotalIncomes()))
```
Index-optimised query:
```sql
((Emp where address.city = "Łódź" and worksIn.Dept.address.city in ("Łódź" union "Wrocław") and married = true and age = 61) as e).
(e.name + " " + e.surname, count(idxEmpTotalIncomes(e.getTotalIncomes())))
```
For a database consisting of 300000 person objects, the two indices give a gain approximately 40 times greater. Despite this difference, the most important index is the repeatedly invoked one, i.e. `idxEmpTotalIncomes`; without it the performance is not noticeably improved.

In this paper the rules concerning the creation and use of indices in the ODRA prototype have been briefly described. In the presented approach, optimization is achieved through the described query transformations. The proposed implementation of indexing in ODRA enables the creation and transparent, automatic maintenance of indices facilitating the processing of selection predicates based on arbitrary deterministic expressions consisting of path expressions, aggregate functions and class method invocations (taking inheritance and polymorphism into consideration). All functionalities necessary to provide the desired behaviour of indices are already implemented and functional. Still, ODRA indexing is under development and requires further research. Future work includes employing different index structures (e.g. B-trees) and implementing new optimization methods taking advantage of indices (e.g. optimization of rank queries). Additionally, we consider extending the indexing capabilities onto a distributed environment using the SDDS method and the currently developed volatile indexing technique.
{"Source-Url": "http://journals.umcs.pl/ai/article/download/3251/2445", "len_cl100k_base": 8021, "olmocr-version": "0.1.53", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 44154, "total-output-tokens": 9538, "length": "2e12", "weborganizer": {"__label__adult": 0.00023794174194335935, "__label__art_design": 0.00021755695343017575, "__label__crime_law": 0.0002803802490234375, "__label__education_jobs": 0.0006275177001953125, "__label__entertainment": 4.363059997558594e-05, "__label__fashion_beauty": 0.00010758638381958008, "__label__finance_business": 0.00026416778564453125, "__label__food_dining": 0.0002237558364868164, "__label__games": 0.00037479400634765625, "__label__hardware": 0.0005388259887695312, "__label__health": 0.00031304359436035156, "__label__history": 0.00017786026000976562, "__label__home_hobbies": 6.42538070678711e-05, "__label__industrial": 0.0003097057342529297, "__label__literature": 0.00016379356384277344, "__label__politics": 0.00016164779663085938, "__label__religion": 0.0002884864807128906, "__label__science_tech": 0.0165252685546875, "__label__social_life": 5.507469177246094e-05, "__label__software": 0.01422119140625, "__label__software_dev": 0.96435546875, "__label__sports_fitness": 0.00015401840209960938, "__label__transportation": 0.0002932548522949219, "__label__travel": 0.0001556873321533203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38612, 0.02342]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38612, 0.47629]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38612, 0.84154]], "google_gemma-3-12b-it_contains_pii": [[0, 1576, false], [1576, 4273, null], [4273, 7138, null], [7138, 9764, null], [9764, 10868, null], [10868, 11708, null], [11708, 13864, null], [13864, 16078, null], [16078, 17515, null], [17515, 19760, null], [19760, 21746, null], [21746, 23770, null], [23770, 25614, null], [25614, 27505, null], [27505, 29091, null], [29091, 30717, null], [30717, 32500, null], [32500, 34890, null], [34890, 35888, null], [35888, 37370, null], [37370, 38612, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1576, true], [1576, 4273, null], [4273, 7138, null], [7138, 9764, null], [9764, 10868, null], [10868, 11708, null], [11708, 13864, null], [13864, 16078, null], [16078, 17515, null], [17515, 19760, null], [19760, 21746, null], [21746, 23770, null], [23770, 25614, null], [25614, 27505, null], [27505, 29091, null], [29091, 30717, null], [30717, 32500, null], [32500, 34890, null], [34890, 35888, null], [35888, 37370, null], [37370, 38612, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38612, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, 
false], [5000, 38612, null]], "pdf_page_numbers": [[0, 1576, 1], [1576, 4273, 2], [4273, 7138, 3], [7138, 9764, 4], [9764, 10868, 5], [10868, 11708, 6], [11708, 13864, 7], [13864, 16078, 8], [16078, 17515, 9], [17515, 19760, 10], [19760, 21746, 11], [21746, 23770, 12], [23770, 25614, 13], [25614, 27505, 14], [27505, 29091, 15], [29091, 30717, 16], [30717, 32500, 17], [32500, 34890, 18], [34890, 35888, 19], [35888, 37370, 20], [37370, 38612, 21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38612, 0.06883]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
1f435d9ef5f061db878255d54b3ea86e1ac77c47
[REMOVED]
{"Source-Url": "http://www.cs.princeton.edu/courses/archive/spr19/cos217/lectures/11_Performance.pdf", "len_cl100k_base": 4220, "olmocr-version": "0.1.53", "pdf-total-pages": 36, "total-fallback-pages": 0, "total-input-tokens": 48905, "total-output-tokens": 5509, "length": "2e12", "weborganizer": {"__label__adult": 0.00032258033752441406, "__label__art_design": 0.00019407272338867188, "__label__crime_law": 0.0002646446228027344, "__label__education_jobs": 0.00043272972106933594, "__label__entertainment": 4.249811172485352e-05, "__label__fashion_beauty": 0.0001181960105895996, "__label__finance_business": 0.00012755393981933594, "__label__food_dining": 0.00034689903259277344, "__label__games": 0.0006246566772460938, "__label__hardware": 0.0009355545043945312, "__label__health": 0.00028634071350097656, "__label__history": 0.0001189112663269043, "__label__home_hobbies": 8.20159912109375e-05, "__label__industrial": 0.00029969215393066406, "__label__literature": 0.00014841556549072266, "__label__politics": 0.0001971721649169922, "__label__religion": 0.0004045963287353515, "__label__science_tech": 0.0021190643310546875, "__label__social_life": 5.6624412536621094e-05, "__label__software": 0.00292205810546875, "__label__software_dev": 0.9892578125, "__label__sports_fitness": 0.0003223419189453125, "__label__transportation": 0.00039839744567871094, "__label__travel": 0.0001646280288696289}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13288, 0.02362]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13288, 0.62692]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13288, 0.75683]], "google_gemma-3-12b-it_contains_pii": [[0, 214, false], [214, 554, null], [554, 941, null], [941, 1022, null], [1022, 1237, null], [1237, 1439, null], [1439, 1686, null], [1686, 2068, null], [2068, 2673, null], [2673, 2846, null], [2846, 2925, null], [2925, 3392, null], [3392, 3889, null], [3889, 4296, null], [4296, 4765, null], [4765, 5230, null], [5230, 5651, null], [5651, 5995, null], [5995, 6394, null], [6394, 6607, null], [6607, 7683, null], [7683, 9144, null], [9144, 9400, null], [9400, 9831, null], [9831, 9910, null], [9910, 10249, null], [10249, 10443, null], [10443, 10718, null], [10718, 11000, null], [11000, 11256, null], [11256, 11477, null], [11477, 11946, null], [11946, 12226, null], [12226, 12676, null], [12676, 13028, null], [13028, 13288, null]], "google_gemma-3-12b-it_is_public_document": [[0, 214, true], [214, 554, null], [554, 941, null], [941, 1022, null], [1022, 1237, null], [1237, 1439, null], [1439, 1686, null], [1686, 2068, null], [2068, 2673, null], [2673, 2846, null], [2846, 2925, null], [2925, 3392, null], [3392, 3889, null], [3889, 4296, null], [4296, 4765, null], [4765, 5230, null], [5230, 5651, null], [5651, 5995, null], [5995, 6394, null], [6394, 6607, null], [6607, 7683, null], [7683, 9144, null], [9144, 9400, null], [9400, 9831, null], [9831, 9910, null], [9910, 10249, null], [10249, 10443, null], [10443, 10718, null], [10718, 11000, null], [11000, 11256, null], [11256, 11477, null], [11477, 11946, null], [11946, 12226, null], [12226, 12676, null], [12676, 13028, null], [13028, 13288, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13288, 
null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13288, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13288, null]], "pdf_page_numbers": [[0, 214, 1], [214, 554, 2], [554, 941, 3], [941, 1022, 4], [1022, 1237, 5], [1237, 1439, 6], [1439, 1686, 7], [1686, 2068, 8], [2068, 2673, 9], [2673, 2846, 10], [2846, 2925, 11], [2925, 3392, 12], [3392, 3889, 13], [3889, 4296, 14], [4296, 4765, 15], [4765, 5230, 16], [5230, 5651, 17], [5651, 5995, 18], [5995, 6394, 19], [6394, 6607, 20], [6607, 7683, 21], [7683, 9144, 22], [9144, 9400, 23], [9400, 9831, 24], [9831, 9910, 25], [9910, 10249, 26], [10249, 10443, 27], [10443, 10718, 28], [10718, 11000, 29], [11000, 11256, 30], [11256, 11477, 31], [11477, 11946, 32], [11946, 12226, 33], [12226, 12676, 34], [12676, 13028, 35], [13028, 13288, 36]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13288, 0.06341]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
25cc70936f834c411f4743029e790bfeb2cb9e5d
Chapter 4

DESIGN CHALLENGES AND OVERVIEW OF PROPOSED FRAMEWORK

Introduction

Recent advancements in the computational capabilities of portable devices, together with their increased wireless connectivity, have favoured the emergence of service provisioning. Service provisioning in mobile ad hoc networks faces special difficulties due to the constraints of the ad hoc environment, such as the lack of central infrastructure, a high level of device heterogeneity, node mobility and limited device and network resources. As the mobility of users may lead to different contexts, users can increasingly benefit from services whose results can be adapted to the changing context, such as variations in users' position, preferences and requirements and in locally available resources (Bellavista et al (2003)). In these context-aware service provisioning scenarios, it is crucial to enable the dynamic retrieval of the services available in the vicinity of the user's current location and environment, while minimizing user involvement in service selection, configuration and binding.

Two main features of the proposed service provisioning system are (1) the ability of the service entities to tolerate the disconnection of services and (2) the ability to handle changes. This can be achieved by making the different entities of the architecture less dependent, by loosening their strong ties with other entities. Being less dependent means that each service entity is more autonomous, which in turn means fewer required interactions; this is essential in mobile environments. Additionally, loosening these ties means more tolerance of change and failure, and the ability to handle change can help improve the performance of service provisioning. This notion of loose coupling is central to the service-oriented paradigm, where it has effectively led to the late binding of consumers to providers. In order to design a viable service provisioning system, loose coupling is highly desirable on many of the challenging points. Another aspect is selecting the best matching service provider node that meets the user's needs by taking the current context into consideration. Providing services even under mobility is also a major issue to be considered.

Section 4.1 discusses the elements of service provisioning. Section 4.2 focuses on the requirements to be satisfied by a service provisioning framework for mobile ad hoc networks. Section 4.3 focuses on the challenges in designing such a framework. Finally, section 4.4 provides an overview of the proposed framework, followed by the chapter summary.

### 4.1 Elements in Service Provisioning

A framework for service provisioning should have the following salient elements, with each element fulfilling a very specific role in the framework.

- The *service description* element is responsible for describing a service. The service to be provisioned has to be specified and named; this phase is called the service specification phase. Services are specified through the service description and preferences.
- The *service advertisement* element is responsible for advertising a given service description on a directory service or directly to other hosts in the mobile ad hoc network.
- The *service discovery* element is the keystone of the framework and carries out three main functions. It formulates a request, which is a description of the needs of a user. This request is formatted similarly to the service description.
The element also provides a matching function that selects the best matching service provider for the request. Finally, it provides a mechanism for the user to communicate with the service provider.

- The *service invocation* element is responsible for facilitating the use of a service. Its functions include transmitting commands from the user to the service provider and receiving the results. A good invocation mechanism abstracts the communication details from the user and, in case of network failure, redirects requests to another provider or terminates gracefully.
- The *service maintenance* element is responsible for maintaining the service throughout its entire lifetime. Its functions include upgrading the functionality offered to the clients and upgrading the quality of the assistance already offered. In the course of service maintenance, the dynamic adaptation of the service to resource variations must be assured, and the performance degradation of the service due to dynamic changes in network conditions or other resources has to be considered. A good service maintenance mechanism performs its tasks with no or little interaction with the client.
- *Service termination* is a special kind of adaptation needed when a participating node moves. The node on the move can be either the consumer or the provider; because of this mobility, the status information about the network has to change. Dynamic adaptation to resource variations requires continuous monitoring of resources and of the service context.

Since all these elements have to be implemented for every application, it is a reasonable design choice to gather and implement them in one place in the form of a framework, so that application developers do not have to deal with these common functions and can instead focus only on application-specific issues.

### 4.2 Requirements to be Fulfilled by the Service Provisioning Framework for MANET

There is a broad spectrum of mobile applications which require a feasible environment for deployment. A framework that supports the development of applications for mobile ad hoc networks is a novel approach that will offer information access and sharing while considering the challenges and constraints of such networks. Since a framework for mobile ad hoc networks is tightly coupled with its applications, a single universal framework cannot be designed; utmost care has therefore been taken to adhere to the framework's design constraints. Mobile ad hoc networks are created by the interconnection of individual nodes and are formed independently, i.e. without any kind of external infrastructure. This implies that service and resource discovery must follow the network formation stage and can take place through nodes advertising the services available on them. In order to propose a service discovery protocol for mobile ad hoc networks, the requirements of such a network should first be analysed and classified as realisable requirements. The requirements are classified into functional, technical and performance requirements.

**Functional requirements**

Functional requirements identify what functionalities must be provided by the discovery protocol. The following functionalities are required.

*Decentralised directory:* the service directory must always be accessible to a node regardless of node mobility. Discovery should not depend on any particular node.
*Support for service mobility:* the service discovery process must support service mobility. Due to mobility, services can be handed over from one server to another; service discovery must identify equivalent or adaptable services to support this handover. The type of handover depends on the nature of the service used. Sometimes a service may be offered to the user on an 'online' basis, i.e. the service provision point must move when the user and terminal move; for example, if a user watches a real-time video, the stream must also be provided at the new location. Sometimes the nature of a service does not allow handover, e.g. an uncompleted printer job. In this case the service cannot be handed over to another printer; the original printer must either complete the job in unconnected mode or terminate the printing process.

*Support for service provisioning:* the service discovery process must provide proper interfaces to the other service elements using the discovered service specifications. Service discovery is the initial phase of service provisioning, and it must support the phases that are expected to occur after discovery by offering proper interfaces to elements such as service provision (how to benefit from the service), service selection (what is the most appropriate service), service control (how to control service provisioning), service user notifications (how to notify the user about the status of the service) and service adaptation (how the service is to be adapted to the user context).

*Support for multicast:* the transport layer that CASP resides on must support multicasting along with unicast and broadcast, because this is the only efficient way to address multiple devices with a service request.

**Technical requirements**

Technical requirements describe what kind of support should be provided to meet the functional requirements.

*Browsing capability:* the service discovery protocol must provide the ability to browse the available services within the network. The services of local nodes, i.e. one-hop and up to at least three-hop neighbours, can be discovered and listed, and the size of the list has to be small. Browsing enables the user to be aware of, and benefit from, all the services available in the network; the user must be able to browse at least the one-hop neighbours' services.

*Proper specification of services:* the service discovery protocol must specify services in a proper manner. The specification of a service must be machine-readable: the network entities must be able to recognise the service and to select and use it without any user interaction, because the user wants the most appropriate service for his or her requirements to be used automatically.

*Proper service search mechanism:* a service discovery protocol must provide a proper search capability. Sometimes users search for a specific service, or for a particular service supporting specific attributes. The service search must support various search operations: (i) exact or approximate search, i.e. finding an available type of service; (ii) attribute search, i.e. finding a service which contains a particular attribute; or (iii) attribute-value search, i.e.
finding a service which contains a specific attribute with a particular value or range of values.

*Proper mechanism to handle wireless network dynamics:* a service discovery protocol must be able to handle the dynamics and failures of wireless networks, such as link failures and node failures; in such situations the protocol must not become trapped in a deadlock.

**Performance requirements**

The performance requirements set the targets for the success and cost of a solution.

*Reasonable discovery time:* the discovery time in mobile ad hoc networks must be acceptable and of the same order as the network formation time.

*Reasonable network load:* the service discovery process must use the network capacity efficiently and not slow down normal network traffic. The incurred overhead should be of the order of other discovery processes in the network, such as route discovery and address assignment.

*Proper usage of computing resources:* the computing resources of mobile ad hoc network nodes are very limited. The service provisioning framework must be designed to be as light as possible, using a minimal amount of CPU time, memory and battery lifetime. These limitations dictate the simplicity of the code and of the applications that can be executed on such devices. Some service discovery protocols are lightweight but deal only with discovery and do not support service provisioning, while others deliver both discovery and provisioning at a higher expense.

*Reasonable adaptation delay:* because of the dynamic changes during the formation of the network, the protocol must adapt itself to these changes. Many factors affect this delay, such as the mobility of nodes, changes in the status of nodes (e.g. on, off, sleep), changes in the availability or properties of a service (i.e. while a change is not yet reflected in the registry) and changes in the availability of the network. Some of the protocols mentioned earlier are based on service registration with servers, which is a time-consuming process whose frequency must adapt to the changes of the network. Another factor affecting the delay is the content of the transferred data: using protocols based on the transfer of data instead of text-based code can reduce the delay.

### 4.3 Design Challenges and Proposed Solutions

Due to their dynamic nature, mobile ad hoc networks exhibit frequent and unpredictable topology changes. They have to adapt to the traffic and propagation conditions as well as to the nodes' mobility patterns. Because of mobility, neighbouring nodes and the providers of services may join or leave the network. The protocol should therefore monitor and select a service based on the mobility pattern, battery life and the distance between the provider and the consumer. This mobility and topology change can be addressed by the following mechanisms.

*Decentralized directory service.* The service information in the network has to be known by the participants of the network. One cannot rely on a central server to store it, because of the liveness problem of either the server or the service provider, which is depicted in Figure 4.1; this situation may also lead to inconsistency because of node mobility. So a decentralized directory service is needed: instead of a centralised directory, every node in the network maintains the available services which are reachable by it. The proposed structure of the service directory in the network is shown in Figure 4.2, and a minimal sketch of such a per-node directory is given below.
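This is an illustrative Python sketch, not part of the thesis: names such as `ServiceEntry` and `lookup` are hypothetical, and a real implementation would add advertisement propagation and cache expiry. It also shows how the exact and attribute-value searches required in section 4.2 could be served from the local cache.

```python
# Illustrative sketch (hypothetical, not from the thesis): each node keeps
# its own cache of reachable services instead of relying on a central
# directory, and answers the search operations described in section 4.2.
from dataclasses import dataclass, field

@dataclass
class ServiceEntry:
    service_type: str                       # e.g. "printer"
    provider: str                           # provider node id
    hops: int                               # distance to the provider
    attributes: dict = field(default_factory=dict)

class NodeDirectory:
    """Per-node peer-to-peer directory of reachable services."""
    def __init__(self):
        self.entries: list[ServiceEntry] = []

    def advertise(self, entry: ServiceEntry) -> None:
        # A real framework would attach a lifetime to each entry and refresh
        # it through periodic advertisements from neighbours.
        self.entries.append(entry)

    def browse(self, max_hops: int = 1) -> list[ServiceEntry]:
        # Browsing capability: list at least the one-hop neighbours' services.
        return [e for e in self.entries if e.hops <= max_hops]

    def lookup(self, service_type: str, **attrs) -> list[ServiceEntry]:
        # Exact search on the type, plus optional attribute-value search.
        return [e for e in self.entries
                if e.service_type == service_type
                and all(e.attributes.get(k) == v for k, v in attrs.items())]

d = NodeDirectory()
d.advertise(ServiceEntry("printer", "node7", hops=1, attributes={"color": True}))
print(d.lookup("printer", color=True))   # attribute-value search
print(d.browse())                        # one-hop browsing
```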
In Chapter 5 the service discovery issues with this proposed architecture are discussed.

**Figure 4.2 Decentralised Peer-Peer Directory Structure**

*Context awareness.* The wide spectrum of applications demonstrates that mobile ad hoc networks have some distinct advantages over wired networks, mainly due to their fault-tolerant and self-organizing characteristics. At the same time, mobile ad hoc networks present a number of complexities and design constraints that do not exist in wired networks. The most important factor characterising a mobile ad hoc network is the high variability of the network state. The network state is affected by link connectivity, power control and mobility effects (Bisnik (2005)).

Link connectivity: in a wired environment, link connectivity is a binary decision, i.e. a link exists between two nodes when they are connected by a physical medium like optical fibre or coaxial cable. In a mobile ad hoc network, the broadcast nature of the communication allows each node to be connected to multiple receiver nodes.

Power control: the broadcast nature of wireless communication means that each node may increase or reduce the number of neighbouring nodes by tuning its transmitting power. Thus, the topology of the network as perceived by each node is strongly dependent on the transmit power of each node.

Mobility effect: the nodes belonging to a mobile ad hoc network are free to move and organize themselves arbitrarily. Mobility affects the performance of the network protocols; at the routing layer, the mobility factor governs the performance of the routing protocols.

Meeting the requirements of the application despite variable link connectivity, network topology and power levels implies two issues in protocol design:

- Information sharing: each layer of the protocol stack should be able to access information about the current network state.
- Protocol cooperation: performance gains may be obtained if joint solutions at multiple network layers are considered.

Unfortunately, the layered Open Systems Interconnection (OSI) architecture does not seem to support these requirements. The layered architecture is remarkably successful for networks made up of wired links, where its key assumptions and abstraction boundaries work well. The main drawback of the layered model is the lack of cooperation among non-adjacent layers: each layer works in isolation with very little information about the network. Cooperation among the layers can be achieved by cross-layering, through which the network state, otherwise known as the environment context, can be accessed. Context means every aspect that can impact the behaviour of an application; therefore the framework should be context-aware. The awareness can be broadly categorized into device-context awareness and environment awareness. The environment context can be accessed using cross-layering, through which information from one layer can be passed on to any layer in the stack: for example, information in the MAC layer may be needed by the PHY, network, transport and application layers. To put cross-layering into effect, adaptors are used. Adaptors are methods to access information from a given layer; they are the interfaces between the cross-layer and layer N, i.e. sets of functions which enable the cross-layer to get information from, and put information into, the corresponding layer. For each context prediction, predictors are used.
These are methods used to process the information accessed from the adaptors, decide the values to be taken by the parameters and put them back through the adaptors.

*Figure 4.3 Interactions among Various Layers for Context Accessing using Cross-Layering*

Figure 4.3 shows the interactions between the units and layers used for cross-layering. These predictors are the sources of network context for the framework. A predictor gets information from the lower layers; for instance, the PHY layer can identify the signal strength of an incoming signal. The use of signal strength is motivated by the communication hardware: signal strength can be used to identify or predict node movement, and a predictor for node movement has been designed based on this. Whenever node movement is identified, the event handler is informed and raises the 'node_movement' event; it also notifies all the event listeners of this event. Any number of predictors can be placed for each event that needs to be monitored. Although the location of the device could be obtained using GPS, its availability cannot be assumed, because GPS devices are often not available; relying on lower-layer information is therefore the technique used in this thesis. The speed and direction of the moving device are available to determine the distance between nodes (Lyes Dekar and Hamamache Kheddouci (2008)). Any number of predictors can be applied to the available information by using the adaptors. A general architecture for designing cross-layering is available in the work of Licia Carpa (2001). The framework should take advantage of this awareness with minimum effort and network overhead. Chapter 8 elaborates on this topic, and a description has been presented in the thesis author's paper (Ponmozhi (2012)).

The second challenge to be considered is the ability to tackle changes.

*Late binding:* by hiding the implementation technology, a change in the implementation of a service provider will not require any change on the consumer side. But if the service contract changes, that will affect the consuming client. The service provisioning system has to manage these changes to decrease the loss of service from the client's point of view. Therefore the service-oriented paradigm enforces late binding between consumers and providers, in order to defer their coupling until the last moment before invocation; during the invocation, the service contract binds the consumer to the provider. As devices move frequently, a service may become unavailable, in which case the protocol should identify alternative services. Late binding to service providers and switching to new providers based on the context are enabled by policy management. Rebinding to another provider when the currently connected service provider cannot be accessed is shown in Figure 4.4.

Policy-based management has been considered by two IETF working groups, namely the Policy Framework Working Group and the Resource Allocation Protocol Working Group (RAPWG). The RAPWG has described a framework for policy-based admission control specifying two main architectural elements:

- The Policy Enforcement Point (PEP) represents the component that always runs on the policy-aware node; it is the point where the policy decisions are actually enforced.
- The Policy Decision Point (PDP) is the point where the policy decisions are made.

A minimal sketch of how such a PEP/PDP pair can drive rebinding is given below.
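This is a hedged, illustrative Python sketch, not taken from the thesis: the rule format, the `rebind` action and the event dictionary are all hypothetical, and it only pictures the division of labour between the decision point and the enforcement point.

```python
# Illustrative sketch (hypothetical names, not from the thesis): a minimal
# PEP/PDP pair driving the rebinding behaviour described above. The PDP
# evaluates policy rules against the current context; the PEP enforces the
# resulting decisions, e.g. switching to an alternative provider.

class PolicyDecisionPoint:
    def __init__(self, rules):
        # Each rule: (condition over the context, action name).
        self.rules = rules

    def decide(self, context: dict) -> list[str]:
        return [action for cond, action in self.rules if cond(context)]

class PolicyEnforcementPoint:
    def __init__(self, pdp, handlers):
        self.pdp = pdp
        self.handlers = handlers   # action name -> callable

    def on_event(self, context: dict) -> None:
        # Steps of policy-based management: the context has already been
        # gathered; the PDP matches conditions; the PEP enforces actions.
        for action in self.pdp.decide(context):
            self.handlers[action](context)

def rebind(ctx):
    # Late binding: pick another provider from the local directory.
    print(f"rebinding from {ctx['provider']} to an alternative provider")

pdp = PolicyDecisionPoint([
    (lambda c: not c["provider_reachable"], "rebind"),
    (lambda c: c.get("battery", 1.0) < 0.1, "rebind"),
])
pep = PolicyEnforcementPoint(pdp, {"rebind": rebind})

# A 'provider unreachable' event raised by the context-monitoring layer:
pep.on_event({"provider": "node7", "provider_reachable": False})
```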
The Policy Framework WG describes four major functional elements, namely:

- **A Policy Management Tool**, to enable an entity to define, update and optionally monitor the deployment of Policy Rules.
- **A Policy Consumer**, which is a convenient grouping of functions responsible for acquiring, deploying and optionally translating Policy Rules into a form usable by Policy Targets.
- **A Policy Target**, which is an element whose behaviour is dictated by Policy Rules, carrying out the action indicated by the Policy Rule.

This thesis applies policy management to implement the concept of rebinding. The challenges to be solved for the implementation are:

- What are the system artefacts that need to be monitored to enable realistic policies and reconfiguration rules? How fine-grained should the monitoring be? Should the monitoring be periodic or continuous?
- What are the system artefacts that can be adapted by reconfiguration rules and policies? How fine-grained should this adaptation be?
- How can domain-specific policy languages be defined that are easy to use and that express the concerns of a wide range of users? To what extent can the system's behaviour be programmed as policies?

Effective use of distributed systems requires an approach that enables them to be managed and configured easily and uniformly through a policy-based approach (Matthys et al (2008)). An application may, for instance, specify a policy expressing that the most efficient service in the environment must be used, and under what conditions. Such a policy-based management approach consists of several steps: (i) gathering information about the execution context; (ii) interpreting the policy files, matching their conditions and triggering any related events; (iii) enforcing their associated actions on the current execution context.

*Interoperability.* Service providers and clients should always be able to understand each other regardless of their implementation technology. This abstraction of the implementation can only be achieved using universal standards and specifications: a service provider implemented in Java should seamlessly interact with clients written in other languages. This implementation abstraction is well handled in service provisioning systems like Web services, which use XML as an interaction language, and specifically specifications like WSDL and SOAP for service description and invocation. The service platform proposed in the following chapters also uses WSDL and SOAP, in a simple form, in the discovery and invocation interactions.

### 4.4 Proposed Service Provisioning Framework Overview

Service provisioning in mobile ad hoc networks is challenging due to the characteristics of the network. In fixed networks, service provisioning can rely on a centralized entity called a directory to store the information related to the services offered by different providers, and the applications developed need not worry about bandwidth usage, as these are resource-rich environments. The device capabilities in mobile ad hoc networks, by contrast, vary and are most of the time limited. This thesis defines the Context Aware Service Provisioning (CASP) framework to provide the basic modules needed for service provisioning in mobile ad hoc networks. The framework comprises service advertising, the matching of required services based on functional and non-functional properties, and the maintenance of the binding between the service provider and the service requester. Service advertising and its related issues are discussed in Chapter 5.
Chapter 6 deals with the issues related to the management of non-functional properties in CASP; binding issues and the maintenance phase are dealt with in Chapter 7. As service discovery is an application-layer task, the framework needs to be close to the application. The proposed framework sits between the application layer and the network layer; Figure 4.5 shows the placement of the CASP framework. In order to cope with changes in the underlying network, it interacts with the network layer and other lower layers for information through cross-layer interaction.

![Figure 4.5 Placement of CASP Framework in the Protocol Stack](image)

An overlay on top of the network layer, i.e. at the application level, has been created to disseminate service advertisements, requests and replies. Placing service discovery above the routing layer provides two advantages: (i) a modular and layered approach, in which the protocols at any layer can be replaced without affecting the other layers; and (ii) no assumption about any specific routing protocol or the underlying network is necessary. It is also possible to create pervasive service discovery across many domains. In order to separate the concerns of communication from those of service interaction, separate modules for communication with other nodes have been designed; other modules can consume the functions provided by this module. There are two primary agents, the service provider agent and the user agent, which use the communication module to interact with each other.

Four main components of the framework were identified, each responsible for a particular aspect; Figure 4.6 shows the various components of CASP. The resource manager module is responsible for keeping track of installed components and implements the core methods of managing the component framework in a device-specific language. To hide the physical distribution of the environment from the other levels and the applications, a distribution module has been added (the advertisement and query manager). Thirdly, for the adaptation process, an environment monitoring framework has been defined, which collects information about the environment and provides a set of APIs to set monitoring triggers for single or multiple conditions; the monitoring framework may gather this information by using probes on resource availability. A policy parser and policy enforcer interpret and enforce the policies. Policies are seen as a way to guide the behaviour of a network or distributed system through high-level, declarative directives; the proposed framework uses XML to represent policies.

The proposed framework splits service provisioning into four components, which can be tuned by every node according to its capabilities and policies. The framework also breaks service discovery into programmable components and allows tuning depending upon the application needs and the device capabilities. The various tunable components and facilitators in CASP are given in Figure 4.7. The modules are grouped into two categories: (i) self-management facilitators, components which access the context and provide for decision making, among which policy management is the one that takes the necessary adaptation actions based on the context values; and (ii) tuneable components, which work according to the accessed context; these are the target components which will be changed by the policy management modules.
The basic components are thus (i) the facilitators, which include the modules for context awareness and policy management, and (ii) the tuneable components, which include service advertisement (query vs. announcement), service selection (based on user preferences and the service provider's capability) and service maintenance (based on binding, reselection and rediscovery policies).

Given the above approach, a high level of autonomy can be introduced into the nodes so that they can automatically cope with the increased levels of heterogeneity and volatility present in a mobile ad hoc network environment.

Summary

The challenging points that must be addressed by the proposed framework were presented in this chapter. The proposed framework is split into four modules to take care of the different phases of service provisioning. It has been decided to use (i) peer-to-peer caching instead of a central entity for the directory service, (ii) cross-layering to access the dynamic context and (iii) policy management for adaptation. Supporting tunable and self-management components were identified to increase user satisfaction by giving due consideration to user-centric data. The implementation of these modules aims to respect the design principles of independence and loose coupling, which have been followed throughout the design of the communication, discovery and invocation, and maintenance phases. From the next chapter onwards, the modules of CASP and their architecture will be elaborated.
{"Source-Url": "https://shodhganga.inflibnet.ac.in/bitstream/10603/89553/13/13_chapter_4.pdf", "len_cl100k_base": 5292, "olmocr-version": "0.1.50", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 33977, "total-output-tokens": 5956, "length": "2e12", "weborganizer": {"__label__adult": 0.00044345855712890625, "__label__art_design": 0.0010004043579101562, "__label__crime_law": 0.00042128562927246094, "__label__education_jobs": 0.0012731552124023438, "__label__entertainment": 0.0001386404037475586, "__label__fashion_beauty": 0.0002237558364868164, "__label__finance_business": 0.0005011558532714844, "__label__food_dining": 0.000408172607421875, "__label__games": 0.0007510185241699219, "__label__hardware": 0.00506591796875, "__label__health": 0.00079345703125, "__label__history": 0.0007395744323730469, "__label__home_hobbies": 0.00012874603271484375, "__label__industrial": 0.0006041526794433594, "__label__literature": 0.0003688335418701172, "__label__politics": 0.0003314018249511719, "__label__religion": 0.0005612373352050781, "__label__science_tech": 0.1907958984375, "__label__social_life": 9.53078269958496e-05, "__label__software": 0.017333984375, "__label__software_dev": 0.775390625, "__label__sports_fitness": 0.0004553794860839844, "__label__transportation": 0.0016460418701171875, "__label__travel": 0.0005202293395996094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29321, 0.00467]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29321, 0.39194]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29321, 0.92169]], "google_gemma-3-12b-it_contains_pii": [[0, 1814, false], [1814, 4206, null], [4206, 6208, null], [6208, 8283, null], [8283, 10545, null], [10545, 12773, null], [12773, 13758, null], [13758, 14515, null], [14515, 16797, null], [16797, 18486, null], [18486, 20893, null], [20893, 21816, null], [21816, 23897, null], [23897, 25364, null], [25364, 27054, null], [27054, 28206, null], [28206, 29321, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1814, true], [1814, 4206, null], [4206, 6208, null], [6208, 8283, null], [8283, 10545, null], [10545, 12773, null], [12773, 13758, null], [13758, 14515, null], [14515, 16797, null], [16797, 18486, null], [18486, 20893, null], [20893, 21816, null], [21816, 23897, null], [23897, 25364, null], [25364, 27054, null], [27054, 28206, null], [28206, 29321, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29321, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29321, null]], "pdf_page_numbers": [[0, 1814, 1], [1814, 4206, 2], [4206, 6208, 3], [6208, 8283, 4], [8283, 10545, 5], [10545, 12773, 6], [12773, 13758, 7], [13758, 
14515, 8], [14515, 16797, 9], [16797, 18486, 10], [18486, 20893, 11], [20893, 21816, 12], [21816, 23897, 13], [23897, 25364, 14], [25364, 27054, 15], [27054, 28206, 16], [28206, 29321, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29321, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
4aa4121e289ac804f9a7ee5bce721a07fe1d2f3d
m3pl: A Work-FLOWS ontology extension to extract choreography interfaces

Armin Haller and Eyal Oren
Digital Enterprise Research Institute (DERI) Galway, Ireland
firstname.lastname@deri.org

Abstract. Cross-organisational interoperability is a key issue for success in B2B e-commerce applications. To achieve this interoperability, choreography descriptions are necessary that describe how the business partners can cooperate. In existing approaches, these choreography descriptions are independent of the internal workflows of the partners. We present a framework for extracting choreography interface descriptions from internal workflow models. Our approach comprises two steps: first we map internal workflow models into an intermediary formal model, then we generate choreography interfaces from it. In this paper we present m3pl, an ontology extension based upon the First-Order Logic Ontology for Web Services (FLOWS) [2]. The extension provides relations to model workflow views and choreography interfaces.

1 Introduction

Organisations offer business functionalities to their customers and implement these functionalities in their business processes. For years, organisations have used Workflow Management Systems (WfMSs) to describe and execute their business processes [6]. Underlying these WfMSs are different workflow languages with many different metamodels. These workflow languages vary in the available modelling constructs and in the semantics of their constructs. To capture these semantics and to allow interoperability of WfMSs, the Process Specification Language (PSL) [16] was developed. PSL is an ontology that defines workflow concepts and their semantics. Various extensions have been developed (as part of the PSL standard), including the First-Order Logic Ontology for Web Services (FLOWS) [2] for modelling (compositions of) Web Services.

With the advent of Service Oriented Computing [13], organisations started to expose their business functionality explicitly as reusable and composable services using standardised protocols such as WSDL and SOAP. Web Services abstract the access to the business functionality from the specifics of programming languages. For using these services, organisations provide choreography descriptions written in languages such as WS-CDL [11], Abstract BPEL [20] or ebXML CPP [10]. A choreography describes the message exchange patterns employed by a Web Service interface. These patterns describe how consumers should interact with the Web Service; they can be described from a global (collaboration) viewpoint or from a local (participant) viewpoint. We will use the term *choreography* for the global viewpoint and *choreography interface* for the participant's viewpoint.

A fundamental limitation of current approaches to modelling choreographies is their independence from the underlying workflow definitions. Although a few recently published works address the correlation between a choreography interface and its underlying workflow, current approaches do not focus on an automated mapping between them. This independence leads to two problems: (1) if any change occurs in the internal workflow model, choreography descriptions have to be manually synchronised with the workflow definition, and (2) it is not possible to automatically verify the consistency of internal workflow descriptions and external choreography interfaces.
This paper presents a framework for combining internal workflow definitions and external choreography descriptions; an overview is shown in Fig. 1. With the framework one can semi-automatically generate choreography interfaces in various languages from workflow models in various languages. The framework is based on PSL [16], an ontology for capturing business processes, and FLOWS [2], an extension to PSL for Web Service interactions.

![Fig. 1. Relating workflow models to choreography interfaces](image)

The paper is structured as follows: based on a motivating RosettaNet collaboration example described in section 2, we analyse the requirements for our framework in section 3. We present our ontology in section 4. In section 5 we outline the methodology to follow to map from the internal workflow model to m3pl and to extract different choreography interfaces. Finally, we discuss related work in section 6 and conclude in section 7.

---

1 The choreography interface is also called behavioural interface by Dijkman and Dumas [5] or abstract process in BPEL4WS [20].

2 Motivating Example

In this section we present an example cross-organisational collaboration. We illustrate the problems that companies face when designing collaborative business processes with a request-for-quote (RFQ) process.

2.1 Current situation

An automotive parts vendor implements and executes his internal processes with IBM Websphere MQ Workflow (for our analysis we have used v3.4 of the product). One of the vendor's processes concerns the processing of requests for quotes. Figure 2 shows a simplified view of this process modelled in MQ Workflow. The symbols on the left of the picture denote a source and a sink node and represent the start and end of the MQ Workflow process model. Dashed arrows show data transferred between activities and solid arrows denote the control flow.

![Fig. 2. IBM MQ Workflow RFQ](image)

The process starts with an RFQ from a customer. The vendor checks whether the requested part, say an electric generator, is available in stock and can be delivered within the time specified. If the product is available the vendor prepares a quote, otherwise he returns a referral including the reason for non-delivery.

2.2 Preferred situation

The vendor wants to automate the collaboration with his partners. This would minimise the manual labour by having partners directly invoke interfaces to his internal WfMS. An example of such automation is the initial data input: currently this data is manually entered into the system; the goal of the vendor is for this input to come directly from the external business partner. To enable automatic collaboration, the vendor needs to describe the public view on his business processes. To comply with industry standards, this public process should conform to the standardised RosettaNet choreography interface PIP 3A1, which describes a request for quotation. Figure 3 shows a RosettaNet collaboration and the internal process model described above in a UML activity diagram. Public activities (the RosettaNet PIP 3A1) are displayed in black and private activities in white. The seller's choreography is formed by the black activities in the right swimlane and the buyer's choreography by the black activities in the left swimlane, respectively.

![Fig. 3. External Process (RosettaNet PIP)](image)
In this example the internal workflow is straightforward and, for the purpose of simplification, it is already aligned to an external standard process in terms of a RosettaNet PIP 3A1. Thus it is not difficult to model the external part of the process in any choreography description language. However, in reality the processes can be significantly more complex, and automatic extraction of choreography interfaces is desired. In order to automatically extract the choreography interface, the internal business process has to be extended with information specific to external processes, identified in the following section. Subsequently, the model should be extracted to a choreography description language. These features are currently not offered by MQ Workflow or any other WfMS.

---

3 Requirements Analysis

We can derive four basic requirements from the above collaboration scenario. They also reflect the requirements on a choreography language identified in [1].

1. **Model internal workflow**: we need to model the internal workflow (shown in figure 2) of the business partner whose choreography interface we want to generate (the supplier in this example). The internal workflow has to be formally specified to describe the business processes unambiguously. A mere syntactic model could lead to inconsistent interpretations; e.g. a `split` can have different meanings in different models.

2. **Model choreography-related concepts in the workflow**: to generate the choreography interface from the internal workflow we need to add additional annotations. These annotations (such as the visibility of activities or the roles of partners) are not part of the internal workflow, because they are of no significance for workflow enactment, but only for a cross-organisational choreography. It is necessary to annotate:

(a) the **choreography interface** as a partial view on the internal workflow of the service provider. Choreography interfaces are currently modelled in a multitude of languages. These languages are either **task-flow based** (e.g. WS-CDL, BPEL4WS) or **dependency based** (e.g. WSMO Choreography, OWL-S Process Model). Thus the choreography interface model has to be capable of capturing both modelling alternatives.

(b) the **visibility** of tasks: some tasks in a collaboration are **private**, others are **public**. Also, some tasks might be publicly visible to one participant but private to another. The generated choreography interface for one partner should only include the tasks marked as visible to him.

(c) the **role** a party can play when engaging in collaborations. A role defines the observable behaviour a party exhibits in order to collaborate with other parties. A "buyer" role, for example, is associated with the purchase of goods or services, and the "seller" role with providing those goods or services.

(d) the **direction of the communication**, which represents the communication route in a specific interaction and constrains what roles have to be adopted by the participants. A wholesaler, for example, might play the "seller" role in one interaction and the "buyer" role in another. A direction relation requires a sending and a receiving **participant**.

(e) **messages**. As can be seen in figure 3, messages are used to transfer data between activities. The explicit representation of messages is commonly not part of workflow models.
Even if this fundamental approach to modelling data flow is possible in the underlying workflow model, it is only used to transfer data internally between activities. In the case of a collaboration, these messages are sent between the participants and usually have a message type and some payload associated with them.

(f) the **transactional boundaries** of activities, to facilitate recovery in the event of a participant failure. The model should allow the definition of transactional blocks that contain one or many activities to be followed when the effects of a service need to be reversed.

3. **Construct choreography interface from internal workflow:** this is the requirement that drives the framework: the internal workflow model of a particular business process should be reused when constructing a choreography interface, and this process should be automated. Automation requires that mediators to the different choreography specification representations are available.

4. **Validate compatibility of choreography interface to internal workflow:** there are several cases where a pre-existing choreography interface has to be validated against an existing workflow model, for example when partners use a standardised choreography and extract the choreography interfaces of the participants from this agreement. A participant might very well already have a workflow model implemented for his business functionality; it is then necessary to verify whether the existing workflow model is compatible with the choreography interface (behavioural equivalence).

4 Ontology for Choreography Interfaces

In what follows we describe the relations in m3pl capturing the requirements identified in the previous section. Our ontology is an extension to PSL [16] modelled in a first-order language. Due to space limitations we do not include the axioms constraining the relations described below. However, where possible we explain how a relation is constrained by the primitive lexical relations axiomatised in PSL-Core.

4.1 Introducing m3pl

To model the internal workflow we base our model on PSL [16]. PSL follows a layered approach in its language design, which gives us the resources to express information involving concepts that are not part of PSL-Core. Thus we can represent an arbitrary workflow model in PSL by introducing extensions which are either defined by relations in PSL-Core or by axioms constraining the interpretation of each new language construct. The first requirement on the relations associated with the choreography model in m3pl is to encompass the two prominent modelling primitives: first, we have to be able to extract to different task-flow based choreography description languages, i.e. Abstract BPEL [20], WS-CDL [11] and ebXML CPP [10], and second, to dependency-based, ontology-based choreography descriptions, i.e. WSMO Choreography [18] and the OWL-S Process Model [12]. PSL provides relations to incorporate both workflow modelling primitives.

The m3pl extension offers a model to describe the choreography interface of some internal workflow model, where the choreography interface represents a model of some functionality (i.e. services). The functional entity in m3pl is a member of the set of such services in the universe of discourse of the interpretation. Services are reusable behaviours within the domain and relate to activities in PSL. A service occurrence models an occurrence of a PSL complex activity that is associated with the service.
```prolog
service(?functional_entity)
service_activity(?functional_entity,?activity)
service_occurrence(?functional_entity,?occurrence)
```

**Listing 1.1. Service Relations**

The views extension defines a relation that restricts the visibility of specific activity occurrences to a certain role and thus creates different views [4, 17] on a workflow model. The `visible(?occurrence,?role)` relation associates an activity occurrence with a role. By relating a participant to a specific role, the visibility of activity occurrences is guaranteed to be constrained to the defined business partner only.

```prolog
visible(?occurrence,?role)
```

**Listing 1.2. Visibility Relation**

**Roles** define the conversational relationship between two or more partners by defining the part played by each of them in the conversation. The `partner_link(?role,?functional_entity)` relation models such conversational relationships. The `participate(?agent,?role)` relation is used to relate an organisation to a role that it is playing in a specific collaboration.

```prolog
partner_link(?role,?functional_entity)
participate(?agent,?role)
```

**Listing 1.3. Conversational Role Relations**

A key element in the choreography description languages identified above is the notion of **messages**. Since there exist different strategies of data passing in commercial workflow systems and workflow models, we offer relations that allow modelling all three strategies identified in [14]. Data is modelled with predicates and terms in a first-order language. They act as fluents whose values may change as the result of service occurrences. Similar to FLOWS, we use the `described_by` relation to associate a message_type with a fluent. Multiple fluents might be associated with one message_type, which should be interpreted as their conjunction. Further, we allow a fluent to be associated with a channel.

The `send` and `receive` relations are used to "transfer" fluents from one activity occurrence to the next. Both relations are associated with the `participates_in` relation of PSL, which is used to constrain which objects are involved in a particular occurrence of an activity. In this data passing modelling approach, an occurrence of an activity thus cannot begin without first receiving, and cannot send before it ends. The `read` relation is similar to receive, but with a weaker ontological commitment on the occurrence: it is not required that a send occurrence precede the occurrence associated with the read relation. Channels are used to model message-based communication as used in Web Services. We adopt the relations offered in FLOWS [2]. However, we do not relate channels to service occurrences, but to activity occurrences. Channels are a way to model explicit data passing, but they are not required to exist, since fluents can be related to activity occurrences via the read relation.
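As a concrete reading of this data passing constraint (an activity occurrence cannot begin before receiving and cannot send before it ends), the following minimal Python sketch is our own illustration, not part of m3pl or PSL; the event encoding is an assumption. It checks whether a linear trace of events respects the ordering:

```python
# Minimal illustrative check of the send/receive ordering constraint:
# an occurrence must 'receive' before it 'begin's and must 'end' before
# it 'send's. The (event, occurrence) trace encoding is an assumption.

def respects_data_passing(trace):
    """Return True if the temporally ordered trace satisfies the constraint."""
    received, ended = set(), set()
    for event, occ in trace:
        if event == "receive":
            received.add(occ)
        elif event == "begin" and occ not in received:
            return False  # the occurrence began before receiving its fluents
        elif event == "end":
            ended.add(occ)
        elif event == "send" and occ not in ended:
            return False  # the occurrence sent before it ended
    return True

ok = [("receive", "occA"), ("begin", "occA"), ("end", "occA"), ("send", "occA")]
bad = [("begin", "occA"), ("receive", "occA")]
print(respects_data_passing(ok), respects_data_passing(bad))  # True False
```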
In order to capture dependency-based workflow models, every atomic activity occurrence can be associated with preconditions and effects. The occurrence of an atomic activity therefore transforms an initial state of the world (the preconditions) into a final state of the world after the execution (the effects). Essentially these two relations are similar to the send and receive relations described earlier, since preconditions and effects are also represented by fluents in the ontology; the only difference is that they are not associated with a channel. They are, however, a different concept in the real world, since preconditions and effects are not necessarily constraints on data, but might be constraints on the existence of objects.

A sequence relation specifies that all subactivity occurrences of a complex activity are totally ordered. It corresponds to a `soo_precedes` relation in the Duration and Ordering Theory of PSL. The subactivity occurrences of a split activity (corresponding to a flow construct in BPEL) are constrained by two relations from the PSL ontology: one subactivity occurrence `soo_precedes` any number of subactivity occurrences, while these are strongly parallel to each other. The IfThenElse activity is a nondeterministic activity such that the subactivity occurrences are constrained by the state conditions that hold prior to the activity occurrence; the use of IfThenElse is equivalent to a conditional activity in PSL. The subactivity occurrences of a LoopUntil activity occur multiple times until the state condition is satisfied. It is equivalent to a conditional activity in the PSL ontology whose occurrences are repetitive, whereas the occurrence trees have a different structure depending on the cardinality. The subactivity occurrences of a wait activity delay the process for a certain time period or until a time point has passed.

**Error handling** in collaborative interactions is as important as transactional support in local application environments. The use of ACID transactions [7] is not feasible in collaborations, because locks on some activities cannot be maintained for the duration of an asynchronous interaction. Error handling therefore relies heavily on the well-known concept of compensation: if some state occurs, alternate activities are performed which reverse the effects of a previous activity that was carried out and caused the error. To model such situations, we add failure handling activities, which are conditioned over an exception state raised by an earlier activity occurrence.

5 A methodology to extract choreographies

In the following section we show the applicability of the m3pl ontology extensions by outlining the methodology to follow when extracting choreography descriptions from internal workflow definitions. We apply the methodology to our example introduced in section 2, where we focus on the supplier.

1. First the syntactic model (cf. figure 2) underlying most Workflow Management Systems has to be lifted to the PSL/FLOWS ontology. In order to generate it automatically, mapping rules are required. This is not a trivial task, since the generic mapping rules have to capture the operational semantics of the underlying WfMS. In our scenario the supplier models and enacts its business processes with IBM MQ Workflow. The workflow model is serialised in a proprietary description file (.fdl). We have identified the mapping rules necessary to translate our example workflow; however, it is not in the scope of this paper to define a generic mapping framework for arbitrary workflows in IBM MQ Workflow.
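As an illustration of what such mapping rules do, the following Python sketch is our simplified stand-in: it does not parse the actual serialisation format, but translates a toy workflow graph into PSL-style facts; all node and relation names are assumptions made for illustration.

```python
# Our simplified illustration of step 1: rule-driven translation of a toy
# workflow graph into PSL-style facts. A real translator would parse the
# WfMS's own serialisation and capture its operational semantics.

workflow = {
    "name": "RFQWorkflow",
    "activities": ["ProcessRFQ", "CheckProductAvailability"],
    "decisions": [
        # (condition fluent, activity on success, activity on failure)
        ("productListedOK", "PrepareQuoteResponse", "PrepareReferral"),
    ],
}

def to_psl_facts(wf):
    """Emit PSL-style facts (as plain strings) for the toy workflow graph."""
    facts = [f"activity({wf['name']})"]
    for act in wf["activities"]:
        facts.append(f"subactivity({act}, {wf['name']})")
    for fluent, on_true, on_false in wf["decisions"]:
        # A decision point maps to a conditional activity whose branch
        # depends on the state holding after the preceding occurrence.
        facts.append(f"state({fluent})")
        facts.append(f"subactivity({on_true}, {wf['name']})")
        facts.append(f"subactivity({on_false}, {wf['name']})")
    return facts

for fact in to_psl_facts(workflow):
    print(fact)
```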
Listing 2.1. shows a snippet of the model from figure 2, i.e. the *Check Product Availability* activity followed by a decision point and either the *Prepare Referral* or the *Prepare Quote Response* activity. The full listing can be found at [http://m3pe.org/ontologies/PSLRFQ.kif](http://m3pe.org/ontologies/PSLRFQ.kif).

```
state(productListedOK)
state(productListedFailed)

∀(?occRFQWorkflow) occurrence_of(?occRFQWorkflow, RFQWorkflow) ⇒
  ∃(?occProcessRFQ, ?occCheckProductAvailability)
    (occurrence_of(?occProcessRFQ, ProcessRFQ) ∧
     occurrence_of(?occCheckProductAvailability, CheckProductAvailability) ∧
     subactivity_occurrence(?occProcessRFQ, ?occRFQWorkflow) ∧
     subactivity_occurrence(?occCheckProductAvailability, ?occRFQWorkflow) ∧
     root_occ(?occProcessRFQ, ?occRFQWorkflow) ∧
     root_occ(?occCheckProductAvailability, ?occRFQWorkflow)) ∧

  ((holds(productListedFailed, ?occCheckProductAvailability) ∧
    not(productListedOK, ?occCheckProductAvailability)) ⇒
      ∃(?occPrepareReferral)
        (occurrence_of(?occPrepareReferral, PrepareReferral) ∧
         leaf_occ(?occPrepareReferral, ?occRFQWorkflow))) ∧

  ((holds(productListedOK, ?occCheckProductAvailability) ∧
    not(productListedFailed, ?occCheckProductAvailability)) ⇒
      ∃(?occPrepareQuoteResponse)
        (occurrence_of(?occPrepareQuoteResponse, PrepareQuoteResponse) ∧
         leaf_occ(?occPrepareQuoteResponse, ?occRFQWorkflow)))
```

Listing 2.1. Snippet of the internal workflow in PSL/FLOWS

2. Next, the ontology instance representing a semantically equivalent model of the underlying workflow definition has to be annotated with choreography-specific constructs from m3pl. Since domain experts who know which parts of the workflow model have to be published to partners, and technology experts competent in defining the message exchange, are not necessarily familiar with formal frameworks (i.e. first-order logic), editor support is required to ease the annotation task. We are currently building a domain-specific GUI-based tool for annotating the extracted model with concepts defined in our ontology. In the context of this paper, however, we have manually annotated the generated ontology instance without tool support. The complete annotated model can be found at http://m3pe.org/ontologies/RFQm3pl.kif. This annotation comprises relations defined in section 4 capturing the collaborative role model, the visibility constraints, the message descriptions and their passing directions. Listing 2.2. shows the m3pl annotations added to the snippet of our internal workflow from Listing 2.1.

```
service(RFQProcessing)
service_activity(RFQProcessing, RFQWorkflow)
partner_link(Customer, RFQProcessing)
partner_link(Supplier, RFQProcessing)
participate(Bosch, Supplier)
...
∀(?occRFQWorkflow) occurrence_of(?occRFQWorkflow, RFQWorkflow) ⇒
  ∃(?occProcessRFQ, ?occCheckProductAvailability)
  ...
  visible(?occProcessRFQ, Customer) ∧
  visible(?occCheckProductAvailability, Supplier) ∧
  input_port(transmitRFQ, ?occProcessRFQ) ∧
  ...
  ⇒ ∃(?occPrepareReferral)
  ...
  visible(?occPrepareReferral, Customer) ∧
  output_port(transmitReferral, ?occPrepareReferral) ∧
  ...
  ⇒ ∃(?occPrepareQuoteResponse)
  ...
  visible(?occPrepareQuoteResponse, Customer) ∧
  output_port(transmitQuoteResponse, ?occPrepareQuoteResponse)
```

Listing 2.2. Annotation added to the extracted PSL/FLOWS model

3. Based on this ontological model, choreography interfaces for each partner in the collaboration can be generated. Most importantly, all activities marked in the previous step as private to the supplier are omitted from the choreography interface. The split modelled in the choreography interface published to the customer is abstracted in such a way that the evaluation of the condition is non-deterministic to the buyer; it is modelled in the choreography interface as follows:

(holds(True, ?occCheckProductAvailability) ∧ not(False, ?occCheckProductAvailability))

A minimal sketch of this projection step is given below.
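Assuming, for illustration only, that the annotated model is available as a list of (occurrence, role) visibility facts and a set of private condition fluents, this projection might be sketched in Python as follows (all names are ours, not part of m3pl):

```python
# Our illustrative sketch of step 3: project the annotated model onto one
# partner by dropping occurrences not visible to that partner's role and by
# making private split conditions opaque. The fact encoding is an assumption.

visibility = [
    ("occProcessRFQ", "Customer"),
    ("occCheckProductAvailability", "Supplier"),  # private to the supplier
    ("occPrepareReferral", "Customer"),
    ("occPrepareQuoteResponse", "Customer"),
]
private_conditions = {"productListedOK", "productListedFailed"}

def choreography_interface(role):
    """Occurrences visible to the role, with private conditions made opaque."""
    occurrences = [occ for occ, vis in visibility if vis == role]
    # The buyer only sees a non-deterministic choice, so every private
    # condition is replaced by an opaque placeholder.
    conditions = {c: "opaque" for c in private_conditions}
    return occurrences, conditions

print(choreography_interface("Customer"))
```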
4. If required, a multiparty choreography can be assembled. Since our model is based on a formal ontology, this matching process can take place on different levels of abstraction. The simplest matching algorithm compares the message type and the complementary message passing direction. More complex matching can include full reasoning over the first-order models, proving the equivalence of two choreography interface models.

5. Finally, choreography descriptions in existing languages such as WS-CDL, Abstract BPEL or ebXML CPP can be generated. Similar to step one, mapping rules have to be defined for each choreography description language. Unlike in step one, though, the mapping is unidirectional, since the choreography descriptions represent an abstraction omitting information necessary in the internal workflow definition.

An example choreography interface extracted from our model as a BPEL description is shown in listing 2.3. The interface starts with the definition of the partners, generated from the manually added annotations. The actual process starts at line 7 and contains the three workflow activities as invoke and receive operations. The check-product-availability activity, the split conditions and the internal data transfer are omitted from the choreography interface, since they were marked as private information.

Listing 2.3. Abstract BPEL description

```xml
 1  <wsdl>
 2    <plnk:partnerLinkType name="buyerSellerRelation">
 3      <plnk:role name="seller"> <plnk:portType name="rfqpw"/> </plnk:role>
 4      <plnk:role name="buyer"> <plnk:portType name="rfqpwCallback"/> </plnk:role>
 5    </plnk:partnerLinkType>
 6  </wsdl>
 7  <process name="RFQProcessing">
 8    <partnerLink name="buyerSellerRelation" partnerLinkType="lns:buyerSellerRelation"
 9                 myRole="seller" partnerRole="buyer"/>
10    <variables>
11      <variable name="rfqMessage" messageType="lns:rfq"/>
12      <variable name="quoteMessage" messageType="lns:quote"/>
13      <variable name="referralMessage" messageType="lns:referral"/>
14    </variables>
15    <sequence name="main">
16      <receive name="processRFQ" partnerLink="buyerSellerRelation"
17               portType="lns:rfqpw" operation="initiate" variable="rfqMessage"/>
18      <assign><copy>
19        <from opaque="yes"/> <to variable="condition" property="xsd:boolean"/>
20      </copy></assign>
21      <switch name="quoteDecision">
22        <case condition="bpws:getVariableData('condition') = true">
23          <invoke name="prepareReferral" partnerLink="buyerSellerRelation"
24                  portType="lns:rfqpwCallback" operation="onResult"
25                  inputVariable="referralMessage"/>
26        </case>
27        <otherwise>
28          <invoke name="processQuote" partnerLink="buyerSellerRelation"
29                  portType="lns:rfqpwCallback" operation="onResult"
30                  inputVariable="quoteMessage"/>
31        </otherwise>
32      </switch>
33    </sequence>
34  </process>
```

6 Related Work

Our work is most closely related to several approaches to views on process models, i.e. [3, 4, 17, 15, 21]. Chebbi et al. [3] propose a view model based on Petri nets. They introduce cooperative activities, which can be partially visible to different partners. The approach is validated on mappings from two different WfMSs. However, the model requires n² mappings and does not explain how to model the data aspect, i.e. the message transfer between partners. Chiu et al. [4] present a cross-organisational meta model which is implemented in XML.
Similar to the cooperative activities in [3], the model provides so-called cross-organisational communications, which allow message transfer and its direction to be defined. The model is implemented in an extension to the ADOME-WfMS, called E-ADOME. It deals only with sequential activities in the abstracted view, does not tackle an integrated approach to choreography extraction, and requires its specific model to be used in the E-ADOME tool. Schulz and Orlowska [17] introduce a Petri-net based state transition approach that binds states of private workflow tasks to their adjacent workflow view-task, where existing workflows are augmented by means of one or more activities or sub-workflows of an external workflow. The model is conceptualised in a supporting architecture. The approach identifies mappings in its conceptual architecture, but it does not describe how to integrate different workflow models. Further, the approach abstracts from the data aspect. Sayal et al. [15] introduce service activities (which represent trade partner interaction) as workflow primitives, but their approach is specific to one workflow modelling tool and addresses neither workflow integration nor choreography interface extraction. Zhao et al. [21] define a relative workflow model representing the view of one partner on local workflows of another partner. They present composition rules describing how to generate the relative workflow and a simple matching algorithm to connect two local workflow processes. Similar to the other approaches, it is not meta-model independent.

Several approaches address interoperability issues between Workflow Management Systems (WfMS), such as Mobile [9], Meteor [19] and CrossFlow [8]. However, all of these approaches require a pre-established partner agreement on the semantics of the process models. Further, they were all proposed before the advent of Service Oriented Architectures and therefore do not deal with standard choreography description languages.

7 Conclusion

In existing approaches, choreography descriptions are independent of the internal workflows of the partners and have to be manually mapped. We presented m3pl, an ontology extension to PSL and FLOWS that formally captures choreography-specific information. The ontology extension together with PSL can act as a connecting ontology to integrate different workflow models and subsequently extract external process models. We have shown how the framework can be used to extract a choreography interface of an example workflow in a RosettaNet collaboration. This initial validation is based on the translation of an example workflow represented in IBM WebSphere MQ Workflow to PSL, which is then manually annotated with relations offered in the m3pl extension in order to further extract a BPEL process. One direction of our future work is to check the equivalence of choreography models: given a choreography interface, it is desirable to verify whether it is compatible with the choreography interface of a partner and, if they are indeed compatible, to construct a multiparty choreography.

Acknowledgment

This material is based upon work supported by the Science Foundation Ireland under Grant No. SFI/04/BR/CS0694.

References
{"Source-Url": "https://aran.library.nuigalway.ie/bitstream/handle/10379/557/m3pl-SBPM2006.pdf;jsessionid=3FCFB20227745CC83A866693BBA54034?sequence=1", "len_cl100k_base": 6388, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 34930, "total-output-tokens": 8318, "length": "2e12", "weborganizer": {"__label__adult": 0.00032138824462890625, "__label__art_design": 0.0005631446838378906, "__label__crime_law": 0.0004603862762451172, "__label__education_jobs": 0.001010894775390625, "__label__entertainment": 0.00012445449829101562, "__label__fashion_beauty": 0.0001964569091796875, "__label__finance_business": 0.0021953582763671875, "__label__food_dining": 0.0003883838653564453, "__label__games": 0.000518798828125, "__label__hardware": 0.0007476806640625, "__label__health": 0.0005393028259277344, "__label__history": 0.00033283233642578125, "__label__home_hobbies": 8.976459503173828e-05, "__label__industrial": 0.0006489753723144531, "__label__literature": 0.0004489421844482422, "__label__politics": 0.0004191398620605469, "__label__religion": 0.0003917217254638672, "__label__science_tech": 0.0986328125, "__label__social_life": 0.00014913082122802734, "__label__software": 0.0386962890625, "__label__software_dev": 0.85205078125, "__label__sports_fitness": 0.0002684593200683594, "__label__transportation": 0.0007410049438476562, "__label__travel": 0.00025582313537597656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34531, 0.03269]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34531, 0.26419]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34531, 0.88248]], "google_gemma-3-12b-it_contains_pii": [[0, 351, false], [351, 2833, null], [2833, 4808, null], [4808, 6184, null], [6184, 7991, null], [7991, 10953, null], [10953, 13789, null], [13789, 16684, null], [16684, 18417, null], [18417, 20998, null], [20998, 23219, null], [23219, 25414, null], [25414, 28535, null], [28535, 31290, null], [31290, 34531, null]], "google_gemma-3-12b-it_is_public_document": [[0, 351, true], [351, 2833, null], [2833, 4808, null], [4808, 6184, null], [6184, 7991, null], [7991, 10953, null], [10953, 13789, null], [13789, 16684, null], [16684, 18417, null], [18417, 20998, null], [20998, 23219, null], [23219, 25414, null], [25414, 28535, null], [28535, 31290, null], [31290, 34531, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34531, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34531, null]], "pdf_page_numbers": [[0, 351, 1], [351, 2833, 2], [2833, 4808, 3], [4808, 6184, 4], [6184, 7991, 5], [7991, 10953, 6], [10953, 13789, 7], [13789, 16684, 8], [16684, 18417, 9], [18417, 
20998, 10], [20998, 23219, 11], [23219, 25414, 12], [25414, 28535, 13], [28535, 31290, 14], [31290, 34531, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34531, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
de6ce4163306f884191d181f975fc0250907ef45
Challenges in Preserving Intent Comprehensibility in Software

Valentino Vranić, Jaroslav Porubän, Michal Bystrický, Tomáš Frťala, Ivan Polášek, Milan Nosáľ, and Ján Lang

1 Institute of Informatics and Software Engineering, Faculty of Informatics and Information Technologies, Slovak University of Technology in Bratislava, Ilkovičova 2, 84216 Bratislava, Slovakia, vranic@stuba.sk, tomas.frtala@stuba.sk, michal.bystricky@stuba.sk, jan.lang@stuba.sk

2 Department of Computers and Informatics, Faculty of Electrical Engineering and Informatics, Technical University of Košice, Letná 9, 04200 Košice, Slovakia, jaroslav.poruban@tuke.sk, milan.nosal@tuke.sk

3 Gratex International, a. s., Galvaniho 17/C, 821 04 Bratislava, Slovakia, ipo@gratex.com

Abstract: Software is not only difficult to create, but it is also difficult to understand. Even the authors themselves become unable, in a relatively short time, to readily interpret their own code and to explain what intent they followed with it. Software is created with the goal of satisfying the needs of a customer or directly of the end users. Out of these needs comes the intent, which is relatively well understandable to all stakeholders. Once translated into specialized modeling techniques (typically the UML language) or into the code itself, use cases and other high-level specification and analytical artifacts almost completely dissolve in common software development. Along with dedicated initiatives to improve preserving intent comprehensibility in software, such as literate programming, intentional programming, aspect-oriented programming, or the DCI (Data, Context and Interaction) approach, this issue is a subject of contemporary research in the revived area of engaging end users in software development, which has its roots in Alan Kay's vision of a personal computer programmable by end users. From the perspective of the reality of complex software system development, the existing approaches solve the problem of losing intent comprehensibility only partially, with a simplified and limited perception of the intent, and they do so only at the code level. This paper explores the challenges in preserving intent comprehensibility in software. A thorough treatment of this problem requires a number of techniques and approaches to be engaged, including preserving use cases in the code, dynamic code structuring, executable intent representation using domain specific languages, advanced UML modularization, 3D rendering of UML, and representation and animation of organizational patterns.

1 Introduction

Software is not only difficult to create, but it is also difficult to understand. Even the authors themselves become unable, in a relatively short time, to readily interpret their own code and to explain what intent they followed with it, i.e., what they wanted to achieve. Similar situations arise with models and in particular with more detailed, design models. This problem is commonly addressed by introducing another artifact into software development: documentation. This brings in a further complex problem: the need to keep documentation up to date. In the case of internal documentation (comments), the traceability of the artifacts the documentation is related to has to be ensured, too. Furthermore, a considerable effort is needed to initially create the documentation, inevitably with a disputable and difficult to control quality, because its consistency, as opposed to that of a program, cannot be tested by actual execution.
This problem can also be perceived in a more global manner. Software is created with the goal of satisfying the needs of a customer or directly the needs of the end users. Out of these needs comes the intent, which is relatively well understandable to all the stakeholders. Once translated into specialized modeling techniques (typically the UML language) or into the code itself, use cases and other high-level specification and analytical artifacts almost completely dissolve in common software development. Understanding the intent expressed in code has been identified as one of the key problems in software development that has a direct impact on creating programming languages and related tools [36]. This is important not only in software maintenance, but also with respect to reuse: comprehending the intent realized by a given component is necessary for its reuse.

Along with dedicated initiatives to improve preserving intent comprehensibility in software, such as literate programming, which subordinates code to documentation [34], intentional programming, which aimed at enabling direct creation of appropriate abstractions by the programmer [57], aspect-oriented programming, which makes it possible to gather parts of the code into modules by use cases [25, 26], or the DCI (Data, Context and Interaction) approach, which makes it possible to partially preserve use cases [14], this issue is a subject of contemporary research in the revived area of engaging end users in software development [6], which has its roots in Alan Kay's vision of a personal computer programmable by end users. From the perspective of the reality of complex software system development, the existing approaches solve the problem of losing intent comprehensibility only partially, with a simplified and limited perception of the intent, and they do so only at the code level.

This paper explores the challenges in preserving intent comprehensibility in software. A thorough treatment of this problem requires a number of techniques and approaches to be engaged. Section 2 explains the dimensions of the intent in software. Sections 3–5 explore the possibilities of preserving intent comprehensibility from the perspective of each of the basic software constituents. Section 6 discusses the related work. The paper is closed by conclusions and an indication of further work directions.

2 Dimensions of Intent

The ultimate form of software is the executable code, which is mostly text based. The level of comprehensibility of the intent expressed by the code determines the quality of the resulting software system: better intent comprehensibility simplifies error discovery and lessens the divergence from the functionality the software system should have provided.

Software development is accompanied by the creation of many artifacts that as such do not contribute to its functionality and thus fairly quickly become outdated. These additional artifacts are usually conjointly referred to as documentation. A special position among these is held by models, as non-code artifacts predominantly expressed in a graphical form and used to reason about the software system being developed. As such, they can be perceived as a transient form towards the code, which is, after the code has been created, condemned to becoming outdated. Models can also be used to generate code or other models, or they can even be executable. In any case, it is important for the intent in models to be comprehensible, too.
The originators of the intent are people, and losing the intent in organizing people is transferred to the software being developed by these people. This is a direct consequence of Conway's law [11]: Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations. This is not only a problem of the initial organization of the people. The people remain the key factor throughout software development, including the maintenance phase. They tend to literally impersonate the software system parts they develop, and the effectiveness of resolving development problems depends directly on the effectiveness of the communication among the developers. This is where agile and lean approaches, which favor face-to-face contact among people over any kind of formal communication, save significant time and resources [14].

Furthermore, the notion of intent in software is relative and it can be observed in the following dimensions:

- **Stakeholders** (intent originators): from whose perspective the intent is observed, starting with the end user and moving towards the programmer
- **Level**: at what level of construct granularity the intent is expressed, starting with the lower level constructs (e.g., conditional statements or loops), via covering constructs (e.g., methods, classes, etc.), conceptual constructs and software system parts (different levels of subsystems), up to the overall software system, including people organization
- **Expression**: how the intent is expressed, starting with an idea or mental model, via informal notes and further forms of textual description, including requirements lists and use cases, up to graphical model representations, formal specification, and, finally, the executable form itself

The entire intent space as determined by these dimensions, i.e., *stakeholders–level–expression*, is huge. With respect to the basic software constituents, three particular areas of interest can be identified therein (see Figure 1):

- With respect to code, it is reasonable to observe the intent via its representation in an executable form, from the code level up to the covering constructs level
- With respect to models, the focus should be on expressing the intent by graphical models spanning from the conceptual construct level up to the software system level
- With respect to people, the most interesting is the people organization as such and the people organization in projects, for which the employment of textual description and graphical models to express the intent appropriately should be explored

All three areas span the whole stakeholder dimension since, in general, any stakeholder can be involved in any software artifact. This is at the heart of the agile and lean approaches to software development with their concept of cross-functional roles. It may seem strange for end users to be connected with code, but a huge number of people in the USA (four times the number of professional programmers there) reported they do programming at work [6]. With techniques that shape code according to use cases and domain-specific languages, addressed in the next section, end user programming becomes even more relevant. Sections 4 and 5 address the remaining two areas of interest in exploring the intent in software.
3 Code Perspective

As we will see in this section, the intent expressed in use cases can actually survive in code (Section 3.1) with application domain abstractions supported by domain specific languages (Section 3.2), while the differences in the mental models of individual stakeholders require abandoning the fixed code structure (Section 3.3).

![Figure 1: Areas of interest in exploring the intent in software](image)

3.1 Preserving Use Cases

From the perspective of preserving use cases in code, there are two approaches of particular interest: aspect-oriented programming, which makes it possible to collect the code parts into modules corresponding to use cases [25, 26], and the DCI approach [14], which makes it possible to partially preserve use case flows, i.e., sequences of steps in use cases, known also as *flows of events* or simply *flows*¹, albeit they remain fragmented by roles. In common object-oriented programming, the client or end user intent diminishes from the code. The parts of use cases end up in the different classes by which they are realized. The ability to affect one use case by another one without having any reference to the affecting use case in the affected one, known as the extend relationship, is also lacking in common object-oriented programming.

What should be explored is how appropriate design patterns, the frameworks based on these design patterns, and preprocessing techniques can ensure not only that the code is modularized according to use cases, but also that use case flows (the actual steps) are preserved in the code, and how to achieve this in a form close to the natural language. The modification of use cases, which usually happens only in code without reflecting the changes in the use case model, would consequently be readable to stakeholders that have no programming knowledge. This would even open a possibility for these stakeholders to directly modify the program, at least at the highest level. It is necessary to investigate the possibilities of expressing individual steps in use case flows and the possibility of their formal interpretation without fully formalizing how they are expressed. For this, formal processing of informal meaning by abstract interpretation could be considered [35]. By now, it has been demonstrated, though only in the Python programming language, how use case flows can be preserved in code [7]. The modularization of use cases using design patterns in the PHP programming language has been addressed, too [23], but this approach does not preserve use case flows. A flavor of the idea of use case flows surviving in code is sketched below.
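The following is our minimal sketch of this idea, not the actual technique of [7]: a use case flow kept in code as an explicit, top-to-bottom readable sequence of named steps, with all step bodies being illustrative stubs.

```python
# A flavor of keeping a use case flow readable in code; our minimal sketch,
# not the actual technique of [7]. Step bodies are stubs for illustration.

def step(description):
    """Attach a natural language description to a use case step."""
    def wrap(fn):
        fn.description = description
        return fn
    return wrap

@step("The customer places an order")
def place_order(context):
    context["order"] = "order #1"

@step("The system checks the availability of the ordered items")
def check_availability(context):
    context["available"] = True

@step("The system confirms the order to the customer")
def confirm_order(context):
    context["confirmation"] = "confirmed: " + context["order"]

# The main flow of the use case, readable from top to bottom:
place_order_flow = [place_order, check_availability, confirm_order]

context = {}
for s in place_order_flow:
    print(s.description)
    s(context)
```

Even in this naive form, the flow itself stays visible as a single list of steps whose descriptions read like the original use case, which is the property the approaches discussed above aim to preserve.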
3.2 Domain Specific Languages

Domain specific languages are created with the goal of bringing the way the code is expressed closer to the problem domain. Programmers use problem domain notions directly in the solution, while tools communicate with them in this notional apparatus, too [42]. Domain specific languages play a key role in model driven software development. Because of this, the orientation towards domain specific languages in searching for ways to express the intent in software comes as a natural choice. Despite a growing popularity of domain specific languages, a number of questions remain unanswered in this area. These include language composition [18], effective sentence creation using projectional and hybrid editors [62], and assessment of the usability of domain specific languages [4].

Domain specific languages can be used to achieve a readable and executable intent representation that makes it possible to propagate the notions from use cases, i.e., the application domain, to code, i.e., the solution domain. In other words, domain specific languages raise the level of abstraction of the solution domain closer to the application domain abstractions, making it possible for programmers to express the solution in an application domain fashion. There are three promising directions in the research of preserving intent comprehensibility with domain specific languages. The first one is to come closer to the ideal case in which a domain specific language will be both the application domain language and the solution domain language. A domain specific language can have textual, but also graphical (visual) concrete representations, which constitutes the grounds for the second direction of research in preserving the intent with domain specific languages: overcoming the gap between the model and the code, while retaining executability. The third direction aims at simplifying the creation of domain specific languages and their evolution, which takes place along with the evolution of the program itself (constituting a program–language coevolution).

3.3 Dynamic Code Structure

A huge impediment to observing the intent is the fixed code structure. Different stakeholders need to see code in different ways in which, from their perspective, the intent is readable. However, these views cannot be based on a static code representation, but have to be modifiable as though they were the actual code representation. The challenge here is not only to design the necessary views, but also to enable creating further views directly by stakeholders.

The fixed code structure is a consequence of the economic aspects of software development. Developers are bound to choose exactly one code structure. This code structure corresponds to the mental model they have built upon their experience and knowledge, but also to the nature of the intent they have to realize. An alternative implementation that would employ some other code structure is in this case redundant and therefore economically inconvenient. Thus, the final code structure usually favors the one intent that the authors considered to be the most important. That intent can be anything, including technical intents such as software efficiency, software extensibility, etc.

The programmers that join a project during the course of its realization have to understand the existing code before they can manage to progress. In such a situation, the new programmers, as new stakeholders, have to adopt the mental model of the original programmers. This is difficult, since their own experience, knowledge, and preferences are in general only rarely close to those of the original programmers and thus constitute a mental barrier to the code comprehension. If, for example, the original programmers focused on system efficiency, while the new programmers prefer extensibility, the latter might not understand many of the decisions made by the original programmers, making it difficult for them to comply with these decisions.

The problem of the limitations of the static structure is in practice partially attacked by the built-in projections in integrated development environments (IDE).

---

¹ Sometimes *scenario* is used to denote a use case flow. This may be confusing since a scenario can stand for a particular path through a use case, which involves some or all steps of one or several use case flows.
Finding variable uses, which makes it possible to follow the scattering of a variable's use and thus the intent implemented by it, is one of these. Similarly, environments make it possible to follow selected intents that are not directly part of the executable code. One example is comments with the TODO prefix, which can be found and provided to stakeholders by the IDE in a navigable list with all the instances of a given comment.

Approaches are known that improve certain aspects in the context of the problem area of dynamic code structuring. A prominent example is a method for a faster orientation in code supported by a tool that displays the body of the method being called directly at the place of the call, without the need of explicit navigation [16]. Intentional code views from the perspective of architectural intents, based on logical metaprogramming, have been reported [41]. However, these views are not editable. Intents have been represented in the form of a graph abstraction, too [55]. Programming with so-called ghosts [8] is based on automatic creation of yet undefined entities used in the program, which are displayed in a separate, editable view. The approach has been implemented as a prototype in the form of an Eclipse plugin and as an extension to Smalltalk Pharo. Recording and automating design pattern application, treated, for example, by Kajsa [27], can also be perceived as a way of expressing the intent using metadata.

4 Model Perspective

Dynamic code structuring as addressed in Section 3.3 is conceptually applicable to graphical models, too. Providing different, modifiable views instead of only one, static view is highly related to aspect-oriented modularization, or to what is known as advanced modularization in general. There are some opportunities to achieve this in UML as a de facto standard in software modeling (Section 4.1). Employing the third dimension in representing software models can further improve intent comprehensibility (Section 4.2).

4.1 Advanced Modularization in UML

Despite extensive research in the area of aspect-oriented software development, expressing advanced modularization at the model level did not end up with a generally accepted approach. A very important approach in this direction is Theme [10], which is, similarly to newer approaches such as RAM [33], based on non-UML elements. Separation of concerns with clearly expressed interrelations naturally contributes to intent comprehensibility. The UML diagrams typically used in practice make this possible only to a limited extent. Therefore, it is necessary to investigate how advanced elements of UML, which usually have no straightforward counterparts in code, can help in expressing intent. In this sense, composite structure models, whose important elements are roles and their collaborations, are particularly interesting. Here, it is necessary to search for ways of expressing roles and their composition. Another UML concept directly interconnected with composite structure models are parameterized types.

4.2 3D Rendering of UML

Model rendering itself and the creation of alternative views can also enhance intent comprehensibility by reducing its fragmentedness. The third dimension can be employed here to simultaneously display and interconnect related parts of the model. For this, a complex 3D UML rendering support for the layout of class or module layers and their relationships has to be provided.
This is different from the standard package modularization, in which the relationships between package elements are not easily observed. The point is in maintaining the layers in a simulated 3D space in which it will be possible to create and observe elements and their relationships, along with the relationships between the layers as such. In this sense, the model could be structured according to use cases as intent bearers. Use cases define interaction and therefore are commonly modeled by sequence diagrams, which implicitly uncover the underlying structure necessary for the realization of use cases [26, 1]. To expose the structure directly, sequence diagrams can be easily converted into communication or object diagrams and displayed in their own layers. The class diagram automatically created out of the communication or object diagrams could be displayed in another layer, making the correspondence of the elements of these different diagrams, and their intent, obvious, provided their planar coordinates are preserved. The idea itself has been indicated earlier [47]. Current research efforts aim at realizing it [22]. This approach is applicable also to decoupling patterns and antipatterns in class diagrams, depicted schematically in Figure 2.

Several approaches to the 3D rendering of UML diagrams have been reported. However, each of these approaches targets only one type of UML diagram and none of them supports editable 3D views. X3D-UML [39] is an approach to the 3D rendering of UML state machine diagrams in movable hierarchical layers with the possibility of applying filtering. No appropriate tool for editing this view is available. GEF3D [17, 45] is a 3D framework based on Eclipse GEF (Graphical Editing Framework) developed as an Eclipse plugin. By using this framework, existing GEF-based 2D editors can easily be embedded into 3D editors. The main idea of this framework is to use the third dimension to visualize connections among several common (2D) UML diagrams, each of which is displayed in a separate layer parallel to other layers. GEF3D supports also orthogonal positioning of layers into virtual boxes [17], which makes inter-model connections clearly visible, but limits the number of layers. GEF3D views are non-editable. Moreover, the project has not been maintained since 2011. A different way of 3D rendering of UML diagrams is based on so-called geons [9], simple geometrical forms by which humans recognize more complex objects according to Biederman's recognition-by-components theory. In UML diagrams, a different geon is assigned to each kind of model element. According to this approach, by getting used to this mapping, even complex diagram structures become readily comprehensible.

![Figure 2: Decoupling patterns and antipatterns with a layered 3D rendering of a class diagram](image)

5 People Perspective

Appropriate ways of organizing people have been discovered in successful software development projects and captured as organizational patterns [15]. Approaches that will enable a better understanding of the intent of organizational patterns, individually and in combination, have to be explored. One possibility is their clarification using software modeling and UML in particular, including the 3D rendering of UML discussed in Section 4.2. This approach is applicable to expressing the organization of people in areas other than software development, too.
Alternatively, organizational patterns could be modeled in a virtual world in which it would be possible for people to try the roles featured in these patterns and experience the characteristic problem situations in a simulated environment. For example, in the Architect Also Implements pattern [15], a stakeholder could play the role of a programmer who has to understand the design of a software system in order to be able to implement it. The stakeholder could also play the role of a software architect who prepares the design without a clear idea of its implementation. The stakeholder could also experience each of these roles in a positive arrangement in which the architect cooperates with programmers and contributes to the implementation.

A virtual world does not necessarily mean virtual reality. That would surely be a benefit, but human imagination is capable of substituting this dimension when the important features of the content are captured. This phenomenon is known in videogames, among which those with an interesting story tend to endure despite a simpler graphical presentation. Thus, the essence is in creating the corresponding model of a typical situation solved by a given organizational pattern. In general, it is necessary to find an approach to transforming organizational patterns into such typical situations and to create a framework in which they could be readily expressed.

Agile games [37, 20] provide a possibility to experience situations in which it is possible to better understand the principles, relationships, and forces acting in agile and lean approaches to software development. The same approach could be applied to organizational patterns. Differently from agile games, the envisaged representation and animation of organizational patterns in a virtual world would provide a more realistic picture of the situation. It would also take less time and would be applicable in an individual setting, with no need to engage other people or depend on their time. A simplified representation of this idea is offered by the SimSYS environment [12]. Animating organizational patterns as text adventure games [21] promises to improve the comprehensibility of the original descriptions of organizational patterns [15].

6 Related Work

The problem of preserving intent comprehensibility accompanies software development from its beginnings. In the introduction, we indicated some important historical points, starting from Alan Kay's vision of a personal computer programmable by end users [32], through several approaches to preserving intent in programming, namely literate programming [34], intentional programming [56], aspect-oriented programming [25, 26], and the DCI approach [14]. Preserving intent comprehensibility is a subject of research in the contemporary area of engaging end users in software development [6]. Even though involving the client or end users in software development is important, too, we claim that a successful treatment of the problem of preserving intent comprehensibility has to comprise all three basic software constituents: code, models, and people.

Aspect-oriented change realization [5, 40, 64, 65] enables modular expression of a change, by which it actually contributes to a more comprehensible representation of the intent.
Determining the realization type of a change to be applied is possible also by multi-paradigm design with feature modeling [40], which is in its own right interesting from the perspective of preserving intent comprehensibility, because it represents an effort to bridge the gap between the solution and application domains by a transformation [63]. The intent in code can be exposed by encouraging programmers to record their intent in the form of intentional comments prior to writing any code, as in design intention driven programming [38]. Such enforcement of documentation directly in the code addresses some of the problems and limitations of literate programming [58].

Approaches strongly bound to a model, e.g., domain driven design [19], do not consider exposing behavior at a higher, use case level; rather, they encourage programmers to create knowledge-rich models, also called smart models, which do not evolve well [13]. As a result, the higher-level use cases become fragmented in models and they are not visible in the code. This is addressed by several approaches that preserve the intent in code at a higher level by modularizing the code into use cases [7, 14, 25, 26]. A use case modeling metamodel can serve as a basis for reasoning about preserving use cases in the code and models [66, 67]. In applying advanced modularization for the purpose of a clearer separation of the intent in code, techniques of applying advanced modularization in established programming languages that are not denoted as aspect-oriented [3] are of particular interest.

The problem of effective design of domain specific languages from the perspective of tool support [52] is the key to a successful adoption of domain specific languages in software development. A domain specific language is a dynamic element in software development undergoing constant changes. This evolution of domain specific languages may involve their composition. In building domain specific languages, creating their concrete syntax out of samples of sentences of an abstract conception [53], as well as inferring domain specific languages out of user interfaces of existing software systems [1], can help significantly.

Recording the applied design patterns using annotations [56] can help in preserving intent comprehensibility in code. The method of abstracting information from the details of the format being used, targeting XML formats and annotations [43], can be applied here. Furthermore, an analysis of the need for a dynamic code structure and its conceptual design has been reported [44], supported by a NetBeans module prototype that enables simple intent based code projections expressed by structured comments [54].

Using 3D space for software modeling in UML has been reported to support code refactoring and optimization by displaying patterns in a separate layer, as well as antipatterns for refactoring in another layer [46, 48, 50, 60]. Code visualization upon AST project graphs [49, 51, 61] can be used to present models. This is related not only to algorithms of positioning elements and their relationships, such as FM³ or Fruchterman–Reingold, but also to the internal conception of presenting information in code, such as authors, antipatterns, types of classes, and similar [22].
To successfully master the graphical demands of working with UML models in a 3D space, advanced software visualization [24, 28, 30, 59] and graph visualization in general [31], as well as the design of environments for visual programming [29], are necessary.

**Conclusions**

Up to now, preserving intent comprehensibility has been approached in a fragmented manner, without a clear understanding of its relativity. We envisage an integral perception of the intent in software and support of its comprehensibility in the whole space formed by the identified dimensions of the intent, i.e., stakeholders–level–expression, and in all three basic software constituents, i.e., code, models, and people. Our hypothesis is that it is possible to increase the comprehensibility of the intent by applying the corresponding methods conceived with respect to the dimensions in which the intent is observed:

- Having use cases as part of the code can overcome the current fragmentedness of the representation of use case flows, as well as improve the readability of their steps as such
- Domain specific languages can provide a readable and executable intent representation
- Dynamic code structuring brings in editable, adaptable code views
- Advanced modularization in UML strives to employ standard yet underutilized UML elements to express the intent
- 3D rendering of UML has the potential of providing a fully editable model capable of displaying the relationship of use cases to the corresponding detailed UML model (applicable to the DCI approach, too), but also patterns and antipatterns in optimization and refactoring, aspects in aspect-oriented modeling, and alternative and parallel scenarios
- Representation and animation of the organizational patterns of software development will make it possible to transfer the experience of proven ways of organizing people in software development in a new form, available to all stakeholders on an individual basis and at a convenient time

Acknowledgement

The work reported here was supported by the Scientific Grant Agency of Slovak Republic (VEGA) under the grant No. VG 1/1221/12. This contribution/publication is also a partial result of the Research & Development Operational Programme for the project Research of Methods for Acquisition, Analysis and Personalized Conveying of Information and Knowledge, ITMS 26240220039, co-funded by the ERDF.

References

[31] P. Kapec, Michal Paprčka, Adam Pažítnaj, and Vladimir Polák. Exploring 3D GPU-Accelerated Graph Visualization with Time-Travelling Virtual Cam-
{"Source-Url": "http://www2.fiit.stuba.sk/~vranic/pub/intent.pdf", "len_cl100k_base": 6401, "olmocr-version": "0.1.53", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 64488, "total-output-tokens": 11515, "length": "2e12", "weborganizer": {"__label__adult": 0.0003635883331298828, "__label__art_design": 0.0003075599670410156, "__label__crime_law": 0.00028228759765625, "__label__education_jobs": 0.0007266998291015625, "__label__entertainment": 4.285573959350586e-05, "__label__fashion_beauty": 0.00013756752014160156, "__label__finance_business": 0.0001468658447265625, "__label__food_dining": 0.0002849102020263672, "__label__games": 0.0004439353942871094, "__label__hardware": 0.00047516822814941406, "__label__health": 0.0003643035888671875, "__label__history": 0.00018107891082763672, "__label__home_hobbies": 6.091594696044922e-05, "__label__industrial": 0.0002363920211791992, "__label__literature": 0.0002334117889404297, "__label__politics": 0.00021529197692871096, "__label__religion": 0.0003979206085205078, "__label__science_tech": 0.003971099853515625, "__label__social_life": 7.140636444091797e-05, "__label__software": 0.003490447998046875, "__label__software_dev": 0.98681640625, "__label__sports_fitness": 0.00025081634521484375, "__label__transportation": 0.0004038810729980469, "__label__travel": 0.00017154216766357422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 46766, 0.03076]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 46766, 0.54643]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 46766, 0.89617]], "google_gemma-3-12b-it_contains_pii": [[0, 2575, false], [2575, 5306, null], [5306, 7945, null], [7945, 10403, null], [10403, 11183, null], [11183, 13984, null], [13984, 17051, null], [17051, 19594, null], [19594, 22450, null], [22450, 23630, null], [23630, 26454, null], [26454, 29328, null], [29328, 32027, null], [32027, 34497, null], [34497, 36668, null], [36668, 39194, null], [39194, 41557, null], [41557, 44055, null], [44055, 46514, null], [46514, 46766, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2575, true], [2575, 5306, null], [5306, 7945, null], [7945, 10403, null], [10403, 11183, null], [11183, 13984, null], [13984, 17051, null], [17051, 19594, null], [19594, 22450, null], [22450, 23630, null], [23630, 26454, null], [26454, 29328, null], [29328, 32027, null], [32027, 34497, null], [34497, 36668, null], [36668, 39194, null], [39194, 41557, null], [41557, 44055, null], [44055, 46514, null], [46514, 46766, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 46766, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 46766, null]], "pdf_page_numbers": 
[[0, 2575, 1], [2575, 5306, 2], [5306, 7945, 3], [7945, 10403, 4], [10403, 11183, 5], [11183, 13984, 6], [13984, 17051, 7], [17051, 19594, 8], [19594, 22450, 9], [22450, 23630, 10], [23630, 26454, 11], [26454, 29328, 12], [29328, 32027, 13], [32027, 34497, 14], [34497, 36668, 15], [36668, 39194, 16], [39194, 41557, 17], [41557, 44055, 18], [44055, 46514, 19], [46514, 46766, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 46766, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
4f894820f2627f981007ca367d51689b1fe3377b
Speculative execution plan for multiple query execution systems

Anna Sasak*, Marcin Brzuszek
Institute of Computer Science, Maria Curie-Sklodowska University, pl. M. Curie-Sklodowskiej 1, 20-031 Lublin, Poland.
*asasak@liza.umcs.lublin.pl

Abstract – There are different levels at which parallelism can be introduced to a database system, ranging from data partitioning (intra-operator parallelism) up to parallelism between operations (inter-operator parallelism), depending on query granularity. The paper presents a parallelisation method based on speculative execution for database systems that are expected to answer complex queries coming from different sources as soon as possible. Taking into consideration a window of W upcoming queries waiting for execution, the execution plan for the first query should be developed so that it also gives the largest benefit to the W-1 consecutive queries. Thus, in parallel with the first query, some excess computations can be executed, which in further steps reduce the execution time of the consecutive queries. The paper presents the possible risks and benefits of using this method and also analyses the possible execution-time reduction for different models of speculative parallelisation [1].

1 Introduction

In relational databases, the mutual independence between two tables, as well as between two tuples within a table, makes simultaneous processing of multiple tables or tuples possible. A database request often involves the processing of multiple tables. The ways in which a request accesses these tables and combines the intermediate results are defined by the computational model. For a single request, there is usually more than one way of execution. Query optimization is the process of determining the best execution path. The objective of a database optimizer is to produce a good execution plan for a given query. The quality of a plan is defined by a cost function which estimates response time, the amount of resources used, or a combination of both. The optimizer searches a set of candidate plans which can be generated owing to transformation rules. The set of candidate plans is called the search space. The cost model provides an abstraction of the parallel execution system in terms of operator cost functions, and of the database in terms of physical schema information. Database statistics include the distribution of attribute values in a relation. Attribute distribution is used to perform the most fundamental operation in a cost model, that is, computing the selectivity of predicates that appear in a query, which, in turn, helps determine the number of tuples satisfying the predicates [2].

The system must often process upcoming queries in a certain order. Thus the optimizer, while creating the execution plan, can take into consideration a few of the consecutive awaiting queries. Such an out-of-order method of query analysis and processing is called speculative execution [3]. A speculative action is some work done in anticipation of an event taking place; it offers opportunities of gain or loss that depend on speculation accuracy [4]. In computer architecture, speculative execution is the process of executing instructions ahead of their normal schedule. As a database consists of a finite number of relations, there exists a finite set of queries that can be built over them.
If a query references $n$ of all $N$ relations in the database, then, for the benefit of expectant database tasks, some queries containing subsets of $k$ referenced relations can be executed speculatively [5]. Then, by creating adequate data copies and transforming the expectant queries so that they use the pre-fetched data, the number of entities to scan decreases. Unfortunately, speculation and overhead are inseparable, especially when there are many modifying queries in the queue. As long as the benefits of successful speculative execution outweigh the total overhead of its use, the technique is considered a profitable activity.

The first part of the paper introduces the notation used to estimate the cost of a query execution. The consecutive parts describe the proposed optimization method and discuss the efficiency of such a system. The summary outlines the direction of future research.

2 Notations and relational algebra

A parallel optimizer is expected to define an abstract parallel execution model. This model is similar to an algebraic machine, that is, a set of algorithms for relational operators with more complex rules to combine these algorithms and execute them in parallel. The process of choosing the query execution strategy consists of a series of transformations that replace algebraic expressions with other equivalent expressions which can be computed more effectively. Most SQL queries can be defined with a relatively small number of classical relational algebra operators. This paper introduces only a few operators, which in the further parts are used for the analysis of query execution costs.

The first operator, the equivalent of the WHERE clause, is Selection, written $\sigma_C(R)$. This operator is used to create a new relation based on an existing one by choosing rows described by a condition or predicate. The arguments of the $\sigma$ selection are the relation $R$ and the condition $C$. As a result, a multiset is created that contains those entities of the relation $R$ that meet the condition $C$. The condition can contain: (1) arithmetical operators (e.g. +, *) or textual operators (e.g. concatenation, LIKE) referencing constants and/or attributes, (2) comparisons of those terms (e.g. a<b, a+b=10), (3) logical operators (AND, OR, NOT).

The next operator, the equivalent of the SELECT clause, is Projection. It creates a new relation based on an existing one by choosing a group of columns. If $R$ is a relation, then $\Pi_L(R)$ stands for the projection of $R$ on the list $L$. The list $L$ can contain the following elements:

1. A single attribute of the relation $R$.
2. An expression $a \rightarrow b$, where $a$ and $b$ are names of attributes, which means that in the resulting relation the name of attribute $a$ will be replaced by $b$.
3. An expression $E \rightarrow z$, where $E$ stands for an expression containing attributes of the relation $R$, constants, logical operators and textual operators, and $z$ stands for the name of the new attribute, whose values are obtained as results of the expression $E$; e.g. $a + b \rightarrow z$ as a list element stands for the sum of the $a$ and $b$ arguments named $z$.

The result of the projection originates from the application of the list $L$ operators to the entities of the relation $R$. It creates a relation whose schema is described by the list $L$ and its transformations.

The third operator is Product, i.e. the Cartesian product known from set theory.
Its elements are all possible ordered pairs made of the entities of the input relations; it is the equivalent of the relation list of the FROM clause. The product $R \times S$ of two relations is a new relation built of the attributes of $R$ and $S$. If a particular entity $r$ occurs $p$ times in the relation $R$ and an entity $s$ occurs $q$ times in the relation $S$, then the entity $rs$ will occur $p \times q$ times in the product $R \times S$.

The last introduced operator is Join, the equivalent of the JOIN clause. The most common form of Join is the natural join. The natural join of the relations $R$ and $S$ is written $R \bowtie S$ and is equivalent to $\Pi_L(\sigma_C(R \times S))$, where $C$ stands for a condition that compares pairs of chosen attributes of $R$ and $S$. The result of a natural join can thus be computed by the consecutive application of the three operators $\times$, $\sigma$ and $\Pi$, but it is usually more efficient to compute it directly. There exists a wide variety of methods for tracking down pairs of matching entities from the relations $R$ and $S$, but those methods are not the subject of this paper. The join estimation in the presented analysis assumes that the order of joins has no impact on the general operation cost, and thus the order is not considered in an execution plan [6]. If the condition $C$ is a single term of the form $a = b$, where $a$ and $b$ are attributes of $R$ and $S$ respectively, then such a join is called an equi-join and is written $R \bowtie_C S$. In contrast to the natural join, an equi-join involves no attribute projection.

**Example 2.** Assume there is a relation Movie(movie_id, title, year, length, tape, studio, starName) and that the following query was formulated in SQL:

```sql
SELECT title, year FROM Movie WHERE length>=100 AND studio='Fox'
```

This query expressed with relational algebra operators would have the following form:
\[ \Pi_{\text{title}, \text{year}}(\sigma_{\text{length} \geq 100}(\text{Movie}) \cap \sigma_{\text{studio}='Fox'}(\text{Movie})), \]
or alternatively
\[ \Pi_{\text{title}, \text{year}}(\sigma_{\text{length} \geq 100 \text{ AND } \text{studio}='Fox'}(\text{Movie})). \]

### 3 Cost estimation parameters

When an optimizer is to choose an execution plan, it is essential to introduce some parameters that describe operator cost. Two values, $T$ and $V$, are introduced as parameters that measure the access cost for a relation. $T(R)$ stands for the approximate number of entities of the relation $R$. In our analyses, the particular way entities are stored (and thus the possibility of accessing whole blocks) is ignored; it is assumed that each entity must be accessed separately. In such a case the value $T(R)$ is also an estimate of the number of storage accesses required to read the whole relation $R$. $V(R, a)$ stands for the number of distinct values in the column $a$ of the relation $R$. In general, if $[a_1, a_2, a_3, ..., a_m]$ is an attribute list of the relation $R$, then $V(R, [a_1, a_2, a_3, ..., a_m])$ stands for the number of distinct values in the columns of the relation $R$ corresponding to the attributes $a_1, a_2, a_3, ..., a_m$. During the execution of a selection, the number of relation attributes stays the same, but the number of entities decreases.
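To make these definitions concrete, the following minimal sketch (ours, not from the paper) implements the $\sigma$ and $\Pi$ operators and the $T$ and $V$ parameters over in-memory relations represented as lists of dictionaries, using the Movie relation of Example 2:

```python
# Illustrative sketch (not from the paper): relations as lists of dicts.

def sigma(relation, condition):
    """Selection: keep the entities of the relation that satisfy the condition."""
    return [row for row in relation if condition(row)]

def pi(relation, attributes):
    """Projection: keep only the listed attributes of each entity."""
    return [{a: row[a] for a in attributes} for row in relation]

def T(relation):
    """T(R): the (here exact) number of entities in R."""
    return len(relation)

def V(relation, attribute):
    """V(R, a): the number of distinct values in column a of R."""
    return len({row[attribute] for row in relation})

# Example 2 expressed with these operators:
movie = [
    {"title": "A", "year": 1999, "length": 120, "studio": "Fox"},
    {"title": "B", "year": 2001, "length": 95,  "studio": "Fox"},
]
result = pi(sigma(movie, lambda r: r["length"] >= 100 and r["studio"] == "Fox"),
            ["title", "year"])
print(result)                        # [{'title': 'A', 'year': 1999}]
print(T(movie), V(movie, "studio"))  # 2 1
```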
In the case of the simplest selection, in which attributes are compared to constants, the size of the result can be easily estimated if the number of distinct values of the analysed attribute is known or possible to estimate. Let $S = \sigma_{a_1=C}(R)$, where $C$ is a certain constant. In such a case $T(S) = T(R)/V(R, a_1)$ can be accepted as an estimate; a uniform distribution of the $a_1$ attribute in the relation $R$ is assumed as a simplification. For wider conditions containing inequalities, the value $\frac{1}{3}T(R)$ is assumed to be a good estimate [7].

While estimating the join cost, it is assumed that the attributes of the join $R \bowtie S$ are a primary key in $S$ and a foreign key in $R$. The estimated cost of a join under this assumption is as follows. Let $V(R, a) \leq V(S, a)$. Then for each entity $t$ from $R$, the probability that it pairs with a given entity from $S$ is $1/V(S, a)$. As $S$ contains $T(S)$ entities, the expected number of entities to be paired with $t$ is $T(S)/V(S, a)$. As there are $T(R)$ entities in $R$, the estimated size of $R \bowtie S$ is $T(R)T(S)/V(S, a)$. If $V(R, a) \geq V(S, a)$, then $T(R \bowtie S) = T(R)T(S)/V(R, a)$. In general, $T(R \bowtie S) = T(R) \times T(S)/\max(V(R, a), V(S, a))$ [7]. (These estimates are sketched in code after the list of graph assumptions below.)

### 4 The speculative execution model

Let us consider a relational database composed of $N$ tables connected to each other by foreign keys. The database is cohesive in the sense that no group of relations can be separated out as unconnected to the rest of the database. Answers to database queries are returned by a multiple query execution system, whose queue successive queries join. Each of the awaiting queries references $n$ ($1 \leq n \leq N$) relations from the database. The execution system has at its disposal $N_{proc}$ processors, one of which is assigned to the first query from the queue (non-speculative computations), while the remaining $N_{proc}-1$ processors can be assigned to some additional computations whose purpose is to reduce the system answer time [8]. As a computational step we consider the time in which the first processor returns the data required by the non-speculative query, while the remaining $N_{proc}-1$ processors execute certain speculative computations. In each computational step, $w$ of the $W$ queries awaiting execution are analysed (the sliding-window method) [9]. Based on those analyses, tasks should be assigned to free processors so that the gain from the point of view of the $w-1$ consecutive queries is the highest possible. Executing those operations in parallel with the first query allows for a reduction of the execution time of the following queries.

At any given moment during the system's activity, a graph representing the structure of the $w$ analysed queries is available [10, 11]. The construction of this graph must comply with the following assumptions:

1. Two types of vertices are allowed:
   - vertices representing a relation that is part of a query;
   - vertices representing the conditions that bind specific attributes of a relation.
2. An edge between two vertices signifies the presence of a reference between two relations or the presence of a certain condition on an attribute of the given relation.
3. Each edge has a weight assigned to it. The weight is interpreted as the probability (frequency) of the connection between the two given vertices occurring.
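As flagged above, here is a minimal sketch (ours, not the authors') of these size estimates; the numeric example at the end is invented for illustration:

```python
# Illustrative sketch (not from the paper) of the size estimates of Section 3.

def est_equality_selection(t_r, v_r_a):
    """T(sigma_{a=C}(R)) ~ T(R) / V(R, a), assuming a uniform distribution of a."""
    return t_r / v_r_a

def est_inequality_selection(t_r):
    """T(sigma_{a<C}(R)) ~ T(R) / 3, the rule of thumb adopted from [7]."""
    return t_r / 3

def est_join(t_r, t_s, v_r_a, v_s_a):
    """T(R join S) ~ T(R) * T(S) / max(V(R, a), V(S, a))."""
    return t_r * t_s / max(v_r_a, v_s_a)

# For example, joining R (10,000 entities) with S (500 entities) on an
# attribute taking 500 distinct values in S yields an estimate of 10,000:
print(est_join(10_000, 500, 400, 500))  # 10000.0
```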
Schematically, relations will be labelled with a capital letter and an index: $R_i$, where $i$ stands for the number of the individual relation. Fields of the $R_i$ table will be named fRi,j, where $i$ stands for the number of the relation to which the field belongs, while $j$ stands for the consecutive number of the field within this relation. A field of special meaning will be named pk_fRi,0: the primary key of the table. Additionally, a group of special fields might appear in a table. The names of fields in this group will be fk_fRi,j_fRk,0, and they represent a foreign key to the field that is the primary key of the relation $R_k$. A special case is a primary key which consists of fields that are themselves foreign keys, with names of the pattern pk_fk_fRi,j_fRk,0.

### 5 The efficiency analysis

Let us consider a database containing seven tables. Fig. 1 presents the relation schema for the following database structure:

| Table | Fields |
|-------|--------|
| R1 | pk_fR1,0; fR1,1; ...; fR1,7 |
| R2 | pk_fR2,0; fR2,1; ...; fR2,15; fk_fR2,16_fR1,0 |
| R3 | pk_fR3,0; fR3,1; ...; fR3,4; fk_fR3,5_fR4,0 |
| R4 | pk_fR4,0; fR4,1; ...; fR4,6 |
| R5 | pk_fR5,0; fR5,1; ...; fR5,10; fk_fR5,11_fR1,0; fk_fR5,12_fR3,0 |
| R6 | pk_fR6,0; fR6,1; ...; fR6,3 |
| R7 | pk_fk_fR7,0_fR6,0; pk_fk_fR7,1_fR5,0 |

The following queries wait for execution:

K1: select * from R6 where fR6,3>C1
K2: select * from R3 join R4 on R4.pk_fR4,0=R3.fk_fR3,5_fR4,0 where R4.fR4,6=C2
K3: update R4 set fR4,4=C3 where fR4,1>C4
K4: select * from R3 join R4 on R4.pk_fR4,0=R3.fk_fR3,5_fR4,0 where R4.fR4,6<C5 (it is stated that C5>C2)
K5: select * from R1 join R2 on R1.pk_fR1,0=R2.pk_fR2,0 join R5 on R5.fk_fR5,11_fR1,0=R1.pk_fR1,0 where R2.fR2,1>C6
K6: select * from R1 join R5 on R5.fk_fR5,11_fR1,0=R1.pk_fR1,0 where R5.fR5,1<C7

Fig. 1. Relation schema for the database.

The execution analysis is conducted for the following parameters describing the system:

- $N_{proc} = 3$ is the number of available processors;
- $w = 4$ is the window size, and thus the number of queries analysed in one computational step;
- $k = 1, 2, ..., N_{proc}$ is the number of a processor in the group;
- $\text{CompStep}$ is the number of the consecutive computational step.

Fig. 2 presents the graphs for the consecutive computational steps, describing the relations and the conditions that occur in the queries covered by the window. In the first step there are, in fact, two separate graphs available, as the relation set of the first query has no common point with that of the three consecutive queries. The numbers assigned to the edges are the weights; in the first step they are all set to 1, except the edge between the R3 and R4 vertices, as this connection occurs in two of the queries in the window.

With regard to the graphs, in the first computational step there are three different conditions referencing the R4 relation, two of which relate to the same attribute; moreover, the C5 inequality is the wider one and subsumes the C2 equality comparison. It is clear that one of the processors must be assigned to the R4 selection referencing the wider condition ($f_{R4,6} < C_5$), as it reduces the number of entities for two of the awaiting queries, K2 and K4.
There still remains one available processor and one special unassigned edge. This particular edge is distinguished with a dotted line, as it refers to the update query. Modifying queries need special attention, as they influence the results of other awaiting queries and can thus make it necessary to cancel speculative results. The last processor is therefore assigned to the R4 update. (In fact, speculative processors executing modifying queries only create a copy of the data with the modifications applied, as the order of execution must be kept.) When the assigned tasks are finished and the partial results are saved, the awaiting queries must be modified so that they take those changes into account; then the window should be moved and the graph refreshed. As a result, the K2 and K4 queries would take the following form, and the window would include the K5 query.

K2: select * from `R3` join `R4_spec1` on `R4_spec1.pk_fR4,0=R3.fk_fR3,5_fR4,0` where `fR4,6=C2`
K3: already executed, changes need confirmation
K4: select * from `R3` join `R4_spec1` on `R4_spec1.pk_fR4,0=R3.fk_fR3,5_fR4,0`
K5: select * from `R1` join `R2` on `R1.pk_fR1,0=R2.pk_fR2,0` join `R5` on `R5.fk_fR5,11_fR1,0=R1.pk_fR1,0` where `R2.fR2,1>C6`

In the second computational step the K2 query is computed non-speculatively. Next in line is the K3 update query, already executed speculatively, so the remaining processors can be used for the benefit of the K4 and K5 queries. At least three separate options are available for assigning the computations: first, finishing the K4 query (risky due to the awaiting update query); second, the selection from the R2 relation; and third, the R1 and R5 join. Excluding the risky option, there remain two tasks and two available processors, so the assignment is obvious. After the results are saved and the queries modified, the situation for CompStep = 3 would be as follows:

K3: already executed, changes need confirmation
K4: select * from `R3` join `R4_spec1` on `R4_spec1.pk_fR4,0=R3.fk_fR3,5_fR4,0`
K5: select * from `R2` join `R1_R5_spec` on `R1_R5_spec.pk_fR1,0=R2.pk_fR2,0` where `R2.fR2,1>C6`
K6: select * from `R1_R5_spec` where `R1_R5_spec.fR5,1<C7`

The first query waiting for execution in the third computational step is the already executed update query. Thus, while confirming the results of K3, and before speculative assignments can be made, any subsequent speculations referencing the same relations as K3 must be cancelled. As K3 modified at least one row of the R4 relation, the K4 query has to return to its previous form, i.e. the one not using the speculative results. When this process is over, the remaining queries are as follows:

K4: select * from R3 join R4 on R4.pk_fR4,0 = R3.fk_fR3,5_fR4,0 where fR4,6 < C5
K5: select * from R2 join R1_R5_spec on R1_R5_spec.pk_fR1,0 = R2.pk_fR2,0 where R2.fR2,1 > C6
K6: select * from R1_R5_spec where R1_R5_spec.fR5,1 < C7

The two remaining processors can be assigned to a few different operations, but selections are always preferred, as a reduction of tuples is always expected from them. Thus the remaining processors would compute the R4 selection and the R2 selection, respectively.
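The assignment policy applied in each of these steps can be sketched as follows. This is a hypothetical rendering of the described heuristics (execute the first query non-speculatively, prefer speculative selections since they always reduce tuples, and only materialise updates as copies), not the authors' implementation; the example data mirrors the third computational step:

```python
# Hypothetical sketch of one assignment round of the sliding-window scheduler.

def assign_tasks(window, n_proc):
    """Assign the first query non-speculatively, then fill the remaining
    processors with speculative tasks, preferring selections."""
    assignments = [("nonspeculative", window[0]["name"])]
    candidates = []
    for query in window[1:]:
        candidates.extend(query["tasks"])        # selections, joins, updates...
    # Selections first: a reduction of tuples is always expected from them.
    candidates.sort(key=lambda t: 0 if t["kind"] == "selection" else 1)
    for task in candidates[:n_proc - 1]:
        assignments.append(("speculative", task))
    return assignments

window = [
    {"name": "K3", "tasks": []},   # already executed, awaiting confirmation
    {"name": "K4", "tasks": [{"kind": "selection", "rel": "R4"}]},
    {"name": "K5", "tasks": [{"kind": "selection", "rel": "R2"},
                             {"kind": "join", "rels": ("R1", "R5")}]},
]
# Yields the R4 and R2 selections for the two free processors, as in the text.
print(assign_tasks(window, n_proc=3))
```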
After the results are saved and the queries modified, the situation for the last step, CompStep = 4, would be as follows:

K4: select * from R3 join R4_spec on R4_spec.pk_fR4,0 = R3.fk_fR3,5_fR4,0
K5: select * from R2_spec join R1_R5_spec on R1_R5_spec.pk_fR1,0 = R2.pk_fR2,0
K6: select * from R1_R5_spec where R1_R5_spec.fR5,1 < C7

As the fourth computational step is the last one in our example, the final processor assignment is simple: there are three queries waiting and three available processors, so each query is computed by one processor.

Fig. 2. The relation graphs for the consecutive computational steps.

After analysing the course of execution, let us analyse its cost. In the cost analysis only SELECT queries are taken into consideration, and thus the K3 query is ignored. For sequential execution (by which we mean executing each query separately, possibly in parallel, but without any benefit from partial results), the cost estimate is as follows:

\[
T(seq) = T(K1) + T(K2) + T(K4) + T(K5) + T(K6) = \tfrac{1}{3}T(R6) + \big(T(R4)/V(R4, f_{R4,6}) + T(R3 \bowtie R4)\big) + \big(T(R3 \bowtie R4) + \tfrac{1}{3}T(R4)\big) + \big(T(R1 \bowtie R2 \bowtie R5) + \tfrac{1}{3}T(R2)\big) + \big(T(R1 \bowtie R5) + \tfrac{1}{3}T(R5)\big).
\]

The estimate for the speculative execution would be:

\[
T(spec) = T(\text{CompStep}=1) + T(\text{CompStep}=2) + T(\text{CompStep}=3) + T(\text{CompStep}=4).
\]

Unfortunately, due to the update query and the missed speculation, the comparison between $T(seq)$ and $T(spec)$ does not give a clear profit-and-loss account. Thus it is better to compare only chosen fragments of the computations which are a good example of the adopted method's performance. Let us start with the cost estimate of the sequential execution of the two queries K5 and K6:

\[
T_{seq}(K5, K6) = T(K5) + T(K6) = T(R1 \bowtie R2 \bowtie R5) + \tfrac{1}{3}T(R2) + T(R1 \bowtie R5) + \tfrac{1}{3}T(R5).
\]

The estimate for the speculative execution:

\[
T_{spec}(K5, K6) = \tfrac{1}{3}T(R2) + T(R1 \bowtie R5) + T(R2_{spec} \bowtie R1\_R5_{spec}) + \tfrac{1}{3}T(R1\_R5_{spec}).
\]

Using $T(R2_{spec} \bowtie R1\_R5_{spec}) \approx \tfrac{1}{3}T(R1 \bowtie R2 \bowtie R5)$ and $T(R1\_R5_{spec}) = T(R1 \bowtie R5)$, the difference is:

\[
T_{seq}(K5, K6) - T_{spec}(K5, K6) = T(R1 \bowtie R2 \bowtie R5) + \tfrac{1}{3}T(R5) - \tfrac{1}{3}T(R1 \bowtie R2 \bowtie R5) - \tfrac{1}{3}T(R1 \bowtie R5) = \tfrac{2}{3}T(R1 \bowtie R2 \bowtie R5) + \tfrac{1}{3}T(R5) - \tfrac{1}{3}T(R1 \bowtie R5) \geq \tfrac{2}{3}T(R1 \bowtie R2 \bowtie R5) > 0,
\]

since $T(R5) \geq T(R1 \bowtie R5)$.

The second example refers to the K2 and K4 queries. Executed sequentially, they require the R3 and R4 join twice and the R4 scan twice, due to the two different conditions. Executing these queries speculatively, one R4 scan is performed in excess, since, due to the update query, the speculations for K4 must be cancelled.

We have shown that speculative execution of SQL queries yields a profit measured in the number of relation-entity accesses, and thereby also shortens the time needed to obtain answers. The amount of profit depends on the speculation accuracy, and thus on the number of modifying queries in the queue. Additional factors influencing the profit are the size of the data in the given relations and the distribution of the given attribute values.

6 Summary

The presented scheme of speculative computation of SQL queries seems to be a promising method for multiple query execution systems.
Owing to the use of inter-operator parallelism, it is possible to choose a granularity and an execution plan which generate as much profit as possible not only for the first awaiting query, but also for a group of the consecutive awaiting queries. Open for discussion remain the problems associated with modifying queries appearing in the queue, which have a substantial influence on the accuracy of the speculative computations. Nonetheless, the analysis presented in this paper shows that this method has a significant chance of giving promising results.

References
{"Source-Url": "http://journals.umcs.pl/ai/article/download/3283/2477", "len_cl100k_base": 6426, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 31577, "total-output-tokens": 7440, "length": "2e12", "weborganizer": {"__label__adult": 0.0003018379211425781, "__label__art_design": 0.00033020973205566406, "__label__crime_law": 0.0004549026489257813, "__label__education_jobs": 0.0011997222900390625, "__label__entertainment": 9.143352508544922e-05, "__label__fashion_beauty": 0.00014865398406982422, "__label__finance_business": 0.0005974769592285156, "__label__food_dining": 0.0004580020904541016, "__label__games": 0.0006060600280761719, "__label__hardware": 0.0013628005981445312, "__label__health": 0.00074005126953125, "__label__history": 0.0002906322479248047, "__label__home_hobbies": 0.00011664628982543944, "__label__industrial": 0.0007610321044921875, "__label__literature": 0.0002849102020263672, "__label__politics": 0.0002846717834472656, "__label__religion": 0.0004668235778808594, "__label__science_tech": 0.1556396484375, "__label__social_life": 9.08970832824707e-05, "__label__software": 0.0288543701171875, "__label__software_dev": 0.80615234375, "__label__sports_fitness": 0.0002110004425048828, "__label__transportation": 0.0004527568817138672, "__label__travel": 0.00021159648895263672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26614, 0.05008]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26614, 0.60256]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26614, 0.88885]], "google_gemma-3-12b-it_contains_pii": [[0, 2093, false], [2093, 5546, null], [5546, 8778, null], [8778, 12198, null], [12198, 15655, null], [15655, 17153, null], [17153, 20311, null], [20311, 22108, null], [22108, 24286, null], [24286, 26614, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2093, true], [2093, 5546, null], [5546, 8778, null], [8778, 12198, null], [12198, 15655, null], [15655, 17153, null], [17153, 20311, null], [20311, 22108, null], [22108, 24286, null], [24286, 26614, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26614, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26614, null]], "pdf_page_numbers": [[0, 2093, 1], [2093, 5546, 2], [5546, 8778, 3], [8778, 12198, 4], [12198, 15655, 5], [15655, 17153, 6], [17153, 20311, 7], [20311, 22108, 8], [22108, 24286, 9], [24286, 26614, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26614, 0.075]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
f4dbceab77cbc2d8a7da6682818c9bff2fd89f69
Microservices and their design trade-offs: a self-adaptive roadmap

Sara Hassan, Rami Bahsoon
DOI: 10.1109/SCC.2016.113 (peer-reviewed version)

Abstract—Migrating to microservices (microservitization) enables optimising the autonomy, replaceability, decentralised governance and traceability of software architectures. Despite the hype around microservitization, the state of the art still lacks consensus on the definition of microservices, their properties and their modelling techniques. This paper summarises views of microservices from informal literature to reflect on the foundational context of this paradigm shift. A strong foundational context can advance our understanding of microservitization and help guide software architects in addressing its design problems. One such design problem is finalising the optimal level of granularity of a microservice architecture. Related design trade-offs include: balancing the size and number of microservices in an architecture, and balancing the non-functional requirement satisfaction levels of the individual microservices as well as their satisfaction for the overall system. We propose how self-adaptivity can assist in addressing these design trade-offs and discuss some of the challenges of such a self-adaptive solution. We use a hypothetical online movie streaming system to motivate these design trade-offs. A solution roadmap is presented in terms of the phases of a feedback control loop.

Keywords—microservices; trade-offs; non-functional requirements; granularity; self-adaptivity; decision-making

I. INTRODUCTION

Several industries have recently migrated to or consider migrating to microservices [1] (microservitization). Servitization is a shift towards utility-based engineering of software products into services that are continuously motivated by adding long-term value to the system users [2].
We therefore view microservitization as a form of servitization where services/components are transformed into microservices (a more fine-grained and autonomic form of services) to add long-term value to the architecture. Microservitization is also an example of a paradigm shift, since it involves a dramatic change to the way software is designed and developed within a firm [3]. Microservitization is applicable to "brownfield" (i.e., existing systems migrating to microservices) and "greenfield" (i.e., building new systems) development [1].

Microservitization involves isolating business functionalities into microservices that interact through standardised interfaces. Isolating business functionalities aims at optimising the autonomy and replaceability of the service(s). It can also facilitate autonomous management (i.e., decentralised governance) of the service(s). Therefore the paradigm shift can promote better traceability, accountability and auditing for the service(s) and their provision in the event of failure. These qualities are examples of the potential value added to the software architecture through microservitization, enhancing its flexibility to cope with operation, maintenance and evolution uncertainties. Ultimately, this can also translate into improved maintenance costs and cost-effective quality of service (QoS) provision to system users.

Despite the hype and the business push towards microservitization [1], there is a lack of academic consensus regarding the definition and properties of the paradigm shift and corresponding design patterns for microservices [4]. One of the motivations of this paper is to bridge this gap by digesting the various informal views from industry on defining microservices, to aid in understanding the design problems faced by software architects when migrating to microservices. Among these problems is finalising the level of granularity of a microservice too early: "Splitting too soon can make things very difficult to reason about. It will likely happen that you (the software architect) will learn in the process. [1]" This problem is of significance both in brownfield and greenfield development [1], since it affects the choice of concrete realisations (and thereby microservice vendors) when instantiating a system's abstract architecture.

II. MOTIVATING EXAMPLE AND CONTRIBUTION DEFINITION

We use the context of greenfield development here as an example to motivate two aspects (or trade-offs) of the granularity problem. We use a hypothetical online movie streaming subscription-based system for illustration. The Promise requirements data repository [5] was used to derive the functional and non-functional requirements of this system. This repository provides indicative requirements from different application domains. We will use the following indicative functional requirements (FRs):

• FR1: The system shall allow users to view reviews of selected movies by other users.
• FR2: The system shall allow users to add their own movie review for a selected movie.
• FR3: The system shall allow the administrator to approve a review posted by a user.

Figure 1. Segment of the abstract architecture for the online movie streaming subscription-based system, provided by the architect.

At design time, we assume the initial abstract architecture of the system is as shown in Figure 1. Each modular boundary ultimately maps to a single microservice.
The questions that arise when encapsulating the three FRs above within modular boundaries are:

1) Would too much communication complexity be introduced if movie reviews are encapsulated within a separate modular boundary?
2) Is the functionality isolation gained by separating reviews into a separate modular boundary worth the investment?
3) How does encapsulating the reviews affect the non-functional requirement (NFR) satisfaction of the overall system (global NFRs)?
4) Would encapsulating the reviews into a new modular boundary affect the NFR satisfaction of the individual (local) modular boundaries interacting with it (namely Browser, User and Administrator)?

The questions above present two trade-offs that need to be considered when finalising the level of granularity of a microservice. The former two questions represent a trade-off which we refer to as the size versus number of microservices trade-off. Intuitively, the more microservices introduced into the architecture, the higher the level of isolation between the business functionalities. This comes at the price of increased network link communication and perhaps increased object distribution complexity. Addressing this trade-off systematically is essential for assessing the extent to which "splitting" is beneficial with regard to the potential value of microservitization (as defined above).

The latter two questions represent another trade-off, which we refer to as the local versus global NFR satisfaction trade-off. The third question is concerned with the provision of QoS to system users, which is ultimately a reflection of global NFR satisfaction. That is, the microservitization exercise cannot be done without keeping the human beneficiaries of the services in the loop. For example, a user of the movie streaming system will only be concerned with the results of his/her request returning within 5 seconds of issuing it; this requirement is a global NFR for the system. He will not be concerned with the graphics function within an implementation of the Browser modular boundary returning within 1 second of its execution; this latter requirement is a local NFR. However, it is these local NFRs that interact together to satisfy the global NFRs of the system. This interaction takes different forms depending on the number of microservices and the pattern in which they interact. Addressing this trade-off systematically means more cost-effective QoS provision to the users of the system in the long term.

The optimal balancing point for both trade-offs highly depends on the current scenario in which the system is operating (i.e., its current environment). For example, for a steadily large volume of requests for reviews of a particular movie, directing those requests to a separate Review modular boundary is reasonable, to avoid the Browser becoming a bottleneck for the whole system. This level of granularity can be perceived as the optimal one for the current environment. On the other hand, if there is a persistent outage in a network link connecting the Browser and the User, then merging the two modular boundaries can help reduce the latency of submitting a request for a movie review. In this scenario, a coarser rather than finer level of granularity can be perceived as the optimal one. Therefore, the argument here is that aggressive isolation of business functionalities is not necessarily ideal for all scenarios of the environment.
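To make this scenario dependence concrete, the toy sketch below (hypothetical; all weights and costs are invented for illustration, not taken from the paper) scores a fine-grained and a coarse-grained candidate under two environment scenarios:

```python
# Hypothetical toy model: score candidate granularities per scenario.

CANDIDATES = {
    "fine":   {"isolation": 0.9, "comm_links": 4},  # separate Review boundary
    "coarse": {"isolation": 0.5, "comm_links": 2},  # reviews kept in the Browser
}

def score(candidate, scenario):
    """Higher is better: reward functionality isolation, penalise
    communication over congested or unreliable network links."""
    c = CANDIDATES[candidate]
    return (c["isolation"] * scenario["isolation_weight"]
            - c["comm_links"] * scenario["link_cost"])

scenarios = {
    "high_review_load": {"isolation_weight": 2.0, "link_cost": 0.1},
    "network_outage":   {"isolation_weight": 1.0, "link_cost": 0.4},
}

for name, scenario in scenarios.items():
    best = max(CANDIDATES, key=lambda c: score(c, scenario))
    print(name, "->", best)
# high_review_load -> fine; network_outage -> coarse
```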
Furthermore, the definition of the optimal balancing point depends on the relative NFR preferences as elicited from the system's stakeholders, and on the relative criticality of the local and global NFRs. For example, choosing the coarse level of granularity in the example above is only deemed the optimal balancing point if the system stakeholders actually consider reducing the latency of the request (a global NFR) critical compared to the cost of implementing this coarse-grained architecture. In particular, this cost can take the form of negative implications for local NFRs (e.g., the return time of a specific function within a microservice will increase). Therefore, only if the global NFR of reducing request latency is critical to the stakeholders, compared to other (possibly conflicting) global and local NFRs, will this coarse level of granularity be deemed optimal.

The link between the local/global NFR satisfaction trade-off and the microservice size/number trade-off can be inferred from the discussion above. We argue that determining the optimal level of granularity for a microservice architecture entails addressing the microservice size/number trade-off. Deciding between functionally equivalent architectures, each having different numbers and sizes of microservices (i.e., different levels of granularity), requires knowledge of the implications of each candidate architecture on NFR satisfaction. In particular, it requires knowledge of how adjusting the size of an individual microservice affects its NFR satisfaction, and how that adjustment affects the NFR satisfaction of the overall system.

The contributions of this paper are thereby three-fold:

- We review and reflect on the definition, properties, and modelling techniques of microservices as presented in informal literature (Section III).
- In Section IV, we formulate the problem of addressing the design trade-offs and introduce our solution proposal.
- A self-adaptive solution is then outlined at a high level in Section V. The movie streaming system is used as a running example to illustrate the applicability of existing techniques in the literature to each phase. The envisioned challenges and possible future research directions towards the proposed solution are then discussed in Section VI.

III. REFLECTION ON THE STATE OF THE ART AND PRACTICE IN MICROSERVICES

Due to the recency of the research area, there is a multitude of ways in which microservices have been defined and modelled. A clear understanding of the paradigm shift, its motivations and its implications is a prerequisite for advancing microservitization. We have reviewed the state of the art and practice to capture the different ways in which microservices can be defined and modelled. In [7], microservices are defined as an architectural style which promotes developing an application "as a suite of small services, each running in its own process and communicating with lightweight mechanisms" [7]. In [8], this definition is slightly complemented to focus on the "weight" of a microservice rather than its size: a microservice has to be lightweight enough to be replaceable, rather than having to invest in maintaining it. A more compact definition of microservices is "small, autonomous services that work together" [9, p. 2]. Autonomy is the ability "to change independently of each other, and be deployed by themselves without requiring consumers to change" [9, p. 3]. A healthy implication of autonomy is decentralised governance of the system [7].
Teams are fully responsible for specific microservice(s), making responsibility for design decisions more traceable. This responsibility spans designing, building, testing, deploying and providing support for the service(s) [1]. Teams have full choice of the technology used to implement their microservice(s), as long as the interface facing the user is standardised. We therefore view microservices as autonomic, replaceable and deployable artefacts of microservitization that encapsulate fine-grained business functionalities presented to system users through standardised interfaces. The autonomy of these artefacts allows for governing them in a decentralised manner and tracing their changes.

It is suggested in [1], [8] to model microservices in an event-driven manner. An "event" is formulated as the delta that has occurred in a service due to a change in the environment. An event bus captures events occurring in upstream services; downstream services can then capture them later. This allows each downstream service to capture only the events that it is interested in from a stream of events [4], allowing for autonomy and decentralised governance. Furthermore, the implementation of the event bus is independent from the services capturing events from it, allowing for replaceability.

Microservices can also be reasoned about in terms of domain models. In [10], a domain model is made up of bounded contexts, each encapsulating a subset of business functionalities. This subset is further broken into more fine-grained modular boundaries. It is these modular boundaries which are candidates to be mapped to individual microservices. The internals of bounded contexts and modular boundaries do not need to be exposed to the rest of the system, allowing for autonomy, replaceability and decentralised governance.

The literature reveals only a subtle distinction between the service-oriented architectural style (SOA) and microservices. However, this distinction needs to be made more explicit to highlight the uniqueness of microservices. The uniqueness comes from: 1) the potential of microservices as autonomous fine-grained computational units, and 2) the enhanced flexibility of their underlying style. Therefore, microservices can be thought of as "enriched" and more modularised services. Classic dynamic service selection and composition in SOAs mostly focuses on dynamic restructuring of services while ensuring the overall functionality of the system. However, existing approaches do not question the granularity problem: whether the service provision is at the optimal level of granularity or not. We aim to exploit that distinction by addressing the microservice size/number trade-off and the local/global NFR satisfaction trade-off systematically.

IV. PROBLEM FORMULATION

Among the crucial and non-trivial decision problems (DPs) that constitute addressing the size/number and the local/global NFR satisfaction trade-offs are the following:

1) DP1: A solution which manages these trade-offs has to determine (given the current environment scenario and the stakeholders' definition of "optimality" regarding the trade-offs of concern):
   - When does decomposing a microservice into more fine-grained ones achieve the required optimality for both trade-offs?
   - When does merging several fine-grained microservices into a coarse-grained one achieve the required optimality for both trade-offs?
   - When should the current level of granularity be kept, without further merging or decomposition?
2) DP2: The chosen architecture still has to guarantee the functional requirements of the system, regardless of the level of granularity of the microservices in that architecture.

3) DP3: Much of the uncertainty that relates to the choice of the optimal architecture, and of the knowledge that relates to the expected behaviour of the system, cannot be fully captured at design time.

DP2 motivates developing a solution that addresses the trade-offs at a more abstract service composition level rather than that followed by classical contributions on service composition. This abstraction allows for focussing on the encapsulation of functionalities rather than the particular concrete services that will instantiate these functionalities. Abstract services represent units of computation and their composition through well-defined interfaces. An abstract service can be operationalised by several alternative concrete services. Referring to Figure 2, consider a scenario where the software architect produces an abstract specification of a service-oriented system, S. For S, there are refined abstract architectures that vary in the number and size of their microservices, each varying in the way it balances global and local NFR satisfaction. It is this refined abstract architecture solution space (i.e., $S'_1, \ldots, S'_n$) that we aim to manage and reason about, to choose the optimal $S'_i$ given a current environment scenario and NFR preferences. Given the choice of $S'_i$, this refined abstract architecture can then be mapped at runtime to concrete services from a service registry or marketplace (i.e., a classic service selection venture starts given the output of the solution we are proposing).

DP1 and DP3 call for runtime support to continuously update design-time knowledge. This requires monitoring the environment surrounding the system and using the monitored data to update the system's knowledge to better cater for uncertainties. This support can be provided by engineering self-adaptivity into a solution. Therefore, we argue that inducing a microservices architecture with the primitives of self-adaptivity is a strong candidate for addressing these trade-offs. Many of the concerns that drive the choice of the optimal architecture are of a runtime nature and are simply difficult to anticipate and analyse at design time. This self-adaptive solution, however, inevitably takes as input the architect's best design-time knowledge of what the optimal level of granularity of the architecture would be; the role of the self-adaptive solution is thereafter to support, refine and update this knowledge at runtime. A self-adaptive system is made up of an adaptation system, which is responsible for planning any adaptations when need be, and a managed system, which executes these plans. There are several mechanisms which the adaptation system can adopt; underlying most of them is the concept of control loops (the MAPE-K loop) [12], [14]. We propose a solution based on the MAPE-K loop, to render a systematic solution that improves over the system lifetime through accumulating knowledge.

Figure 2. The different conceptual modelling levels for microservices (abstract services A, B, C, D mapped through refined abstract architectures $S'_1, \ldots, S'_4$ to concrete service candidates).

V. SOLUTION ROADMAP

This section proposes and discusses the suitability of techniques in the literature to each phase of the MAPE-K loop.
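Before the phase-by-phase discussion, the following minimal skeleton (our sketch; the class, names and threshold are invented placeholders, not from the paper) illustrates how the phases of such a loop could fit together for the granularity problem:

```python
# Hypothetical MAPE-K skeleton for granularity adaptation (illustrative only).

class GranularityLoop:
    def __init__(self, architecture, candidates):
        self.architecture = architecture  # current refined abstract architecture
        self.candidates = candidates      # solution space S'_1 ... S'_n
        self.knowledge = []               # accumulated observations (the K)

    def monitor(self):
        # Scenario variables (e.g. client request rate) and behaviour
        # variables (e.g. response time) would be sampled here.
        return {"request_rate": 120.0, "response_time": 4.2}

    def analyse(self, obs):
        # Interpret observations into a perception of the current scenario.
        self.knowledge.append(obs)
        return "review_hotspot" if obs["request_rate"] > 100 else "steady"

    def plan(self, scenario):
        # Choose the refined abstract architecture deemed optimal for the
        # perceived scenario, given stakeholder NFR preferences.
        return self.candidates.get(scenario, self.architecture)

    def execute(self, chosen):
        # Classic concrete service selection would instantiate `chosen` here;
        # the outcome feeds back into the knowledge base.
        self.architecture = chosen

    def step(self):
        scenario = self.analyse(self.monitor())
        chosen = self.plan(scenario)
        if chosen != self.architecture:
            self.execute(chosen)

loop = GranularityLoop("S'_1", {"review_hotspot": "S'_2", "steady": "S'_1"})
loop.step()
print(loop.architecture)  # S'_2
```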
1) **Input to the MAPE-K loop**: Architectural modelling techniques provide several ways to capture an architect's best knowledge of the optimal abstract architecture (referring to Figure 2) with different levels of expressiveness. This vision corresponds to the managed system of the self-adaptive solution we are proposing. Iterations of capturing and elicitation are envisioned here, due to the decentralised governance culture of the microservices setting.

2) **Monitoring**: Defining variables to monitor at runtime and relating them to NFRs is a two-fold process: 1) an NFR gathering and elicitation phase, and 2) linking the NFRs to variables that need to be monitored at runtime. Furthermore, there are two types of variables that need to be monitored: variables that reflect the scenario (e.g., client request rate) and variables that reflect the system's behaviour in that scenario (e.g., response time). We envision utility trees (such as those presented in [16]) as a possible technique for both phases, due to their understandability. We appreciate, however, that they lack the exhaustiveness of other more powerful techniques, such as [17], where utility trees are leveraged with implied scenario detection.

3) **Analysis**: Several analysis techniques are presented in [11] to interpret the values of the monitored variables into a perception of the current scenario surrounding the managed system. The enhanced distribution of functionalities introduced by microservitization poses a challenge to this phase. This distribution increases the possibility of wrong perceptions of the current environment scenario across the microservices, therefore increasing the likelihood of triggering faulty adaptation decisions.

4) **Planning**: Given the interpretation of monitored data from the analysis phase, the decision to trigger (and plan) an adaptation is taken in this phase. Deciding to trigger an adaptation to begin with depends on the relative NFR preferences captured from stakeholders. Once a decision is made to trigger an adaptation, the solution space to address DP1 is a set of functionally equivalent refined abstract architectures of microservices. The level of autonomy of the self-adaptive solution that we are proposing depends on who is responsible for the formulation of this solution space. If the solution space is provided as input to the self-adaptive solution, then the role of the self-adaptive solution is less autonomous and more that of a design support for decision making.
In particular, the planning phase would then only be responsible for choosing the optimal architecture given the solution space from the software architects. On the other hand, a more autonomous self-adaptive solution is one which builds the solution space of refined abstract architectures at runtime and chooses the optimal one from that space. The field of artificial intelligence (AI) planning provides techniques to support an autonomous planning phase through dynamic, recursive service decomposition techniques governed by a set of predicates [6]. The first step in [6] is the translation of a composite web service specification to the problem domain, allowing abstraction away from concrete specifications to more abstract, technology-independent ones (thereby addressing DP2). Manual synthesis of the solution space, governed by workflow patterns [18], is a possible direction in case a design support for decision making, rather than an autonomous self-adaptive solution, is adopted. Once the solution space is formed, software repositories can be mined or economic models can be consulted to predict the NFR satisfaction levels of each refined abstract architecture and the added value that each can bring to the system in terms of the properties referred to in Section I, coming to a decision regarding the optimal refined abstract architecture ($S'_i$).

5) **Execution**: Classic concrete service selection techniques can be employed in this phase to map the refined abstract architecture to concrete services. Crucially, however, the concrete service selection process can potentially be simplified by pruning the solution space. For example, if the chosen $S'_i$ suggests only 3 microservices, then only 3 service markets need to be examined instead of 5 or 6. In addition, this phase needs to feed back into the knowledge base of the adaptation system. This knowledge update will feed into more informed decision making over the lifetime of the system (addressing DP3). The key challenge for this phase is the availability of specialised service markets to instantiate all of the microservices in the chosen refined abstract architecture.

6) **Knowledge**: Probabilistic modelling is an attractive avenue to capture dynamic uncertainty in the beliefs about the system at runtime. Bayesian probability is particularly attractive since it captures the delta in probability given some condition, so it can be used to capture updates in knowledge objectively. The challenge here, however, is the effort required to infer the prior and posterior probabilities of NFR satisfaction needed to capture the delta in knowledge.

VI. DISCUSSION AND FUTURE WORK

Although Section IV poses interesting research problems, we appreciate that the trade-offs presented in Section I can be indicators of other design problems in addition to finalising granularity. For example, a lack of balance in the local/global NFR satisfaction trade-off, or a significant increase in complexity (as mentioned in DP1), might be indicators of sub-optimal deployment choices (e.g., the choice of communication protocols and data access dependencies). Therefore, one of our future work directions is to refine our understanding of the causal relationship between the granularity problem and its indicators.
We are aiming to do that by conducting a series of experiments on a concrete system adopting microservitization, in each experiment altering either a deployment choice only, or the granularity of the microservices only, and thereafter monitoring the effect of these alterations on different QoS variables. Once this relationship is clarified, the longer-term research direction is providing a systematic self-adaptive solution inspired by the proposal in Section IV.

We appreciate that the microservitization paradigm shift poses research challenges to each phase of the MAPE-K loop. For example, defining and prioritising the variables to be monitored at runtime is a significant challenge in the granularity problem, due to the extra effort required to prioritise NFRs and variables at both the local and global architectural levels (as discussed in Section I). Motivated by these challenges and others, an initial step towards a systematic solution is deriving the concrete challenges to each phase of the MAPE-K loop based on experimentation with a runnable system (with more concrete FRs) that adopts microservitization. In particular, we aim to outline a repeatable solution roadmap in our future work as we derive the challenges of each phase from a concrete example system. Once these challenges are reified, we will assess the suitability of the examined techniques in the literature to addressing these challenges in the microservices setting, through experimentation and monitoring with the concrete example system.

We also appreciate that integrating a self-adaptive, runtime solution into a running system raises practicality issues. Therefore we envision potential in leveraging symbiotic simulation approaches [19] to implement the MAPE-K loop: data from the monitoring phase is fed into a simulation of the managed system, where the analysis and planning are carried out to assess the reliability of the chosen refined abstract architecture before it is embedded in the running system.

VII. CONCLUSION

Our contribution has reflected on the informal literature about microservices, their properties and their modelling techniques. We have then formulated the problem of finding the optimal level of granularity during microservitization, motivating the problem using a hypothetical online movie streaming subscription-based system. We break the problem down into two trade-offs: the microservice size/number trade-off and the global/local NFR satisfaction trade-off. A self-adaptive runtime solution to the problem is then proposed and discussed. Some of the research challenges and practicality issues that microservitization poses to such a solution are highlighted, along with possible future research directions.

REFERENCES
{"Source-Url": "https://research.birmingham.ac.uk/portal/files/29076197/bare_conf_1.pdf", "len_cl100k_base": 5815, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21575, "total-output-tokens": 7524, "length": "2e12", "weborganizer": {"__label__adult": 0.0003447532653808594, "__label__art_design": 0.0008130073547363281, "__label__crime_law": 0.0002734661102294922, "__label__education_jobs": 0.0008678436279296875, "__label__entertainment": 7.104873657226562e-05, "__label__fashion_beauty": 0.00016021728515625, "__label__finance_business": 0.0002980232238769531, "__label__food_dining": 0.00032210350036621094, "__label__games": 0.00045609474182128906, "__label__hardware": 0.0006284713745117188, "__label__health": 0.0004291534423828125, "__label__history": 0.0002789497375488281, "__label__home_hobbies": 6.371736526489258e-05, "__label__industrial": 0.0002722740173339844, "__label__literature": 0.0003757476806640625, "__label__politics": 0.0002796649932861328, "__label__religion": 0.00041866302490234375, "__label__science_tech": 0.01544189453125, "__label__social_life": 8.434057235717773e-05, "__label__software": 0.0052642822265625, "__label__software_dev": 0.97216796875, "__label__sports_fitness": 0.00022232532501220703, "__label__transportation": 0.0003781318664550781, "__label__travel": 0.00018286705017089844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34441, 0.02241]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34441, 0.17695]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34441, 0.90522]], "google_gemma-3-12b-it_contains_pii": [[0, 2199, false], [2199, 6963, null], [6963, 11870, null], [11870, 17600, null], [17600, 23416, null], [23416, 29352, null], [29352, 34441, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2199, true], [2199, 6963, null], [6963, 11870, null], [11870, 17600, null], [17600, 23416, null], [23416, 29352, null], [29352, 34441, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34441, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34441, null]], "pdf_page_numbers": [[0, 2199, 1], [2199, 6963, 2], [6963, 11870, 3], [11870, 17600, 4], [17600, 23416, 5], [23416, 29352, 6], [29352, 34441, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34441, 0.09848]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
4b92c3c246220c53de5f95484b325929e724e1a5
A New Family of String Pattern Matching Algorithms

Bruce W. Watson, Richard E. Watson
Ribbit Software Systems Inc. (IST Technologies Research Group)
Box 24040, 297 Bernard Ave. Kelowna, B.C., V1Y 9P9, Canada
e-mail: {watson, rwatson}@RibbitSoft.com

Abstract. Even though the field of pattern matching has been well studied, there are still many interesting algorithms to be discovered. In this paper, we present a new family of single keyword pattern matching algorithms. We begin by deriving a common ancestor algorithm, which naïvely solves the problem. Through a series of correctness-preserving predicate strengthenings and implementation choices, we derive efficient variants of this algorithm. This paper also presents one of the first algorithms which can be used to make a minimal number of match attempts within the input string (by maintaining as much information as possible from each match attempt).

Key words: single keyword pattern matching, shift distances, match attempts, reusing match information, predicate strengthening and weakening.

1 Introduction and related work

In this paper, we present a new family of algorithms solving the single keyword string pattern matching problem. This particular pattern matching problem can be described as follows: given an input string $S$ and a keyword $p$, find all occurrences of $p$ as a contiguous substring of $S$. The field of string pattern matching is generally well studied; however, it continues to yield new and exciting algorithms, as was seen in Watson's Ph.D. dissertation [Wats95], the recent book by Crochemore and Rytter [2], the more classic paper by Hume and Sunday [7] and the book by Gonnet and Baeza-Yates [4]. In the dissertation [Wats95], a taxonomy of existing algorithms was presented, along with a number of new algorithms. Any given algorithm may have more than one possible derivation, leading to different classifications of the algorithm in a taxonomy (this is precisely what happened with the Boyer-Moore-type algorithms as presented in the dissertation [Wats95]). Many of the new derivations can prove to be more than just an educational curiosity, possibly leading to interesting new families of algorithms. This paper presents one such family — with some new algorithms and also some alternative derivations of existing ones. While a few of the derivation steps are shared with the presentation in [Wats95], this paper takes a substantially different approach overall and arrives at some completely new algorithms. The algorithms presented in this paper can be extended to handle more complex pattern matching problems, including multiple keyword pattern matching, regular pattern matching and multi-dimensional pattern matching. Our derivation begins with a description of the problem, followed by a naïve first algorithm. We then make incremental (correctness preserving) improvements to these algorithms, eventually yielding efficient variants. Throughout the paper, we precede each definition with some intuitive background. Before presenting the derivation, we give the mathematical preliminaries necessary to read this paper.

2 Mathematical preliminaries

While most of the mathematical notation and definitions used in this paper is described in detail in [5], here we present some more specific notations. Indexing within strings begins at 0, as in the C and C++ programming languages.
We use ranges of integers throughout the paper, defined by (for integers $i$ and $j$):
\[
[i, j) = \{ k \mid i \leq k < j \} \\
(i, j] = \{ k \mid i < k \leq j \} \\
[i, j] = [i, j) \cup (i, j] \\
(i, j) = [i, j) \cap (i, j]
\]
In addition, we define a permutation of a set of integers to be a bijective mapping of those integers onto themselves.

3 The problem and a first algorithm

Before giving the problem specification (in the form of a postcondition to the algorithms), we define a predicate which will make the postcondition and algorithms easier to read. Keyword $p$ (with the restriction that $p \neq \varepsilon$, where $\varepsilon$ is the empty string) is said to match at position $j$ in input string $S$ if $p = S_j \cdots S_{j+|p|-1}$; this is restated in the following predicate:

**Definition 3.1 (Predicate Matches):** We define predicate *Matches* as
\[
\mathit{Matches}(S, p, j) \equiv p = S_j \cdots S_{j+|p|-1}
\]

The pattern matching problem requires us to compute the set of all matches of keyword $p$ in input string $S$. We register the matches as the set $O$ of all indices $j$ (in $S$) such that $\mathit{Matches}(S, p, j)$ holds.

Definition 3.2 (Single keyword pattern matching problem): Given a common alphabet $V$, input string $S$, and pattern keyword $p$, the problem is defined using postcondition $PM$:
$$
O = \{ j \mid j \in [0, |S|) \land \mathit{Matches}(S, p, j) \}
$$
Note that this postcondition implicitly depends upon $S$ and $p$. $\square$

We can now present a nondeterministic algorithm which keeps track of the set of possible indices (in $S$) at which a match might still be found (indices at which we have not yet checked for a match). This set is known as the live zone. Those indices not in the live zone are said to be in the dead zone. This gives us our first algorithm (presented in the guarded command language of Dijkstra [3, 1]).

Algorithm 3.3:
```plaintext
live, dead := [0, |S|), ∅;
O := ∅;
{ invariant: live ∪ dead = [0, |S|) ∧ live ∩ dead = ∅
    ∧ O = { j | j ∈ dead ∧ Matches(S, p, j) } }
do live ≠ ∅ →
    let j : j ∈ live;
    live, dead := live \ {j}, dead ∪ {j};
    if Matches(S, p, j) → O := O ∪ {j}
    | ¬Matches(S, p, j) → skip
    fi
od{ postcondition: PM }
```
The invariant specifies that live and dead are disjoint and account for all indices in $S$; additionally, any match at an element of dead has already been registered. Thanks to this relationship between live and dead, we could have written the repetition condition $live \neq \emptyset$ as $dead \neq [0, |S|)$, and the $j$ selection condition $j \in live$ as $j \notin dead$. It should be easy to see that the invariant and the termination condition of the repetition imply the postcondition — yielding a correct algorithm. Note that this algorithm is highly over-specified in keeping both variables live and dead to represent the live and dead zones, respectively. For efficiency, only one of these sets would normally be kept. Some of the rightmost positions in $S$ cannot possibly accommodate matches — no match can be found at any point $j \in (|S| - |p|, |S|)$ since $|S_j \cdots S_{|S|-1}| = |S| - j < |p|$ (the match attempt begins too close to the end of $S$ for $p$ to fit). For this reason, we safely change the initializations of live and dead to
$$
live, dead := [0, |S| - |p|], (|S| - |p|, |S|)
$$
In the next section, we give a deterministic (more realistically implementable) version of the last algorithm.
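As a concreteness check, the following C# sketch renders Algorithm 3.3 directly, representing the live zone as an explicit set and picking an arbitrary element on each iteration. The names (`NaivePatternMatcher`, `FindMatches`) are ours, not the paper's; the point is only to show the live/dead-zone bookkeeping.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A direct, deliberately naïve rendering of Algorithm 3.3: keep the live zone
// explicitly, pick an arbitrary live index, and test Matches there.
public static class NaivePatternMatcher
{
    static bool Matches(string s, string p, int j) =>
        s.Substring(j, p.Length) == p;   // p = S_j ... S_{j+|p|-1}

    public static ISet<int> FindMatches(string s, string p)
    {
        var output = new HashSet<int>();
        // live := [0, |S| - |p|]; positions beyond |S| - |p| are dead from the start.
        var live = new HashSet<int>(Enumerable.Range(0, Math.Max(0, s.Length - p.Length + 1)));
        while (live.Count > 0)
        {
            int j = live.First();   // the nondeterministic 'let j : j ∈ live'
            live.Remove(j);         // j moves to the dead zone
            if (Matches(s, p, j)) output.Add(j);
        }
        return output;
    }

    public static void Main() =>
        // Prints 0 and 7 (in some order) for the two occurrences of "abra".
        Console.WriteLine(string.Join(", ", FindMatches("abracadabra", "abra")));
}
```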
4 A more deterministic algorithm

In the last algorithm, our comparison of $p$ with $S_j \cdots S_{j+|p|-1}$ is embedded within the evaluation of predicate *Matches*. In this section, we make this comparison explicit. We begin by noting that $p = S_j \cdots S_{j+|p|-1}$ is equivalent to comparing the individual symbols $p_k$ of $p$ with the corresponding symbols $S_{j+k}$ of $S$ (for $k \in [0, |p|)$). In fact, we can consider the symbols in any order whatsoever. To determine the order in which they will be considered, we introduce *match orders*:

Definition 4.1 (Match order): We define a match order $mo$ as a permutation on $[0, |p|)$.

Using $mo$, we can restate our match predicate.

Property 4.2 (Predicate Matches): Predicate *Matches* is restated as
\[
\mathit{Matches}(S, p, j) \equiv (\forall i : i \in [0, |p|) : p_{mo(i)} = S_{j+mo(i)})
\]

This rendition of the predicate will be evaluated by a repetition which uses a new integer variable $i$ to step from 0 to $|p| - 1$, comparing $p_{mo(i)}$ to the corresponding symbol of $S$. As $i$ increases, the repetition maintains the invariant
\[
(\forall k : k \in [0, i) : p_{mo(k)} = S_{j+mo(k)})
\]
and terminates as early as possible. In the following algorithm, we use the match order $mo$, the new repetition and our previous optimization of the initializations of dead and live.

Algorithm 4.3:
```plaintext
live, dead := [0, |S| - |p|], (|S| - |p|, |S|);
O := ∅;
{ invariant: live ∪ dead = [0, |S|) ∧ live ∩ dead = ∅
    ∧ O = { j | j ∈ dead ∧ Matches(S, p, j) } }
do live ≠ ∅ →
    let j : j ∈ live;
    live, dead := live \ {j}, dead ∪ {j};
    i := 0;
    { invariant: (∀ k : k ∈ [0, i) : p_mo(k) = S_(j+mo(k))) }
    do i < |p| cand p_mo(i) = S_(j+mo(i)) →
        i := i + 1
    od;
    { postcondition: (∀ k : k ∈ [0, i) : p_mo(k) = S_(j+mo(k)))
        ∧ (i < |p| ⇒ p_mo(i) ≠ S_(j+mo(i))) }
    if i = |p| → O := O ∪ {j}
    | i < |p| → skip
    fi
od{ postcondition: PM }
```

The operator $P \text{ cand } Q$ appears in the guard of the inner loop of the above algorithm. This operator is similar to conjunction $P \land Q$ except that, if the first conjunct evaluates to *false*, then the second conjunct is not even evaluated. This proves to be a useful property in cases such as the loop guard since, if the first conjunct ($i < |p|$) is *false* (hence $i \geq |p|$, and indeed $i = |p|$), then the term $mo(i)$ appearing in the second conjunct is not even defined. Note that the implication within the second conjunct of the loop postcondition is derived from the loop guard, forcing the implication operator to be conditional as well (that is, if $i < |p|$ is determined to be *false*, then $p_{mo(i)} \neq S_{j+mo(i)}$ is not even evaluated).

As we will see in the next section, the particular choice of $mo$ can make a difference in the performance of the algorithm. Some possible match orders include 'forward' ($mo$ is the identity permutation) and 'reverse' ($mo(i) = |p| - i - 1$). The permutation chosen could even be devised according to some theoretical expectations or statistical analysis for a particular application. For instance, if $p$ contained a subsequence of characters known to appear very rarely within the expected type of input string, then the permutation would be chosen so as to check for a match within that subsequence first (since this may result in discovering a mismatch sooner). This approach is standard fare, and is used to find fast variants of the Boyer-Moore algorithms (as described in [7]). Yet another possibility which could prove interesting is that $mo$ is chosen on-the-fly; that is, $mo(i)$ could be allowed to depend upon $mo(i-1)$, $mo(i-2)$, $\ldots$, $mo(0)$ and even upon other factors such as how much of the input string we have already processed. Such a choice of permutation would be highly specialized to a particular instance of this problem and we do not explore it any further in this paper. In the next section, we outline some precomputation on $p$ which speeds up the algorithm tremendously but also depends upon the choice of $mo$, meaning that if we devised the permutation on-the-fly, we would be forced to perform the precomputation for each of the possible unique permutations that our algorithm could produce (a maximum of $|p|!$). A C# sketch of the match-order-driven attempt loop appears below.
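The sketch renders the inner repetition of Algorithm 4.3 with an explicit match-order permutation; the method names and the choice of a 'reverse' order for the demo are our own illustrative assumptions.

```csharp
using System;
using System.Linq;

// The attempt loop of Algorithm 4.3: probe the pattern positions in the
// order given by the permutation mo, stopping at the first mismatch.
public static class MatchOrderAttempt
{
    // Returns the number of positions (in mo-order) that matched at index j.
    // A return value of p.Length means Matches(S, p, j) holds.
    public static int Attempt(string s, string p, int j, int[] mo)
    {
        int i = 0;
        // 'cand' (conditional and) corresponds to C#'s short-circuiting &&.
        while (i < p.Length && p[mo[i]] == s[j + mo[i]])
            i++;
        return i;
    }

    public static void Main()
    {
        string s = "abracadabra", p = "abra";
        // 'reverse' match order: |p|-1, ..., 1, 0
        int[] reverse = Enumerable.Range(0, p.Length).Select(i => p.Length - i - 1).ToArray();
        for (int j = 0; j <= s.Length - p.Length; j++)
            if (Attempt(s, p, j, reverse) == p.Length)
                Console.WriteLine($"match at {j}"); // match at 0, match at 7
    }
}
```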
## 5 Reusing match information

On each iteration of the outer repetition, index $j$ is chosen and eliminated from the live zone in the statement:
\[
live, dead := live \setminus \{j\}, dead \cup \{j\}
\]
The performance of the algorithm can be improved if we remove more than just $j$ in some of the iterations. To do this, we can use some of the match information, such as $i$, which indicates how far through $mo$ the match attempt proceeded before finding a mismatching symbol. The information most readily available is the postcondition of the inner repetition:
\[
(\forall k : k \in [0, i) : p_{mo(k)} = S_{j+mo(k)}) \land (i < |p| \Rightarrow p_{mo(i)} \neq S_{j+mo(i)})
\]
We denote this postcondition by $\mathit{Result}(S, p, i, j)$. Since this postcondition holds, we may be able to deduce that certain indices in $S$ cannot possibly be the site of a match. It is such indices which we could also remove from the live zone. They are formally characterized as:
\[
\{ x \mid x \in [0, |S|) \land (\mathit{Result}(S, p, i, j) \Rightarrow \neg \mathit{Matches}(S, p, x)) \}
\]
Determining this set at pattern matching time is inefficient and not easily implemented. We wish to derive a safe approximation of this set which can be precomputed, tabulated and indexed (at pattern matching time) by $i$. In order to precompute it, the approximation must be independent of $j$ and $S$. We wish to find a strengthening of the range predicate since this will allow us to still remove a safe set of elements from set live, thanks to the property that, if $P \Rightarrow Q$ ($P$ is a strengthening of $Q$, and $Q$ is a weakening of $P$), then
\[
\{ x \mid P(x) \} \subseteq \{ x \mid Q(x) \}
\]
As a first step towards this approximation, we can normalize the ideal set (above) by subtracting $j$ from each element. The resulting characterization will be more useful for precomputation reasons:
\[
\{ x \mid x \in [-j, |S| - j) \land (\mathit{Result}(S, p, i, j) \Rightarrow \neg \mathit{Matches}(S, p, j + x)) \}
\]
Note that this still depends upon $j$; however, it will make some of the derivation steps shown shortly in Section 5.1 easier. Because those steps are rather detailed, they are presented in isolation.
Condensed, the derivation appears as:
\[
(\mathit{Result}(S, p, i, j) \Rightarrow \neg \mathit{Matches}(S, p, j + x))
\]
\[
\Leftarrow \quad \{ \text{Section 5.1} \}
\]
\[
\neg\big( (\forall k : k \in [0, i) \land mo(k) \in [x, |p| + x) : p_{mo(k)} = p_{mo(k) - x}) \\
\quad \land\ (i < |p| \land mo(i) \in [x, |p| + x) \Rightarrow p_{mo(i)} \neq p_{mo(i) - x}) \big)
\]
\[
\equiv \quad \{ \text{define the predicate } \mathit{Approximation}(p, i, x) \}
\]
\[
\mathit{Approximation}(p, i, x)
\]

Note that we define the predicate $\mathit{Approximation}(p, i, x)$, which depends only on $p$, $i$ and $x$ and hence can be precomputed and tabulated. It should be mentioned that this is one of several possible useful strengthenings which could be derived. We could even have used the strongest predicate, *false*, instead of $\mathit{Approximation}(p, i, x)$. This would yield the empty set, $\emptyset$, to be removed from live in addition to $j$ (as in the previous algorithm).

We can derive a smaller range of $x$ for which we have to check whether $\mathit{Approximation}(p, i, x)$ holds. Notice that choosing an $x$ such that $[x, |p| + x) \cap [0, |p|) = \emptyset$ has two important consequences:

- The range of the quantification in the first conjunct of $\mathit{Approximation}(p, i, x)$ is empty (hence this conjunct is *true*, by the definition of universal quantification with an empty range).
- The range condition of the second conjunct (the 'implicator') is *false* — hence the whole of the second conjunct is *true* since $false \Rightarrow p$ holds for all predicates $p$.

With this choice of $x$, we see that predicate $\mathit{Approximation}(p, i, x)$ always evaluates to *false*, in which case we need not even consider values of $x$ such that $[x, |p| + x) \cap [0, |p|) = \emptyset$. This simplification can be seen in the following algorithm, where we have solved the above range equation for $x$, yielding the restriction that $x \in [1 - |p|, |p| - 1)$. Intuitively, we know that there must be such a range restriction since we cannot possibly know, from the current match attempt, whether or not we will find a match of $p$ in $S$ more than $|p|$ symbols away. Since $\mathit{Approximation}$ depends only on $p$, $i$ and $x$, it can be tabulated before matching begins; a sketch of this tabulation is given below.
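As an illustration of the tabulation, the following C# sketch precomputes $\mathit{Approximation}(p, i, x)$ for all $i \in [0, |p|]$ and all $x \in [1 - |p|, |p| - 1)$; the table layout and names are our own assumptions, not the paper's.

```csharp
using System;
using System.Linq;

// Precomputation of Approximation(p, i, x) as defined above. Rows are indexed
// by i in [0, |p|]; columns by x in [1 - |p|, |p| - 1), offset by |p| - 1.
public static class ApproximationTable
{
    public static bool[,] Precompute(string p, int[] mo)
    {
        int m = p.Length;
        var table = new bool[m + 1, Math.Max(0, 2 * m - 2)];
        for (int i = 0; i <= m; i++)
            for (int x = 1 - m; x < m - 1; x++)
            {
                // First conjunct: every probed position that overlaps the
                // shifted pattern must agree with it.
                bool agree = Enumerable.Range(0, i).All(k =>
                    mo[k] < x || mo[k] >= m + x || p[mo[k]] == p[mo[k] - x]);
                // Second conjunct: the mismatching position, if it overlaps,
                // must also mismatch against the shifted pattern.
                bool mism = !(i < m && mo[i] >= x && mo[i] < m + x && p[mo[i]] == p[mo[i] - x]);
                table[i, x + m - 1] = !(agree && mism);   // Approximation = ¬(A ∧ B)
            }
        return table;
    }
}
```

At matching time, row $i$ of this table indicates, for each candidate shift $x$, whether position $j + x$ can safely be moved to the dead zone.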
Finally, we have the following algorithm (in which we have added the additional update of live and dead below the inner repetition). Note that we introduce the set nogood to accumulate the indices for which $\mathit{Approximation}(p, i, x)$ holds. Also note that we renormalize the set nogood by adding $j$ to each of its members and ensuring that it is within the valid range of indices, $[0, |S|)$.

Algorithm 5.1:
```plaintext
live, dead := [0, |S| - |p|], (|S| - |p|, |S|);
O := ∅;
{ invariant: live ∪ dead = [0, |S|) ∧ live ∩ dead = ∅
    ∧ O = { l | l ∈ dead ∧ Matches(S, p, l) } }
do live ≠ ∅ →
    let j : j ∈ live;
    live, dead := live \ {j}, dead ∪ {j};
    i := 0;
    { invariant: (∀ k : k ∈ [0, i) : p_mo(k) = S_(j+mo(k))) }
    do i < |p| cand p_mo(i) = S_(j+mo(i)) →
        i := i + 1
    od; { postcondition: Result(S, p, i, j) }
    if i = |p| → O := O ∪ {j}
    | i < |p| → skip
    fi;
    nogood := ({ x | x ∈ [1 - |p|, |p| - 1) ∧ Approximation(p, i, x) } + j) ∩ [0, |S|);
    live := live \ nogood;
    dead := dead ∪ nogood
od{ postcondition: PM }
```

5.1 Range predicate strengthening

Here, we present the derivation of a strengthening of the range predicate
$$
\mathit{Result}(S, p, i, j) \Rightarrow \neg \mathit{Matches}(S, p, j + x)
$$
Being more comfortable with weakening steps, we begin with the negation of the above range predicate and proceed by weakening:
\[
\neg(\mathit{Result}(S, p, i, j) \Rightarrow \neg \mathit{Matches}(S, p, j + x))
\]
\[
\equiv \quad \{ \text{implication} \}
\]
\[
\mathit{Result}(S, p, i, j) \land \mathit{Matches}(S, p, j + x)
\]
\[
\equiv \quad \{ \text{definitions of } \mathit{Result} \text{ and } \mathit{Matches} \}
\]
\[
(\forall k : k \in [0, i) : p_{mo(k)} = S_{j+mo(k)})
\land (i < |p| \Rightarrow p_{mo(i)} \neq S_{j+mo(i)})
\land (\forall k : k \in [0, |p|) : p_k = S_{j+x+k})
\]
\[
\Rightarrow \quad \{ \text{restrict the first two conjuncts to positions overlapping the match at } j + x \text{; there } S_{j+mo(k)} = p_{mo(k)-x} \}
\]
\[
(\forall k : k \in [0, i) \land mo(k) \in [x, |p| + x) : p_{mo(k)} = p_{mo(k)-x})
\land (i < |p| \land mo(i) \in [x, |p| + x) \Rightarrow p_{mo(i)} \neq p_{mo(i)-x})
\]
Negating both sides and taking the contrapositive yields the strengthening used above: $\mathit{Approximation}(p, i, x) \Rightarrow (\mathit{Result}(S, p, i, j) \Rightarrow \neg \mathit{Matches}(S, p, j + x))$.

6 Choosing j from the live zone

In this section, we discuss strategies for choosing the index $j$ (from the live zone) at which to make a match attempt. In the last algorithm, the way in which $j$ is chosen from set live is nondeterministic. This leads to the situation that live (and, of course, dead) is fragmented, meaning that an implementation of the algorithm would have to maintain a set of indices for live. If we can ensure that live is contiguous, then an implementation would only need to keep track of the (one or two) boundary points between live and dead. There are several ways to do this, and we discuss some of them in the following subsections. Each represents a particular policy to be used in the selection of $j$.

6.1 Minimal element

We could use the policy of always taking the minimal element of live.
In that case, we can make some simplifications to the algorithm (which, in turn, improve the algorithm's performance):

- We need only store the minimal element of live, instead of the sets live and dead. We use $\hat{live}$ (written live^ in the algorithm below) to denote this minimal element.
- The dead zone update can be modified as follows: we will already have considered all of the positions to the left of $j$, and so we can ignore the negative elements of the update set:
\[
\{ x \mid x \in [1 - |p|, 0) \land \mathit{Approximation}(p, i, x) \}
\]
Indeed, for the new update of live and dead we can simply advance $\hat{live}$ past the longest contiguous run of update-set elements starting at $j$ (so that everything to the left of $\hat{live}$ remains dead).

Depending upon the choice of weakening, and the choice of match order, the above policy yields variants of the classical Boyer-Moore algorithm (see [Wats95, 2, 7]):

Algorithm 6.1:
```plaintext
live^ := 0;
O := ∅;
do live^ ≤ |S| - |p| →
    j := live^;
    live^ := live^ + 1;
    i := 0;
    { invariant: (∀ k : k ∈ [0, i) : p_mo(k) = S_(j+mo(k))) }
    do i < |p| cand p_mo(i) = S_(j+mo(i)) →
        i := i + 1
    od; { postcondition: Result(S, p, i, j) }
    if i = |p| → O := O ∪ {j}
    | i < |p| → skip
    fi;
    nogood := (MAX x : x ∈ [0, |p| - 1) ∧ (∀ h : h ∈ [0, x] : Approximation(p, i, h)) : x);
    live^ := live^ + nogood
od{ postcondition: PM }
```
$\square$

6.2 Maximal element

We could always choose the maximal element of live. This would yield the dual of the previous algorithm.

6.3 Randomization

We could randomize the choice of $j$. Given the computational cost of most reasonable-quality pseudo-random number generators, it is not yet clear that this would yield an interesting or efficient algorithm. It is conceivable that there exist instances of the problem which could benefit from randomly selected match attempts.

6.4 Recursion

We could also devise a recursive version of the algorithm as a procedure. This procedure receives a contiguous range of live indices (live), initially consisting of the range $[0, |S| - |p|]$. If the set it receives is empty, the procedure returns immediately. If the set is non-empty, $j$ is chosen so that the resulting dead zone appears reasonably close to the middle of the current live zone². This ensures that we discard as little information as possible from the nogood index set. After the match attempt, the procedure recursively invokes itself twice, with the two reduced live zones on either side of the new dead zone.
This yields the following procedure:

Algorithm 6.2:
```plaintext
proc mat(S, p, live, dead) → { live is contiguous }
    if live = ∅ → skip
    | live ≠ ∅ →
        live_lower := (MIN k : k ∈ live : k);
        live_upper := (MAX k : k ∈ live : k);
        j := (live_lower + live_upper - |p|) / 2;
        i := 0;
        { invariant: (∀ k : k ∈ [0, i) : p_mo(k) = S_(j+mo(k))) }
        do i < |p| cand p_mo(i) = S_(j+mo(i)) →
            i := i + 1
        od; { postcondition: Result(S, p, i, j) }
        if i = |p| → O := O ∪ {j}
        | i < |p| → skip
        fi;
        new_dead := ({ x | x ∈ [1 - |p|, |p| - 1) ∧ Approximation(p, i, x) } + j) ∩ [0, |S|);
        dead := dead ∪ new_dead;
        mat(S, p, [live_lower, (MIN k : k ∈ new_dead : k)), dead);
        mat(S, p, ((MAX k : k ∈ new_dead : k), live_upper], dead)
    fi
corp
```
²The algorithm given in this section makes a simple approximation by taking the middle of the live zone it receives and subtracting $|p|/2$.

This procedure is used in the algorithm:

Algorithm 6.3:
```plaintext
O := ∅;
mat(S, p, [0, |S| - |p|], (|S| - |p|, |S|))
{ postcondition: PM }
```

Naturally, for efficiency reasons, the set live can be represented by its minimal and maximal elements (since it is contiguous).
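To make the recursive policy concrete, the following C# sketch (our own rendering, with hypothetical names) implements the essence of Algorithm 6.2, reusing the `ApproximationTable.Precompute` sketch given in Section 5 above. It restricts the dead-zone growth to the contiguous run of nogood shifts around the attempt position, so each recursive call again receives a contiguous live zone.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A rendering of the recursive Algorithm 6.2. This is an illustration under
// our own assumptions, not the authors' implementation.
public static class DeadZoneMatcher
{
    public static SortedSet<int> Find(string s, string p, int[] mo)
    {
        var output = new SortedSet<int>();
        bool[,] approx = ApproximationTable.Precompute(p, mo);
        Mat(s, p, mo, approx, 0, s.Length - p.Length, output);
        return output;
    }

    // The live zone is the contiguous index range [lo, hi].
    static void Mat(string s, string p, int[] mo, bool[,] approx,
                    int lo, int hi, SortedSet<int> output)
    {
        if (lo > hi) return;
        int m = p.Length;
        // Attempt roughly in the middle of the live zone (cf. footnote 2).
        int j = Math.Clamp((lo + hi - m) / 2, lo, hi);
        int i = 0;
        while (i < m && p[mo[i]] == s[j + mo[i]]) i++;   // the match attempt
        if (i == m) output.Add(j);
        // Grow the dead zone contiguously around j using Approximation(p, i, x).
        bool Nogood(int x) => x >= 1 - m && x <= m - 2 && approx[i, x + m - 1];
        int deadLo = j, deadHi = j;
        while (Nogood(deadLo - 1 - j)) deadLo--;
        while (Nogood(deadHi + 1 - j)) deadHi++;
        Mat(s, p, mo, approx, lo, deadLo - 1, output);
        Mat(s, p, mo, approx, deadHi + 1, hi, output);
    }

    public static void Main()
    {
        string p = "abra";
        int[] forward = Enumerable.Range(0, p.Length).ToArray(); // identity match order
        Console.WriteLine(string.Join(", ", Find("abracadabra", p, forward))); // 0, 7
    }
}
```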
### 7 Further work

The family of algorithms presented in this paper can easily be extended to multiple pattern matching and to regular pattern matching (using regular expressions or regular grammars). In each of these cases, various strengthenings of the update predicate could be explored and specialized methods for choosing the index of the next match attempt determined. Another branch in this family tree of algorithms could be derived by removing the conjunct $p_{mo(i)} = S_{j+mo(i)}$ from the guard of the inner repetition (that is, by not terminating the match attempt as soon as we encounter a mismatch). This would allow us to accumulate more mismatch information and possibly provide a weaker strengthening than $\mathit{Approximation}(p, i, x)$ (and hence a larger set nogood). It is not yet clear that this would lead to an interesting family of algorithms. Few of the algorithms presented here have been implemented in practice. Some of the algorithms presented here can be manipulated to yield the well-known Boyer-Moore variants, and we can therefore speculate that their running time is excellent, based upon the results presented in [Wats95]. It would be interesting to see how the new algorithms perform against the existing variants.

### 8 Conclusions

We have shown that there are still many interesting algorithms to be derived within the field of single keyword pattern matching. The correctness-preserving derivation of an entirely new family of such algorithms demonstrates the use of formal methods and the use of predicates, invariants, postconditions and preconditions. It is unlikely that such a family of algorithms could have been devised without the use of formal methods. Historically, keyword pattern matching algorithms have restricted themselves to processing the input string from left to right, thus discarding half of the useful information which can be determined from previous match attempts. As a new starting point for pattern matching algorithms, this paper proposes pattern matching in the more general manner of making match attempts in a less restricted order within the input string. With the advent of both hardware and software which enable near-constant-time lookup of a random character in a file stream (using memory-mapped files, as are available in most newer operating systems), such algorithms will prove useful for typical single keyword pattern matching applications (ones which have a finite input string which can be randomly accessed). The derivation also yielded a recursive algorithm which appears to be particularly efficient. The algorithm has been implemented, and benchmarking results will be presented in the final paper, comparing the algorithm to the other extensively benchmarked algorithms in [7, Wats95].

9 Acknowledgements

We would like to thank Nanette Saes and Mervin Watson for proofreading this paper, Ricardo Baeza-Yates for serving as a sounding board, and Ribbit Software Systems Inc. for allowing us to pursue some of our pure research interests.
{"Source-Url": "http://www.stringology.org/cgi-bin/getfile.cgi?c=-&n=2&t=pdf&y=1997", "len_cl100k_base": 7797, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 47989, "total-output-tokens": 8983, "length": "2e12", "weborganizer": {"__label__adult": 0.0004723072052001953, "__label__art_design": 0.0004100799560546875, "__label__crime_law": 0.0005817413330078125, "__label__education_jobs": 0.0008664131164550781, "__label__entertainment": 0.00011938810348510742, "__label__fashion_beauty": 0.00021767616271972656, "__label__finance_business": 0.00028204917907714844, "__label__food_dining": 0.0004475116729736328, "__label__games": 0.0006270408630371094, "__label__hardware": 0.00122833251953125, "__label__health": 0.0010747909545898438, "__label__history": 0.00035643577575683594, "__label__home_hobbies": 0.00015246868133544922, "__label__industrial": 0.0005707740783691406, "__label__literature": 0.0005450248718261719, "__label__politics": 0.00038051605224609375, "__label__religion": 0.0007653236389160156, "__label__science_tech": 0.08636474609375, "__label__social_life": 0.0001360177993774414, "__label__software": 0.006381988525390625, "__label__software_dev": 0.896484375, "__label__sports_fitness": 0.00043320655822753906, "__label__transportation": 0.0006546974182128906, "__label__travel": 0.0002391338348388672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28296, 0.01931]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28296, 0.21216]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28296, 0.83783]], "google_gemma-3-12b-it_contains_pii": [[0, 2400, false], [2400, 4577, null], [4577, 6984, null], [6984, 9343, null], [9343, 12561, null], [12561, 15563, null], [15563, 17557, null], [17557, 20060, null], [20060, 21924, null], [21924, 23898, null], [23898, 26563, null], [26563, 28296, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2400, true], [2400, 4577, null], [4577, 6984, null], [6984, 9343, null], [9343, 12561, null], [12561, 15563, null], [15563, 17557, null], [17557, 20060, null], [20060, 21924, null], [21924, 23898, null], [23898, 26563, null], [26563, 28296, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28296, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28296, null]], "pdf_page_numbers": [[0, 2400, 1], [2400, 4577, 2], [4577, 6984, 3], [6984, 9343, 4], [9343, 12561, 5], [12561, 15563, 6], [15563, 17557, 7], [17557, 20060, 8], [20060, 21924, 9], [21924, 23898, 10], [23898, 26563, 11], [26563, 28296, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28296, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
6b2569a89ede3f0dbc3b48c57f4417eac15a0352
PMPI: A multi-platform, multi-programming language MPI using .NET

Mohammad M. El Saifi, Edson Toshimi Midorikawa
Department of Computer Engineering and Digital Systems
Polytechnic School – University of São Paulo
Sao Paulo - SP - Brazil
{mohamad.saifi, edson.midorikawa}@poli.usp.br

ABSTRACT

Implementation of the MPI standard on heterogeneous platforms is desirable because it permits using resources discarded by existing MPI implementations for homogeneous systems. This paper describes PMPI, a partial implementation of the MPI standard on a heterogeneous platform. Unlike other MPI implementations, PMPI permits MPI processes written in different programming languages to run on a multi-platform system. PMPI is built on top of the .NET framework. PMPI can span multiple administrative domains distributed geographically. To programmers, the grid looks like a local MPI computation; the model of computation is indistinguishable from that of a standard MPI computation. This paper studies the implementation of PMPI with the Microsoft .NET framework and MONO to provide a common layer for a multi-programming-language, multi-platform MPI application. We show the results obtained with PMPI and compare them to MPICH2. The results show that the use of the .NET framework for PMPI is feasible and can be optimized for performance.

Keywords: MPI, Parallel Computing, HPC, .NET Framework, MONO

1. INTRODUCTION

For many years, parallel computation has been an attractive alternative for obtaining high-performance computing [Dongarra et al. 2003] [Foster 1995]. With the use of multiple computational nodes interconnected by a high-speed network, clusters of computers are the most common platform for parallel machines. The recent introduction of multi-core microprocessors will result in parallel computers becoming available on desktops. MPI is perhaps the best-known standard used in parallel computation, allowing nodes spread across the network to collaborate to achieve a common computational goal [Andrews 2000] [MPI Forum 1994]. The limitation of MPI is twofold. On the one hand, most existing MPI implementations, such as MPICH2, execute only on homogeneous platforms [MPICH2 2006]. Accordingly, idle cycles that are spread across a variety of machine architectures and operating systems on networked PCs are discarded because of the lack of an MPI that executes on a heterogeneous platform. These idle cycles are increasingly being recognized as a huge and largely untapped source of computing power. On the other hand, almost all existing MPI implementations use the C, C++ or FORTRAN programming languages. Accordingly, researchers and programmers who collaborate on the solution of the same problem need to stick to one of the languages that supports the MPI library they intend to use. An implementation of MPI that can tap into those idle resources on heterogeneous platforms is desirable because it allows researchers and programmers, who need high-performance computing and have heterogeneous platforms available around their campus, to use all available resources [Kelly, Roe and Sumitomo 2002] [Kelly and Roe 2002] [Kelly and Mason 2003]. Having the ability to use MPI on heterogeneous systems maximizes computational power resources. In addition to using MPI on a heterogeneous platform, programmers want to use a variety of programming languages in their computational program. In the same MPI computation, programmers want nodes to run applications written in different programming languages simultaneously using the MPI standard.
This is an advantage when multiple programmers participate in the solution of a single problem, with each programmer writing a program that runs on a separate node, as in same-data, multiple-program solutions. It permits programmers to exploit their abilities and skills in their preferred programming language, and to use the programming language that best suits the solution of the problem. This paper studies the feasibility of implementing the MPI standard on a heterogeneous platform by implementing the PMPI component. PMPI aims to provide programmers and researchers with a framework that takes care of a transparent communication infrastructure between the heterogeneous nodes in an MPI computation in a robust and secure manner. The programmer is left to concentrate only on the application-specific computational aspects. We take advantage of the .NET framework to provide application programmers with a choice of programming language, all of which can use the same PMPI framework classes. There are different choices that can be made to implement the PMPI component. We chose the .NET framework [Ritchter and Balena 2002] for this purpose as a first attempt and used .NET Remoting [McLean 2003] [Rammer 2002] as the communication infrastructure for PMPI. In this implementation, PMPI acts as a remote-object-based framework for creating MPI parallel applications. The framework is built using the extensibility features of the .NET Remoting framework. Unlike the Java virtual machine, the .NET runtime is designed to be language independent. Accordingly, developers can create their applications using any language that targets the CLR, such as C#, Visual Basic, Visual C++ or one of many other .NET languages such as Eiffel, Perl, Cobol, Component Pascal, Smalltalk, or Fortran [Ritchter and Balena 2002]. Today there are about twenty-six different programming languages that target the .NET framework [Ritchter and Balena 2002]. PMPI enables programmers to program in a normal MPI fashion, without being concerned with which platform or programming language the other participating nodes use. The main contribution of this paper is to study the feasibility of implementing MPI on a virtual machine and to show performance results compared to another existing MPI implementation. This offers programmers who have heterogeneous systems a library that can reap the available computational power on available machines. The remainder of this paper is organized as follows. Section 2 describes the architecture of PMPI. Section 3 describes the programming model of PMPI. Section 4 explains the sample application used in the tests. Section 5 describes the results and some preliminary performance figures. Section 6 discusses related work, and section 7 concludes and presents future work.

2. ARCHITECTURE

The PMPI architecture follows the standard structure of a layered networking architecture. PMPI is composed of three components. The first component is PMPI itself, which contains the MPI implementation. The second one is the agent that runs on each node participating in the MPI computation. The agent is responsible for starting MPI programs on nodes, and offers administrative information about nodes, in addition to information about administrative domains. The third component is the PMPI Gateway, or PIP (Platform Interface Portal). The PIP serves as a gateway to administrative domains to overcome problems raised by firewalls and NAT separating different administrative domains. Each administrative domain has a PIP known to all agents.
Inside the PMPI component, there is an address resolution layer that is transparent to programmers. This layer decides whether to direct MPI calls directly to other nodes or to their corresponding PIPs. This gives programmers the freedom to concentrate on their problem rather than on the communication implementation.

Figure 1: Four nodes using PMPI

Figure 1 shows a basic PMPI infrastructure. The figure shows a structure with four nodes running in one administrative domain connected by a local Ethernet network. The processes may be running on different platforms, and each process may be written in a different programming language. The PMPI communication infrastructure is constructed on .NET Remoting, which in turn is based on TCP/IP. .NET Remoting can be customized to support other protocols [Rammer 2002]. Figure 2 shows the PMPI component layers. At the top, we have the MPI interface that is available to programmers. When an MPI call is made, it passes through the address resolution module to check which administrative domain the destination node belongs to, and which communication method reaches that node at the lowest cost. For example, nodes behind firewalls may be reachable only through port 80 using the SOAP protocol, which is firewall friendly in contrast to the binary protocol. On the other hand, SOAP consumes more network bandwidth and is less efficient than binary formatting [McLean 2003]. Figure 3 shows a sketch of an MPI computation spanning two administrative domains, where each administrative domain is located behind a firewall. In this figure, MPI calls made from one administrative domain to the other are done through the PIPs of the administrative domains. The PIP serves as a proxy on behalf of the nodes making the call. The scenario in Figure 3 assumes that we have barriers in both administrative domains. In other words, nodes in administrative domain 1 cannot reach nodes in administrative domain 2 directly using remote object calls. Instead, they should use the PIP proxy service to exchange messages. To better understand the idea, let us take an example where node A in administrative domain one makes an MPI call to node B in administrative domain two. The address resolution layer of PMPI running on node A detects that node B is running in another administrative domain and that there is no way to reach node B directly because of a firewall or NAT. The address resolution layer directs the call to the PIP node of administrative domain one. The PIP in turn directs the MPI call to the PIP of administrative domain two. The PIP of administrative domain two receives the call and directs it to node B of its domain. If the call is synchronous, then the PIP of administrative domain one blocks node A until it receives a notification from the PIP of the other administrative domain that node B has received the call. The PIPs act as proxies on behalf of the nodes in their corresponding administrative domains. The rest of this section is divided into two subsections. The first describes the MPI standard. The second describes the PMPI architecture and constructs.

2.1 MPI: Message Passing Interface

In the message-passing library approach to parallel programming, a collection of processes executes programs written in a standard sequential language augmented with calls to a library of functions for sending and receiving messages. MPI is a complex system. In its entirety, it comprises 129 functions, many of which have numerous parameters or variants [Foster 1995].
In the MPI programming model, a computation comprises one or more processes that communicate by calling library routines to send and receive messages to other processes. In most MPI implementations, a fixed set of processes is created at program initialization, and one process is created per processor. However, these processes may execute different programs. Hence, the MPI programming model is sometimes referred to as multiple program multiple data (MPMD) to distinguish it from the SPMD model, in which every processor executes the same program. Processes can use point-to-point communication operations to send a message from one named process to another; these operations can be used to implement local and unstructured communications. A group of processes can call collective communication operations to perform commonly used global operations such as summation and broadcast. MPI's ability to probe for messages allows asynchronous communication. Probably MPI's most important feature from a software engineering viewpoint is its support for modular programming. A mechanism called a communicator allows the MPI programmer to define modules that encapsulate internal communication structures [MPI Forum 1994].

2.2 PMPI Basic Architecture

PMPI is built on top of the .NET framework. We are using Microsoft .NET framework 1.1 for Microsoft Windows and Mono 1.0.5 for Linux. Although Mono can run on PowerPC, BSD and other operating systems and architectures, we based our initial implementation on the Windows and Linux operating systems, although it can be extended to other operating systems without any modification to the code. The initial implementation of PMPI was devoted to implementing functionality rather than performance. Because of this, we selected higher-level constructs of the .NET framework to implement PMPI. For the communication layer, we used .NET Remoting, which is based on remote object communication. The classes that make up the .NET framework are layered, meaning that at the base of the framework are simple types, which are built on and reused by more complex types. .NET Remoting is one such complex type, which in turn is built as layers where each layer can be customized to programmer needs [Jones et al 2004]. This adds extra overhead compared to using simple raw classes such as the socket class [Rammer 2002]. We used C# as the programming language. All .NET programming language compilers target the CTS (common type system) of the framework. The C# compiler helps the programmer adhere to CTS types through the "CLSCompliantAttribute" attribute: when it is set to true, the compiler generates an error whenever you try to use a non-CLS type [Bock 2003]. This guarantees that the generated code can be accessed from all .NET programming languages, since all .NET programming languages target the CTS [Ritchter and Balena 2002]. Each node participating in the MPI computation should have the .NET framework installed. Nodes running Windows operating systems should install Microsoft Framework 1.1 on their machines. Nodes running Linux should install Mono 1.0.5. Although there are newer versions of the framework for both platforms, PMPI has been tested on these earlier frameworks. In addition to the framework installed on the machines participating in the MPI computation, the nodes should have PMPI installed. The initial implementation of PMPI needs bidirectional communication between the nodes; accordingly, firewalls can cause problems. The PIP component is not yet implemented.
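As an illustration of the CLS-compliance checking mentioned above, the small fragment below (our own example, not from the paper) marks an assembly as CLS-compliant; the compiler then flags public members whose types fall outside the language-common subset of the CTS.

```csharp
using System;

// Marking the assembly CLS-compliant asks the compiler to verify that the
// public surface is usable from any .NET language.
[assembly: CLSCompliant(true)]

public class MessageBuffer
{
    // OK: int (System.Int32) is CLS-compliant.
    public int Count { get; set; }

    // Flagged by the compiler as non-CLS-compliant: uint is an unsigned type,
    // which some .NET languages cannot consume.
    public uint RawLength { get; set; }
}
```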
Initially, PMPI implemented 20 MPI functions. These functions cover basic, asynchronous, collective and modular commands. When an MPI computation starts, each node registers a PMPI object at an end point known to the other nodes using a .NET remote object. With .NET Remoting, the framework creates a thread pool to receive the calls made against the remote object. When node A sends data to node B within the same administrative domain, node B's PMPI receives the data and immediately releases the calling object (node A in this case). When node B calls MPI_Receive, PMPI checks whether there is a message with the corresponding tag and source. If it finds a corresponding message, then a pointer to the message is passed to MPI_Receive, and the call returns immediately in node B. If no corresponding message is found with the requested tag and source, the call in node B blocks until node B receives the requested message. If node A uses the synchronous MPI_Ssend, then the PMPI layer on node A blocks until node B sends a release signal after the process in node B makes a call to MPI_Receive. PMPI uses a hash table data structure to manage received messages. The key of the hash table is a combination of the source, tag, and communicator ID. The value of the hash table points to a queue whose elements contain a data structure composed of the received message, message size, message type and the synchronization objects that the receiving thread will block on. When the node calls MPI_Receive with a particular tag, source and communicator, PMPI checks the hash table for pending messages in the queue. If it finds a message, it pops the message from the queue in a FIFO manner and wakes up the thread using the synchronization objects found in the queue element. When the awakened thread resumes, the message is passed to the MPI_Receive call. Note that if the call is made using MPI_Ssend, which is a synchronous send, the receiving thread will block the sending thread until it is woken up again by MPI_Receive in the manner explained above. If MPI_Receive is called before the corresponding MPI_Send and PMPI finds the queue empty, then it blocks the call on synchronization objects and enqueues the call, with the synchronization objects, in the queue whose pointer is stored in the hash table. Later, when PMPI is invoked by MPI_Send, PMPI first checks whether a pending MPI_Receive exists. If it finds a pending receive, it pops the queue, wakes the thread using the popped synchronization objects and returns. A sketch of this matching structure is given below. When it comes to collective operations, PMPI uses a thread pool to perform the collective task. PMPI uses a simple algorithm for collective tasks. Each communicator has a master node known to all participating nodes. The communicator master node is responsible for coordinating the collective calls. In other words, it is the communicator master node that decides when the collective call is done. PMPI implements this by using a thread pool in the communicator master node. When the collective call is made, PMPI checks whether the node is the master in the target communicator. If it is not, then it uses a methodology similar to the send/receive mechanism explained before. If it finds the node to be the communicator master, then it creates one thread for each node in the communicator and blocks on the synchronization object. When a thread in the pool terminates, it verifies whether the other threads in the pool have terminated; if not, the thread blocks on a synchronization object. If the thread happens to be the last one, then it wakes all the other threads using the synchronization objects. By this means, the communicator master manages the collective operation.
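The following C# sketch illustrates the kind of tag/source/communicator-keyed matching structure described above. The class and member names are our own assumptions; real PMPI code would additionally carry message size and type metadata and issue the synchronous-send release signal.

```csharp
using System.Collections.Generic;
using System.Threading;

// Hypothetical sketch of PMPI's pending-message table: messages are queued
// under a (source, tag, communicator) key; a receiver blocking on an empty
// queue waits until a matching message is delivered.
public sealed class PendingMessageTable
{
    private readonly Dictionary<(int source, int tag, int comm), Queue<byte[]>> queues
        = new Dictionary<(int, int, int), Queue<byte[]>>();
    private readonly object gate = new object();

    // Called when a message arrives from the remoting layer (MPI_Send side).
    public void Deliver(int source, int tag, int comm, byte[] payload)
    {
        lock (gate)
        {
            var key = (source, tag, comm);
            if (!queues.TryGetValue(key, out var q))
                queues[key] = q = new Queue<byte[]>();
            q.Enqueue(payload);
            Monitor.PulseAll(gate);   // wake any receiver blocked on this table
        }
    }

    // Called by MPI_Receive: returns immediately if a matching message is
    // queued, otherwise blocks until one is delivered.
    public byte[] Receive(int source, int tag, int comm)
    {
        lock (gate)
        {
            var key = (source, tag, comm);
            while (!(queues.TryGetValue(key, out var q) && q.Count > 0))
                Monitor.Wait(gate);
            return queues[key].Dequeue();   // FIFO matching per key
        }
    }
}
```

A single lock and `Monitor.PulseAll` keep the sketch short; a production version would use per-key synchronization objects, as the paper describes, to avoid waking unrelated receivers.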
The agents will be a separate component. For MS Windows, the agent is implemented as a Windows Service. The agent will be responsible for starting the programs on participating nodes. In addition, the agent will supply management data about the nodes themselves, such as available memory, CPU load, speed, administrative domains and other management data. Today, most operating systems implement Web-Based Enterprise Management (WBEM), which is an industry initiative to develop a standard technology for accessing management information in an enterprise environment. WMI is the Microsoft implementation of WBEM. The PIPs are part of the PMPI architecture but are not yet implemented. PIPs will be implemented using Web Services. The remote object model explained above will be substituted by the Web Service model. The PIP will be a gateway on behalf of the calling node. The architecture and implementation of the PIP will consider having two communicating PIPs on behalf of the sending and receiving nodes.

3. PROGRAMMING MODEL

The programming model is as simple as in any existing MPI implementation. The master node initializes the MPI computation using an XML computation file. PMPI is object based; therefore, the MPI functions are called as object methods. When PMPI is initialized, it publishes a remote object at a known end point. Each participating node knows the address and port of all other nodes in the MPI computation. When the program calls an MPI function, PMPI receives the function call and transmits it to the corresponding node after resolving its address internally. Although the current implementation did not target nodes running behind NATs and firewalls, PMPI's layered implementation makes it easy to build semantics that solve the complications raised by firewalls and NATs without programmer awareness. This helps the programmer to devote his efforts to programming rather than to MPI communications. Future work will customize the real proxy of the .NET Remoting object to intercept message calls and select the destination accordingly. We wrote applications in VB.NET, C#, managed C++, and J#. We ran each application on a different node. All four nodes ran under the Microsoft Windows XP operating system. For MONO running on Linux Redhat 9, we were limited to C#, since it is the only existing non-beta compiler. For simplicity, we used only the above programming languages, but this can be extended to any available .NET programming language. The MPI computation ran as if the programs at all nodes were written in the same programming language.

```csharp
MPI obj = new MPI();
obj.MPI_Init(args);
id = obj.MPI_Comm_Rank(MPI_Comm_World);
tasks = obj.MPI_Comm_Size(MPI_Comm_World);
obj.MPI_Send(offset, 1, MPI_Integer, dest, mtype, MPI_Comm_World);
obj.MPI_Send(rows, 1, MPI_Integer, dest, mtype, MPI_Comm_World);
```

Figure 4: Part of the sample application

Figure 4 shows part of the sample application written in C#, where the code initializes an MPI computation, gets its task id within COMM_WORLD, gets the COMM_WORLD size, sends data to the "dest" node and later receives data from the "dest" node. Note that the MPI functions are methods of a PMPI object called "obj". These methods are either static or instance methods. Static methods of PMPI enable us to write multithreaded programs running on a machine where all threads use the same PMPI object. Also, it is possible to start multiple PMPI objects, where each object participates in a different MPI computation, without the need for MPI communicators.
4. SAMPLE APPLICATION

We used as a sample application the master-worker model for matrix multiplication (A x B = C). The results of this sample are compared to MPICH2 for Windows in the next section. The master (task id 0) sends matrix B to all participating nodes (workers), and distributes the rows of matrix A evenly among the worker nodes. Workers perform the multiplication and send the result back to the master node. The master node accumulates the results from all workers into matrix C. The sample application was taken from the examples that install with MPICH2. In this sample application, the master does not participate in the multiplication itself; it just sends the data to the workers and collects the results into matrix C.

5. RESULTS

The tests were executed in three sets. The first set contains the results obtained by executing the sample application on a homogeneous-platform corporate network. The second test was done on the same corporate network with both PMPI and MPICH2. The last test was done on a cluster using homogeneous and heterogeneous platforms.

5.1 Results on the Corporate Homogeneous Platform

We first tested the application on standalone machines without using parallel MPI computation. We rewrote the application, taking out all MPI commands, and compiled it using the Microsoft Visual C++, Microsoft C# and MONO C# compilers. The corporate network was composed of AMD 1.5 GHz machines with 512 KB cache CPUs, 256 MB RAM and a 40 GB HD. The nodes ran under Windows XP. One node had dual operating systems: Windows XP and Redhat 9. The obtained results are as follows. The C# managed-code application executed 27% slower than the C++ application on a machine running the Windows XP or Windows 2003 operating system. On the machine running Linux Redhat 9 with the Mono .NET framework, C++ executed 10 times faster than C#. Comparing Microsoft .NET C# running on Windows XP to the MONO 1.0.5 C# compiler running under Linux Redhat 9, Microsoft C# executed 5 times faster than MONO C#. Before going any further, let us clarify some details about array access in the managed world and some performance issues. Each time an element of an array is accessed, the CLR ensures that the index is within the array's bounds. This prevents you from accessing memory that is outside the array, which would potentially corrupt other objects. If an invalid index is used to access an array element, the CLR throws a System.IndexOutOfRangeException exception. This index checking comes at a performance cost. If we have confidence in our code, we can access an array without having the CLR perform index checking. This feature is not available in all .NET languages and is not CLS compliant. Accordingly, only .NET languages that have this feature, such as C#, can benefit from fast array access. To give an idea of how much gain fast array access provides, we show the following results. C# using managed array access executed 20% slower than C# using fast array access on the machine running Windows XP. On Linux, C# using managed array access executed 5 times slower than C# using fast array access. As we note, the performance gain on Linux is huge (500%). The problem with fast array access is that not all .NET languages support it, since it is not CLS compliant. In addition, it is harder to code than managed array access since it uses pointers. Accordingly, the benefit of using fast array access is limited to only a subset of .NET programming languages.
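To illustrate the difference, the fragment below (our own example, not from the paper) contrasts bounds-checked access with "fast" pointer-based access using C#'s `unsafe` and `fixed` constructs; it must be compiled with the `/unsafe` option.

```csharp
// Compile with: csc /unsafe FastArray.cs
public static class FastArray
{
    // Managed access: the CLR checks every index against the array bounds
    // (the JIT can elide some checks in simple loops like this one).
    public static long SumChecked(int[] a)
    {
        long s = 0;
        for (int i = 0; i < a.Length; i++) s += a[i];
        return s;
    }

    // "Fast" access: pinning the array and walking it through a pointer
    // bypasses the per-access bounds check, at the cost of unsafe code.
    public static unsafe long SumUnchecked(int[] a)
    {
        long s = 0;
        fixed (int* p = a)
        {
            for (int i = 0; i < a.Length; i++) s += p[i];
        }
        return s;
    }
}
```

Since the JIT already removes many bounds checks in simple loops, the real gain is workload- and platform-dependent, which is consistent with the large Windows/Mono difference reported here.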
Accordingly, the benefit of using fast array access is limited to only a subset of the .NET programming languages. Later, we executed the application using both MPICH2 and PMPI, using managed array access with PMPI. The sample application running on the PMPI nodes was written in C#, J# (Java for .NET), managed C++, and VB.NET. The compiler choice did not affect the result: we used various combinations of the programming languages and got the same results. The results are shown only for the Windows OS since we used MPICH2 for Windows. In Figure 5, we show a comparison between PMPI and MPICH2 for different problem sizes executing on 6 nodes. The results demonstrate that PMPI executed between 40% and 70% slower than MPICH2. Figure 6 shows the linear relationship between the number of nodes and the execution time: as we increase the number of participating nodes, the execution time decreases linearly.

The cluster, named BIO, is composed of 8 nodes, each with dual 2.0 GHz, 512 KB cache CPUs, 512 MB RAM, and a 40 GB HD. As before, we tested the application first on standalone machines without using a parallel MPI computation. We rewrote the application, taking out all MPI commands, and compiled it using the Microsoft C# and MONO C# compilers. Later, we executed the application on the cluster using up to six nodes, where the nodes varied between nodes running Windows 2003 Server and nodes running Linux Redhat 8. The result is shown below in Figure 7. As the figure shows, the Microsoft .NET platform performed better than the MONO .NET framework. When we mixed nodes between the Windows and Linux operating systems, PMPI executed with performance equivalent to the average of executing on each platform independently.

Although PMPI executes slower than MPICH2, the main overhead is a result of managed array access and of using the high-level communication constructs of the .NET framework. This overhead was expected and is a subject for future work. In addition, we detected that the use of the thread pool within the program structure degraded PMPI performance in a master-worker model. This loss of performance resulted from the fact that the operating system has full control of the thread pool, which resulted in activating threads to receive results from some nodes while other threads were still sending data to other nodes. With a custom thread pool, PMPI will have full control over the executing threads and, in turn, can block receiving threads while PMPI is sending. This will improve performance considerably, especially with a large number of nodes, because as we increase the number of nodes, the tendency of nodes completing their jobs before the master grows. Moreover, there is other code tuning of PMPI that can improve performance, such as reducing .NET framework boxing, the mechanism by which the .NET framework moves data between the stack and the managed heap. Boxing in .NET managed code is known to have a performance cost, and minimizing it can improve performance considerably.

6. RELATED WORKS

In this section we discuss related work that uses parallel computing on multiple platforms. In [Fer05], an experiment with the implementation of parallel programs using C# running on Unix and Windows is presented. In [Will01], a binding between an already implemented MPI interface and C# is described. In [Car00], a multiplatform MPI implementation is presented for the JAVA programming language. However, none of the above works has targeted a multi-programming-language MPI.
7. CONCLUSION AND FUTURE WORKS

The first implementation of PMPI was shown to be feasible: it is possible to execute the MPI standard on multi-language and multiplatform systems. Although this first implementation showed that PMPI is slower than MPICH2, the difference is explained by known issues, and these issues can be eliminated. Care should be taken when using a heterogeneous system that includes Linux with managed array access; as shown in the preliminary results, MONO performs very poorly with managed array access. In such a case, we should consider using fast array access. The next step in this project is to extend PMPI to multiple administrative domains that span geographic areas across the Internet. In addition, lower-level communication constructs can improve performance, as can a custom thread pool that manages threads instead of the operating system thread pool; this will give us complete control over the threads. Also, we will compare JavaMPI to PMPI.

REFERENCES

[Fos95a] Foster, I., Designing and Building Parallel Programs, pp. 275-310, 1995
{"Source-Url": "http://dotnet.zcu.cz/NET_2006/Papers_2006/short/A59-full.pdf", "len_cl100k_base": 5796, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 23506, "total-output-tokens": 6992, "length": "2e12", "weborganizer": {"__label__adult": 0.00032448768615722656, "__label__art_design": 0.00025081634521484375, "__label__crime_law": 0.0003082752227783203, "__label__education_jobs": 0.00077056884765625, "__label__entertainment": 7.975101470947266e-05, "__label__fashion_beauty": 0.00014722347259521484, "__label__finance_business": 0.0002644062042236328, "__label__food_dining": 0.0003101825714111328, "__label__games": 0.0005402565002441406, "__label__hardware": 0.001628875732421875, "__label__health": 0.0005245208740234375, "__label__history": 0.0002875328063964844, "__label__home_hobbies": 0.0001043081283569336, "__label__industrial": 0.0005650520324707031, "__label__literature": 0.0002027750015258789, "__label__politics": 0.00022530555725097656, "__label__religion": 0.0004897117614746094, "__label__science_tech": 0.0679931640625, "__label__social_life": 9.441375732421876e-05, "__label__software": 0.01056671142578125, "__label__software_dev": 0.9130859375, "__label__sports_fitness": 0.00034689903259277344, "__label__transportation": 0.0007662773132324219, "__label__travel": 0.00024700164794921875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31373, 0.02335]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31373, 0.56737]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31373, 0.89351]], "google_gemma-3-12b-it_contains_pii": [[0, 3267, false], [3267, 7454, null], [7454, 10197, null], [10197, 15146, null], [15146, 20286, null], [20286, 24438, null], [24438, 27289, null], [27289, 31373, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3267, true], [3267, 7454, null], [7454, 10197, null], [10197, 15146, null], [15146, 20286, null], [20286, 24438, null], [24438, 27289, null], [27289, 31373, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31373, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31373, null]], "pdf_page_numbers": [[0, 3267, 1], [3267, 7454, 2], [7454, 10197, 3], [10197, 15146, 4], [15146, 20286, 5], [20286, 24438, 6], [24438, 27289, 7], [27289, 31373, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31373, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
23db4b8ed7f6f8f95b2070b87878fcac13d22bd2
CONTENTS

1. Template metaprogramming
2. Variadic template arguments
3. Smart pointers

Template metaprogramming

- Template metaprogramming (TMP) is a Turing-complete language
  - Every intuitively computable number can be computed
  - Meaning: we can basically compute anything
- Funny implication
  - There cannot be a correct C++ compiler!
- TMP is a bit esoteric
  - Some software companies do not allow it
  - However, there are some users
    - Boost.Hana – your standard library for metaprogramming
- Try to use `constexpr` (since C++11) instead of TMP
  - You will see why on the next few slides!

Template metaprogramming prerequisites

- `static` variables in struct/class
  - Are shared across all variables of that type
  - They belong to the type itself
  - Similar to global variables but with limited scope
- Great news
  - Types can store values
  - And with values we can perform computations
  - So we can perform computations with types
- Templates are processing types!
  - We just discovered metaprogramming
  - TMP uses types to express computations

```cpp
#include <iostream>

struct A {
    // 'value' exists only once across all
    // variables of type A
    static const int value = 100;
};

int main() {
    A a, b;
    std::cout << a.value << ' ';
    std::cout << b.value << ' ';
    std::cout << A::value << ' ';
    return 0;
}
```

Template metaprogramming

- Functional language
  - Compute using pattern matching and recursion
- Example: computing the power function

```cpp
#include <iostream>

template<int B, unsigned E>
struct power {
    static const int value = B * power<B, E - 1>::value;
};

template<int B>
struct power<B, 0> { // template specialization on the power template type
    static const int value = 1;
};

int main() {
    const int p = power<2, 10>::value;
    std::cout << p << ' ';
    return 0;
}
```

- In programming using templates
  - Types are used as functions
  - They can take
    1. types
    2. constant values
    3. references to functions
    as input parameters
  - They can store a
    1. type with `typedef`
    2. constant with `enum` or `static const`
- Template specialization directs control flow (pattern matching and recursion)
- In our example
  - templates get instantiated ...
  - until the base case is reached

Template metaprogramming

- The same `power` computation with `constexpr` (since C++11), in ordinary function syntax:
```cpp
#include <iostream>

constexpr int power(int base, unsigned exp) {
    return (exp == 0) ? 1 : base * power(base, exp - 1);
}

int main() {
    constexpr int p = power(2, 10);
    std::cout << p << ' ';
    return 0;
}
```

Template metaprogramming

- Even data structures can be realized
  - Remember the triple type from the exercises
  - C++'s tuple data type is implemented using template metaprogramming
  - Lists are also possible

Computing Euler's number at compile time using TMP

- Use this formula for $e$

\[ e = 1 + \frac{1}{1} + \frac{1}{1 \cdot 2} + \frac{1}{1 \cdot 2 \cdot 3} + \frac{1}{1 \cdot 2 \cdot 3 \cdot 4} + \cdots \]
\[ = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots \]
\[ = \sum_{k=0}^{\infty} \frac{1}{k!} \]

Computing Euler's number at compile time (TMP) I

```cpp
#include <iostream>

template<int N, int D>
struct Frac {
    const static int Num = N;
    const static int Den = D;
};

template<int X, typename F>
struct Mult {
    typedef Frac<X * F::Num, X * F::Den> value;
};

template<int X, int Y>
struct GCD {
    const static int value = GCD<Y, X % Y>::value;
};

template<int X>
struct GCD<X, 0> {
    const static int value = X;
};

template<typename F>
struct Simplify {
    const static int gcd = GCD<F::Num, F::Den>::value;
    typedef Frac<F::Num / gcd, F::Den / gcd> value;
};

template<typename X1, typename Y1>
struct SameBase {
    typedef typename Mult<Y1::Den, X1>::value X;
    typedef typename Mult<X1::Den, Y1>::value Y;
};

template<typename X, typename Y>
struct Sum {
    typedef SameBase<X, Y> B;
    const static int Num = B::X::Num + B::Y::Num;
    const static int Den = B::Y::Den;
    typedef typename Simplify<Frac<Num, Den>>::value value;
};
```

Computing Euler's number at compile time (TMP) II

```cpp
template<int N>
struct Fact {
    const static int value = N * Fact<N - 1>::value;
};

template<>
struct Fact<0> {
    const static int value = 1;
};

template<int N>
struct E {
    const static int Den = Fact<N>::value;
    typedef Frac<1, Den> term;
    typedef typename E<N - 1>::value next_term;
    typedef typename Sum<term, next_term>::value value;
};

template<>
struct E<0> {
    typedef Frac<1, 1> value;
};

int main() {
    typedef E<12>::value X;
    std::cout << "e = " << (1.0 * X::Num / X::Den) << '\n';
    std::cout << "e = " << X::Num << " / " << X::Den << '\n';
    return 0;
}
```

[Example taken from https://monoinfinito.wordpress.com/series/introduction-to-c-template-metaprogramming/]

Computing Euler's number at compile time (**constexpr**) III

- Using the same formula

```cpp
#include <iostream>

constexpr unsigned factorial(unsigned n) {
    return (n == 0) ? 1 : n * factorial(n - 1);
}

constexpr double euler(unsigned n) {
    double e = 1;
    for (unsigned i = 1; i <= n; ++i) {
        e += 1.0 / factorial(i);
    }
    return e;
}

int main() {
    constexpr double e = euler(12);
    std::cout << "Euler's number is: " << e << std::endl;
    return 0;
}
```

- Let's see what the compiler does
- Compile with:

```
clang++ -std=c++17 -Wall -emit-llvm -S -fno-discard-value-names euler.cpp
```

(obtain the LLVM compiler's internal representation)

Pros and cons of using template metaprogramming

- **Pros**
  - Evaluated at compile time
  - Higher abstraction possible
- **Cons**
  - Longer compile times
  - Hard to read / write
  - Functional style does not match C++
  - Not supported by development tools
  - Error messages usually make no sense
  - Heavily overused
  - "No type information"
- Use C++ `constexpr` instead!
  - Unless you have good reason to do otherwise

Variadic template arguments

- Summing arbitrarily many arguments:
```cpp
#include <iostream>

// With C++14: recursive unpacking
template<typename T>
T sum(T t) { return t; }

template<typename T, typename... Args>
T sum(T t, Args... args) { return t + sum(args...); }

int main() {
    int res = sum(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
    std::cout << res << '\n';
    return 0;
}
```

```cpp
// With C++17
template<typename... Args>
auto sum(Args&&... args) { return (args + ...); }
```

- Using C++17 fold expressions
- The compiler can oftentimes deduce the template parameter(s)

Variadic template arguments

- Another example: printing everything
  - Print arbitrarily many arguments of arbitrary type

```cpp
#include <iostream>
#include <string>

template<class T>
void print_everything(T t) {
    std::cout << t << ' ';
}

template<class T, class... Args>
void print_everything(T t, Args... args) {
    std::cout << t << ' ';
    print_everything(args...);
}

int main() {
    print_everything("Hello", 1, 2.333, std::string("World"));
    return 0;
}
```

- Using C++17 fold expressions (note that the fold must expand `args`, not a stray `t`):

```cpp
template<typename... Args>
void print(Args&&... args) {
    ((std::cout << args << ' '), ...);
}
```

Smart pointers

- Remember (raw) pointers

```cpp
int i = 42;
int *i_ptr = &i;
```

- Pointers are necessary for dynamic memory allocation

```cpp
int *dyn_int = new int;
delete dyn_int;

int *dyn_array = new int[12];
delete[] dyn_array;
```

- What was the problem here?
  - You probably will forget to use `delete` / `delete[]` at some point
  - Finding memory leaks is expensive
- Smart pointers (SPs) are safe wrappers for raw pointers

The ownership problem

```cpp
matrix *matrix_multiply(matrix *a, matrix *b) {
    matrix *c = new matrix(a->rows(), b->cols());
    // perform the computation c = a * b
    *c = (*a) * (*b);
    return c;
}
```

- Problem
  - Who frees matrix `c`, allocated in `matrix_multiply()`?
  - It has to be deleted at some point
- Problem in general: Who is responsible, who owns the resource(s)?
  - Who allocates memory and who frees it after usage?
    1. Caller allocates, caller frees
    2. Callee allocates, caller frees (as above)
    3. Callee allocates, callee frees (cf. `std::string`, `std::vector`)

Smart pointers

- Help with the ownership problem
  - SPs know who owns what resource
- SPs do the clean-up (`delete`) themselves
  - They automatically call the destructor if the managed resource has no owner anymore
  - "Is no longer used by anyone"
- How?
  - SPs call `delete` for the object they point to when their own destructor is called
  - Smart pointers know about ownership!
- That is not a real garbage collector
  - It is just reference counting – "the poor man's garbage collector"
  - "You only pay for counter variables and incrementing / decrementing counters"
- By the way: it is possible to leak resources in Java (although it has a garbage collector)

Smart pointers

- Three types of smart pointers exist
  - `std::unique_ptr` // for unique ownership
    - One user at a time
  - `std::shared_ptr` // for shared ownership
    - One or more users at a time
  - `std::weak_ptr` // for non-owned things
    - Does not own, but is allowed to use the underlying object
    - Not commonly used in practice
- SPs are implemented in the STL
  - All SPs are defined in `<memory>`
  - Use `#include <memory>`

std::unique_ptr

- `std::unique_ptr` behaves like an ordinary pointer
- Example

```cpp
#include <memory>

struct Data {
    double x;
    double y;
    Data(double x, double y) : x(x), y(y) {}
};

int main() {
    std::unique_ptr<Data> data_ptr(new Data(12.5, 14.8));
    return 0;
}
```

- Note that we do not use `delete` explicitly

std::unique_ptr

- Using a factory function

```cpp
#include <memory>

struct Data {
    double x;
    double y;
    Data(double x, double y) : x(x), y(y) {}
};

int main() {
    std::unique_ptr<Data> data_ptr = std::make_unique<Data>(12.5, 14.8);
    return 0;
}
```

- Caution: `std::make_unique()` was introduced in C++14
  - It was "kind of" forgotten in C++11
  - In C++11 just use `new`

std::unique_ptr

1. How to model a `std::unique_ptr`?
   - Make it a class providing a pointer to the resource
2. How to ensure `data_ptr` is the only user?
   - Disallow copying the smart pointer

```cpp
unique_ptr(const unique_ptr& up) = delete;
unique_ptr& operator=(const unique_ptr& up) = delete;
```

   - Now we can only have one user at a time
   - Attempts at copying result in a compiler error
3. How is `data_ptr` able to delete its resource?
   - It uses the destructor

```cpp
~unique_ptr() { delete resource; }
```

   - Now the resource is cleaned up for us
4. How to use it elsewhere without copying?
   - Use `std::move()`

How to use smart pointers and what about dereferencing?

- Operators are overloaded to make the smart pointer behave like a raw pointer
  - Dereference and obtain the managed resource
    - `T& operator*()`
  - Dereference and access a member of the managed resource
    - `T* operator->()`

Passing a `std::unique_ptr` into `setZero()` by value, without `std::move()`, does not compile - Why?

- `std::unique_ptr` cannot be copied
  - Because copying results in more than one user!
  - Here we would have two owners
    - `main()`
    - `setZero()`
- Move the data instead of copying it to have one user at a time
  - `std::move()` `data_ptr` into `setZero()`
  - and back from `setZero()` to `main()`

std::unique_ptr

```cpp
#include <iostream>
#include <memory>

struct Data {
    double x;
    double y;
    Data(double x, double y) : x(x), y(y) {}
};

std::unique_ptr<Data> setZero(std::unique_ptr<Data> d) {
    d->x = 0.0;
    d->y = 0.0;
    return d;
}

int main() {
    std::unique_ptr<Data> data_ptr(new Data(12.5, 14.8));
    std::unique_ptr<Data> zero = setZero(std::move(data_ptr));
    std::cout << zero->x << ' ';
    std::cout << zero->y << ' ';
    return 0;
}
```

- That works
- Caution:
  - Do not use `data_ptr` after you moved it somewhere else!
  - Undefined behavior
  - Segmentation fault
- The second `std::move()` is "hidden"
  - `setZero()` moves `d` back to `main()` into the variable `zero`
  - The compiler will complain if you forget `std::move()`

std::shared_ptr

- Allows multiple owners

```cpp
#include <iostream>
#include <memory>

class Data {
public:
    double x;
    double y;
    Data(double x, double y) : x(x), y(y) {}
};

std::shared_ptr<Data> setZero(std::shared_ptr<Data> d) {
    d->x = 0.0;
    d->y = 0.0;
    return d;
}

int main() {
    std::shared_ptr<Data> data_ptr(new Data(12.5, 14.8));
    std::shared_ptr<Data> zero = setZero(data_ptr);
    std::cout << zero->x << '\n';
    std::cout << zero->y << '\n';
    return 0;
}
```

- Keeps track of its owners using an internal counter
- `setZero()` can now be used without `std::move()`
  - It can be copied
  - We allow more than one user!

std::shared_ptr

- **Improved example**

```cpp
#include <iostream>
#include <memory>

struct Data {
    double x;
    double y;
    Data(double x, double y) : x(x), y(y) {}
};

std::shared_ptr<Data> setZero(std::shared_ptr<Data> d) {
    d->x = 0.0;
    d->y = 0.0;
    return d;
}

int main() {
    std::shared_ptr<Data> data_ptr = std::make_shared<Data>(12.5, 14.8);
    std::shared_ptr<Data> zero = setZero(data_ptr);
    std::cout << zero->x << '\n';
    std::cout << zero->y << '\n';
    return 0;
}
```

- **`std::make_shared()` makes a difference**
  - Performs only one allocation for data and reference counter
  - Data and reference counter sit in one block of memory
  - More efficient because of data locality

std::shared_ptr

1. How to model a `shared_ptr`?
   - Make it a class providing a pointer to a resource
2. How to store the number of users/references?
   - Store them in a counter
3. How to copy?
   - Just perform a **flat copy** of the handle (do not copy the managed resource)
   - Increment the reference counter on copy
4. When to delete the resource?
   - `~shared_ptr() { if (--refcounter == 0) delete resource; }`
   - (a toy implementation of these four steps is sketched at the end of this section)

std::weak_ptr

- Can hold a reference but is not an owner

```cpp
#include <iostream>
#include <memory>

std::weak_ptr<int> wp;

void f() {
    if (std::shared_ptr<int> spt = wp.lock()) {
        std::cout << *spt << '\n';
    } else {
        std::cout << "wp is expired" << '\n';
    }
}

int main() {
    auto sp = std::make_shared<int>(42);
    wp = sp;
    f();
    f();
    return 0;
}
```

- You rarely use it
- A `std::weak_ptr` must be copied into a `std::shared_ptr` (via `lock()`) in order to be used

A note on smart pointers

- Massively reduce the probability of introducing memory leaks
- Always prefer using smart pointers when managing resources
  - Prefer `std::unique_ptr` over `std::shared_ptr`, if possible
  - Custom deleters are possible
- Smart pointers behave like raw pointers
  - They just need a tiny little bit more memory in the case of `std::shared_ptr`
- Only fall back to raw pointers …
  - if you cannot afford a few bytes more per variable
  - if your platform does not provide an STL implementation
  - if you implement algorithms
  - if you have another good reason

`std::unique_ptr`, defined in header `<memory>`:

```cpp
template<class T, class Deleter = std::default_delete<T>>
class unique_ptr;

template<class T, class Deleter>
class unique_ptr<T[], Deleter>;
```

A note on dynamic memory allocation

- If you have to dynamically allocate objects
  - Use smart pointers (`std::unique_ptr`, `std::shared_ptr`)
- If you have to dynamically allocate an array of objects
  - Use `std::vector`
- Do not think there are no exceptions
  - Raw pointers are still needed
    - When implementing algorithms
    - If you are only a user and not an owner of a resource
    - ...
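To connect the four modeling questions for `std::shared_ptr` with concrete code, here is a deliberately minimal, non-thread-safe sketch of a reference-counting handle (our own illustration, not the STL implementation; the real `std::shared_ptr` additionally handles weak counts, atomic counters, custom deleters, and more):

```cpp
#include <iostream>

// Toy reference-counting smart pointer; illustration only.
template<class T>
class toy_shared_ptr {
    T* resource;        // (1) pointer to the managed resource
    long* refcounter;   // (2) counter shared by all owners

public:
    explicit toy_shared_ptr(T* r) : resource(r), refcounter(new long(1)) {}

    // (3) copying performs a flat copy of the handle and
    //     increments the shared counter
    toy_shared_ptr(const toy_shared_ptr& other)
        : resource(other.resource), refcounter(other.refcounter) {
        ++*refcounter;
    }

    toy_shared_ptr& operator=(const toy_shared_ptr&) = delete; // kept minimal

    // (4) the last owner deletes the resource
    ~toy_shared_ptr() {
        if (--*refcounter == 0) {
            delete resource;
            delete refcounter;
        }
    }

    T& operator*() const { return *resource; }
    T* operator->() const { return resource; }
};

int main() {
    toy_shared_ptr<int> p(new int(42));
    {
        toy_shared_ptr<int> q = p; // refcount: 2
        std::cout << *q << '\n';
    }                              // q dies, refcount: 1
    std::cout << *p << '\n';       // resource freed when p dies
    return 0;
}
```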
Status Quo

- You know very much about modern C++
  - Probably more than your older professors
- What next?
  - We have to deepen your knowledge
  - There will be a summer exercise sheet with 16 additional points
- Object-oriented programming (OOP)
- Threads and asynchronous tasks (running computations in parallel)
- High-performance computing (HPC) and what you should know about it
- Static analysis and job offers
- Introduction to the final project as well as hacks and miscellaneous
- A nice talk by Bjarne Stroustrup that recaps everything so far and more:
  - https://www.youtube.com/watch?v=86xWVb4XlyE

Recap

- Template metaprogramming
- Variadic template arguments
- Ownership – who is responsible for clean-up
- Smart pointers
  - `std::unique_ptr`
  - `std::shared_ptr`
  - `std::weak_ptr`
- Status quo

Thank you for your attention

Questions?
{"Source-Url": "https://www.hni.uni-paderborn.de/fileadmin/Fachgruppen/Softwaretechnik/Lehre/CPP_Programming/SS2021/cpp_programming_lecture_07.pdf", "len_cl100k_base": 4847, "olmocr-version": "0.1.50", "pdf-total-pages": 34, "total-fallback-pages": 0, "total-input-tokens": 60283, "total-output-tokens": 6550, "length": "2e12", "weborganizer": {"__label__adult": 0.0004475116729736328, "__label__art_design": 0.0003571510314941406, "__label__crime_law": 0.0002512931823730469, "__label__education_jobs": 0.0012254714965820312, "__label__entertainment": 6.937980651855469e-05, "__label__fashion_beauty": 0.00015366077423095703, "__label__finance_business": 0.00014102458953857422, "__label__food_dining": 0.0004100799560546875, "__label__games": 0.0005259513854980469, "__label__hardware": 0.000934600830078125, "__label__health": 0.0003063678741455078, "__label__history": 0.00020635128021240232, "__label__home_hobbies": 0.00012493133544921875, "__label__industrial": 0.0004191398620605469, "__label__literature": 0.0002529621124267578, "__label__politics": 0.0002334117889404297, "__label__religion": 0.0006194114685058594, "__label__science_tech": 0.00345611572265625, "__label__social_life": 0.00014317035675048828, "__label__software": 0.0025234222412109375, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.0003817081451416016, "__label__transportation": 0.00069427490234375, "__label__travel": 0.00025463104248046875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17152, 0.01079]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17152, 0.73703]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17152, 0.59517]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 87, false], [87, 612, null], [612, 1354, null], [1354, 1850, null], [1850, 2640, null], [2640, 3211, null], [3211, 3418, null], [3418, 3754, null], [3754, 4714, null], [4714, 5519, null], [5519, 6201, null], [6201, 6627, null], [6627, 7144, null], [7144, 7817, null], [7817, 8302, null], [8302, 8911, null], [8911, 9581, null], [9581, 10024, null], [10024, 10350, null], [10350, 10722, null], [10722, 11396, null], [11396, 11679, null], [11679, 12011, null], [12011, 12749, null], [12749, 13391, null], [13391, 14121, null], [14121, 14543, null], [14543, 15036, null], [15036, 15835, null], [15835, 16239, null], [16239, 16909, null], [16909, 17113, null], [17113, 17152, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 87, true], [87, 612, null], [612, 1354, null], [1354, 1850, null], [1850, 2640, null], [2640, 3211, null], [3211, 3418, null], [3418, 3754, null], [3754, 4714, null], [4714, 5519, null], [5519, 6201, null], [6201, 6627, null], [6627, 7144, null], [7144, 7817, null], [7817, 8302, null], [8302, 8911, null], [8911, 9581, null], [9581, 10024, null], [10024, 10350, null], [10350, 10722, null], [10722, 11396, null], [11396, 11679, null], [11679, 12011, null], [12011, 12749, null], [12749, 13391, null], [13391, 14121, null], [14121, 14543, null], [14543, 15036, null], [15036, 15835, null], [15835, 16239, null], [16239, 16909, null], [16909, 17113, null], [17113, 17152, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17152, null]], 
"google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17152, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17152, null]], "pdf_page_numbers": [[0, 0, 1], [0, 87, 2], [87, 612, 3], [612, 1354, 4], [1354, 1850, 5], [1850, 2640, 6], [2640, 3211, 7], [3211, 3418, 8], [3418, 3754, 9], [3754, 4714, 10], [4714, 5519, 11], [5519, 6201, 12], [6201, 6627, 13], [6627, 7144, 14], [7144, 7817, 15], [7817, 8302, 16], [8302, 8911, 17], [8911, 9581, 18], [9581, 10024, 19], [10024, 10350, 20], [10350, 10722, 21], [10722, 11396, 22], [11396, 11679, 23], [11679, 12011, 24], [12011, 12749, 25], [12749, 13391, 26], [13391, 14121, 27], [14121, 14543, 28], [14543, 15036, 29], [15036, 15835, 30], [15835, 16239, 31], [16239, 16909, 32], [16909, 17113, 33], [17113, 17152, 34]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17152, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
708f433f80b2d86e18afe75a562f840bda8909bf
UML Profile for Modeling System Observation

Mathias Funk, Piet van der Putten and Henk Corporaal

Abstract

Nowadays, interactive electronics products offer a huge amount of functionality to prospective customers, but often it is too large and complex to be grasped and used successfully. In this case, customers often abandon the struggle and return the products to the shop. Also, the variability in scope and features of a product is so large that an up-front specification becomes hard, if not impossible. To avoid the problem of an inadequate match between customer expectations and designer assumptions, new sources of product usage information have to be developed. One possibility is to integrate observation functionality into the products, continuously involving real users in the product development process. The integration of such functionality is an often overlooked challenge that should be tackled with an appropriate engineering methodology. This report presents ongoing work on a novel design-for-observation approach that supports early observation integration and enables cooperation with various information stakeholders. We show how observation can be embedded seamlessly in a model-driven development process using UML.

1 Introduction

Complex innovative electronic products often fail to satisfy customers' needs. Products are too complicated, and the inherent functionality is often not relevant to user needs and expectations. Increasing numbers of returned products support this [1]. On the other hand, nowadays products are hard to specify because of their high complexity and because of rapidly changing user demands.
Also, faster cycles in the product creation process cannot benefit from traditional feedback channels any more. While, a couple of years ago, a product technology could reach maturity within 10 to 20 cycles, thus allowing for gradual improvements, today's products have to accomplish the same within three cycles. Obviously, delivering a mature product in this setting becomes difficult. Accordingly, complex interactive products should be built for rapid changes. Products can be adapted to changing needs during development and even after release, in terms of firmware updates and the like. Still, targeting the product to a certain user base is a major problem in the industry [1]. One reason for this is the lack of usage information which is reliable enough to base the further development of the product on.

Our approach towards this problem is to build observation modules into products. These observation modules can be configured remotely and observe parts of the system, including user interaction, system performance, and potentially user satisfaction with the functionality the system provides. The products are given to selected key testers who use the products in their habitual environment - something which promises to yield more representative data than, for instance, usability labs. It is important to note that even during the experiment, that is, with the products residing at the tester's home, the configuration of observation can be changed according to recent findings of information stakeholders. The observation modules send collected data to a server on the Internet. There, the data is aggregated and made accessible for analysis purposes, which might be a real-time visualization of usage statistics or a more extensive analysis by means of process mining tools that capture patterns inside the data stream. This report concentrates on the integration of such observation facilities into products. We propose a model-driven technique to do this in an efficient and structured way which is tailored to current system development practices. Subsequently, we show how observation-related parts of the system can be modeled by means of a novel UML profile. The report ends with a conclusion and an outline of possible future steps.

2 Related work

The remote monitoring of products has been done before, ranging in scope from the monitoring of cars to building automation, computer programs, mobile devices, and websites [6, 7, 8, 11]. However, our research is different in two important aspects: First, in our approach we assume that information stakeholders are not willing to use complex programming paradigms to achieve the sought-after data; therefore we use a visual language to specify observation behavior in a domain-specific way. Second, the integration of observation functionality into the target system is described in a software engineering process, which is, in our opinion, necessary for widespread use. On the technical level we rely on the proven model-driven engineering approach, but also try to apply more agile modeling techniques like model interpretation [3], which allows for dynamic adaptation of runtime systems without the need for client compilation support. The modeling of observation systems is performed using the Unified Modeling Language (UML) [10] and, more precisely, its profile extension mechanism [2]. Besides that, there is also the area of large enterprise event reporting systems like WBEM [12].
These systems aggregate business information from various sources and present this data to stakeholders. The general approach of these systems is similar to that of an observation system; however, the proposed observation system aims not only at scalability but is also flexible and light-weight, since observed products might have limited performance resources or be in other ways incapable of costly computations. The processing of observed data is done similarly to event correlation systems [9], but generally uses fewer functions and less expressive power. Instead, the focus is on a visual language that enables non-programmers to specify observation according to their information requirements.

3 Product usage observation

Our approach separates the concerns of (i) product or system development and (ii) the specification of what to observe and how to present the collected data. In its application, product observation [5] accordingly involves two roles: the first role is a system developer, concerned with the integration of the observation module into the product. The second role, the information stakeholder, specifies observation in an easy and straightforward process. For information stakeholders, the proposed approach opens a dedicated information channel which provides potentially high-quality data. Even more importantly, the observation behavior can be adapted remotely to changing information needs. In cases of large observation systems or the integration of observation into a number of different products, the developer role mentioned above can be split up into two dedicated roles: the application developer and the platform developer. While the application developer is concerned with the integration of observation callbacks, hooks, into the product, the platform developer focuses on the observation system that is finally integrated into the product.

An observation system consists of three main layers: the authoring and analysis layer, the management and repository layer, and the observation layer (cf. Figure 1). On the first layer, it is specified what to observe and how to process and present the collected information. The management and repository layer plays mainly the role of a middleware between the specification of observation and observation itself; it transports observation specifications towards the observation layer and the observed data from products back to the authoring and analysis layer. The observation layer is the place where the actual observation takes place inside the products. From the development perspective, two interesting things happen here: first, hooks have to be integrated into the product. Those hooks represent places that can be observed, that is, they are proxies to the actual places in hardware and software where the actual data is generated. This encapsulation helps to maintain a consistent interface from product to observation module. Second, observation specifications coming from the authoring layer are transformed into executable runtime structures and represent the logic of observation in a certain scenario. The latter aspect is addressed in [4].

4 Modeling observation

Modeling of observation systems is done by means of the Unified Modeling Language (UML). This language is an industry-wide standard for modeling hardware and software systems. UML models are widely understood by developers in the community, and the modeling benefits from extensive tool support.
UML offers a light-weight extension mechanism, profiles, that is suitable for building domain-specific UML models. This means projecting domain-language semantics onto UML by technically extending it with a dedicated set of new concepts.

Figure 1. Technical observation system overview

Figure 2. Observation profile (package view)

Not only does a profile denote a list of concepts that are to be used to build a certain kind of system or system part; it also shows how the concepts are to be used. At the same time, a profile is a generalized description of possible systems. Since the variability in the domain of observation systems is high, ranging from enterprise-level systems with huge performance resources to embedded systems that can barely afford the necessary processor cycles for pre-processing collected data, the level of detail shown in the observation profile varies considerably. Parts like the interface between host system and observation component can be described in high detail, whereas large parts of the data collection support system and the whole authoring and analysis layer cannot be specified in general terms. Still, in these parts, the main concepts are provided and linked in order to find a compromise between concreteness and flexibility. Sub-profiles are shown here in an overview; most contain more elements that structure the big building blocks explained here.

The observation profile as shown in Figure 2 is basically divided into five sub-profiles that can be mapped to the three layers of an observation system (cf. Section 3). While the observation authoring and the observation presentation profile packages are entirely concerned with the authoring and analysis layer, the management and repository profile packages respectively belong to the management and repository layer. The remaining execution profile package is concerned with the observation layer. In the following, the five main sub-packages and their contents shall be described in detail, beginning with the authoring profile package.

5 Observation Authoring

The authoring profile package (cf. Figure 3) is divided into five sub-packages: besides the specification formalization and the authoring environment packages, which describe the tooling for observation definition, the simulation package serves as a basis for the testing of observation specifications and the semantics package provides means to define semantic concepts which can be attached to specification elements. Finally, the visualization formalization is part of this sub-package, since it is crucial to define metrics together with their presentation in the authoring phase within one environment in order to simplify the domain experts' tasks and leverage tools without too many context changes.

5.1 Specification Formalization

Since observations always follow a specific purpose, it is often necessary that observation specifications grasp the observation-related semantics of a certain domain. Therefore we do not propose a single language for all kinds of observation scenarios and application domains, but instead the essential building blocks for domain-specific observation languages. The observation specification formalization sub-profile contains those building blocks. The main element is the «Observation Specification», which serves as a container for «Specification Element»s. These elements can be either «Specification Block»s or «Connector»s, a notion which is close to the event-driven paradigm in specifying observation.
However, this can be mapped easily to other paradigms like rule-based or metrics-based definitions. Even for these different approaches towards observation specification, the sub-types of blocks, «Event Source», «Processing», «Concept», and «Export», are valid. These concepts are mandatory for the development and execution of an observation system. Finally, «Syntax Constraint»s are needed to define the exact syntax of the intended domain-specific language. Both «Specification Block» and «Connector» have constraints attached that serve as a basic grammar of an observation description. However, more sophisticated semantic constraints are posed by the editor, as denoted in the authoring environment sub-profile.

5.2 Authoring Environment

Inside an authoring environment, several editors might play a role in a concise definition of observation; however, only two of them shall be described here. Others, such as editors for surveys and for semantic concepts and ontologies, are beyond the scope of this report. The logical first step in the observation flow is to define what should be observed and how the collected data should be processed. Therefore a comprehensive editor, the «Specification Editor», is needed that provides observation building blocks in a directly manageable way. The editor provides means to construct specifications by using visual or textual «Specification Element Representation»s. Ways to manipulate the representations are abstracted as «Manipulation». Underlying details of this concept are auxiliary items that are beyond the scope of the observation profile. The editor has a strong dependency on the «Observation Specification Formalization» package, as that defines the actual syntax of the observation specification language, namely which elements to include and to allow in connection with other elements and which constraints to satisfy for a valid specification. That is, the editor incorporates validation of newly created observation specifications and thus ensures that all specifications comply with the syntax and other additional constraints of the domain-specific language.

The second editor is used to define an observation-specific platform model of the host system. This model defines the exact properties of the hooks in the observation system that can be accessed. The definition of this editor uses concepts similar to those of the specification editor. However, this editor results in a hook model, which is a description of the interface between observation system and host system. This is the view an observer gains of the host system. Naturally, this editor has a dependency relationship to the observation integration package, more specifically, the «hook» concept.

5.3 Simulator

Before distributing an observation specification to potentially hundreds of test machines, it is advisable to test or simulate it within the authoring environment. Moreover, this allows one to check briefly whether the data is collected, processed, and finally presented in the right way. The Observation Simulator package contains two concepts that connect to observation integration concepts and basically schedule and generate synthetic hook data, which is then processed as specified previously. The «SimulationScheduler» takes in an observation specification and executes it using the «ObserverSimulator».
The latter concept denotes a subsystem which generates hook data and triggers hooks accordingly, either by using a user interface that allows the authoring environment user to trigger hooks with user-defined data, or by incorporating hook trigger probabilities defined a priori. These facilities can be used to check the routing of observation events and the processing semantics, leading to a specification that collects the relevant data accurately.

5.4 Semantics

The semantics package aims at the information level of observed data, that is, the connection of atomic events to semantic contexts. By linking simple events together, insight into more abstract usage patterns can be gained. Basically, a «SemanticConcept» is stored within an «Ontology». Presumably, multiple ontologies can be used in the specification process to divide the semantic properties of the host system, experiment-related information, and other semantic content into respective ontologies. The linking between specification and semantics is expressed with a link between the ontology-based concept and the «Concept» specification block (cf. Figure 3).

5.5 Visualization Formalization

While the main focus of the authoring profile package is on the specification of observation, the visualization specification is also part of it. The authoring phase of the observation flow is naturally also the place for thinking about how the collected information is further used and presented. The visualization sub-profile targets the definition of metrics within the authoring environment that translate raw or preprocessed observation data into accessible information. These metrics can be leveraged within the analysis phase in the form of charts, data views, and other visual representations. The «Metric» concept denotes this idea. A metric is based on one or more axes, each of which represents a data filter. The «Axis» concept is abstract, and sub-concepts like «AggregateAxis», «SemanticAxis», and «TimeAxis» realize different filter domains. For instance, the metric "average time for all users using features A, B, or C over every single experiment day" uses three axes: a time axis for all days of the experiment, a semantic axis for features A, B, C, and an aggregate axis for the aggregate sum of time that the users have used the respective feature per day. This metric can be easily visualized, for instance, by means of a line chart. In the following section the concepts using such metrics are explained.

6 Observation Presentation - Visualizer

The visualization of observation data is only one possibility for further use of captured data; another is to export the data, for instance, to specialized analysis tools. Besides offline analysis techniques, the real-time visualization of collected data in the instant it reaches the server is crucial to oversee the study, to make sure all necessary data is collected, and to grasp a quick overview of data aggregates. The presentation profile package shown in Figure 4 contains at this point only the presentation sub-profile, which basically contains «View» sub-concepts. The «ChartView» is connected to a «Metric» concept and visualizes collected data according to the axes defined in this metric. An «ActivityView» is a specialized view for activity and status information of product instances («MachineStatusView»), but also regarding the distribution status of observation specifications. Since observation modules inside the products "pull" the specifications, a «SpecificationDistributionView» helps to keep track of the module updates.
Finally, there are views that show the raw collected data. The «DataView» concept denotes such views, which provide richer information to advanced users. Compared to charts, which show mainly an aggregate simplification of all collected data, the data views display data fields, semantic properties, as well as originator information.

7 Observation Management - Specification Distribution

Before observation specifications can be executed on the product instances, they have to get there first. The observation system provides a middleware that bridges the infrastructure gap between authoring environment and product instance. The specification distribution sub-profile (cf. Figure 5) denotes the necessary concepts for such a distribution middleware. Commonly realized as a server, it provides two interfaces: the «SpecificationPublisher» concept, which permits the authoring environment client to send a specification to the server, and the «SpecificationSubscriber», which provides access to new specifications for the observation module client, in this context called a «Machine». As soon as the latter interface is accessed by such a machine, the «ObservationDistribution» compiles an «ObservationPlan» that contains all valid and active «ObservationSpecification»s for this machine. The machines analyze the received plan and update themselves with new specifications in case of a changed plan. The subscriber interface contributes events to the activity and update status of certain machines, too.

8 Observation Repository

The repository layer of an observation system is responsible for collecting pre-processed data from local observation units (directly) and from proxy units that bridge a potential technical gap between local units and the repository server. The two main parts are the data storage sub-profile and the data access sub-profile (cf. Figure 6). While the first serves as a general data storage place for all sources of information, the latter handles access for the observation-integrated analysis components and external tools that require unfiltered data access.

8.1 Data Storage

The storage of observation data is technically one of the most demanding tasks in the whole system, as potentially a large mass of data items can be collected. However, the description of the observation-related parts is rather short, since flexibility regarding the database and database abstraction is crucial. «DataCollector» is an interface similar to the ones in the specification distribution sub-package. It allows the observation module client to access the data collection server and send collected data, which is subsequently stored in the database according to a «DataSchema». As mentioned before, database access has to be flexible, so a «DatabaseAccessor» serves as a proxy to a data storage implementation.

8.2 Data Access

Observation data access is used by data consumers such as the internal visualization on the analysis layer and external tools that might require different data formats and support only a few access methods. Basically, this sub-profile, shown in Figure 6, provides a «DataView» that contains a «Query» to access a «DataSource». This source directly links to the «DatabaseAccessor», which abstracts from the database. In addition, the «DataView» involves a specialized «ExportInterface», which enables external data consumers to access the collected observation data in the right way.
9 Observation Execution

The observation execution profile package is concerned with both the observer and observee parts of the observation system. In its application, it is almost entirely built into the product and accesses the host system via a hook interface to acquire data. Inside this profile package there are two sub-profiles that describe the architecture of observation and its integration into products. Another part deals with the observation behavior at runtime, and the fourth package provides structures for the observation data that is eventually collected.

9.1 Integration

In the specification of observation, hooks are used as information sources. However, hooks are only abstract places where information can be perceived. Therefore, on the system level, a «Hook» is realized as a proxy element that encapsulates the combination of an «Observable» element and its observation-related properties, such as «Characteristic» and «Constraint». Characteristics cover timing properties and data types; constraints describe runtime limitations of the observable. This meta-information can be used to build predictable observation modules, or to simulate observation behavior prior to implementation or deployment. Hooks and observables are basically two different views on the same entity. From the hook side, only observation-related properties are shown and other information, e.g. about the implementation, is hidden - and vice versa from the observable side. Both stereotypes are aggregated in respective stereotypes, «HookModel» and «Observee». While the hook model is simply a collection of hooks without further meaning, the observee denotes a system part which contains «Observable» elements but is itself not directly observable. This stereotype can be used early in the development to annotate system parts that should be observed, and can be refined later to actual «Observable»s. Another stereotype of the integration profile is the «ObservationContext», which represents contextual information belonging to an observable or observee. This information can (i) determine how the observable behaves, generates data, and responds to triggering, and (ii) be part of the raw observation data that is generated by the observable. All context information depends on the «ObservationScenario». Such a scenario is a usage setting, e.g. an experiment setup, and contains information about the environment the product is used in, as well as the user who interacts with the product.

To further explain the relationship between hooks and observables, we will have a look at interaction patterns in the observation domain. The nature of hooks, being either self-triggering, externally triggered, or both, suggests basically two interaction patterns. The self-triggering and the externally triggered patterns are explained in Figure 8, using the aforementioned stereotypes «Observable» and «Hook». The first pattern is suitable for hooks which are self-triggered, that is, the observable system structure autonomously triggers the respective hook object whenever new information is perceived and should be fed into the observation system. In this pattern the responsibility for taking action lies on the observee's side. The second pattern deals with hooks that have to be triggered externally in order to produce data. In this pattern, the hook object is linked to the observable structure, e.g. in the form of a public operation, and can trigger the observable (both patterns are also sketched in code below).
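As an illustration of the two patterns: the report stays on the UML level, so the following short Java sketch is our own hypothetical rendering of the stereotypes, not part of the profile.

```java
// Hypothetical sketch of the two hook interaction patterns; all names ours.

// Hook-side interface of the observation module.
interface Hook {
    void trigger(Object rawData); // feed raw data into the observation system
}

// Pattern 1: self-triggering. The observable structure pushes data to its
// hook whenever something observable happens (the observee takes the action).
class VolumeControl {
    private final Hook hook;

    VolumeControl(Hook hook) { this.hook = hook; }

    void setVolume(int level) {
        // ... actual product behavior ...
        hook.trigger("volume=" + level);
    }
}

// Pattern 2: externally triggered. The hook is linked to a public operation
// of the observable and pulls data when the observation logic asks for it.
interface Observable {
    Object sample(); // e.g. read out a status value
}

class PollingHook implements Hook {
    private final Observable target;

    PollingHook(Observable target) { this.target = target; }

    @Override
    public void trigger(Object ignored) {
        Object data = target.sample(); // the hook triggers the observable
        // ... hand 'data' over to the observation component ...
    }
}
```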
In the rare case that an observable has both characteristics, a combination of those patterns is also possible. Hooks, as their proxy nature suggests, connect observables to the respective interface in the observation module, the «HookInterface», as shown in Figure 7. The observable element delivers raw data to the interface, and inside the module this interface presents the data to the observation component. The component subsequently processes the raw data according to the specified observation behavior. Obviously, the resulting data is determined to a large extent by the observation system input coming from hooks, hence the strong connection to the «HookData» stereotype (cf. Figure 7).

9.2 Architecture

The «HookInterface» is one of the main parts of the observation architecture profile shown in Figure 9. The interface stereotype is a part of the «ObservationModule», namely a sub-stereotype of «Observation-SubModule». Other sub-modules are concerned with the communication between the observation and repository layers (cf. Figure 1).

Figure 9. Observation architecture overview

Also, an observation module could possibly have a user interface in order to inform the user about the observation or to collect subjective feedback in the form of questionnaires. Two sub-modules deal with the runtime behavior of specified observation, the observation component: the «Configurator» receives an observation specification and translates it into an executable «ObservationComponent», which is run by the «Scheduler». The latter module is responsible for the triggering of hooks and the synchronization of concurrent events.

9.3 Behavior

The behavior sub-profile (cf. Figure 10) describes the dynamic runtime structures that form an «ObservationSpecification» and that are executed within the observation module inside the host system. These building blocks are dynamically created and linked, and enable a flexible adaptation of the observation logic to changed requirements. Corresponding elements are denoted in an observation specification, which is a model of the observation behavior that is subsequently created and executed. The structure of the behavior sub-profile is in this sense similar to the specification formalization sub-profile structure. The basic functionality a dynamic building block needs is expressed in the «FlowNode» concept: event routing behavior. «FlowNode»s can be connected via «FlowRoute»s. While incoming routes of a node are independent and can even be assigned a name for a parameter, outgoing routes are all triggered once the node fires an event. Concrete realizations of the abstract «FlowNode» concept are: (1) «ProcessingNode»s, which are capable of processing the data contained in incoming events; (2) «ExportNode»s, which cache, serialize, and transmit incoming events to the «Communicator» (cf. Figure 9); (3) «SemanticNode»s, which attach semantic concepts to an incoming and instantly outgoing event; and (4) «Hook»s, which are the entry points for data and events in the «ObservationComponent». According to the different uses of the hook, there are several types of hooks: «PlatformHook»s, which connect to the host system (that is, the «Hook» concept) and acquire data from it; «SystemHook»s, which capture events generated inside the observation system, for instance status messages and error codes; and finally «SemanticHook»s, which capture semantic events. These events are triggered whenever a semantic concept is attached to an event in the «ObservationComponent».
9.4 Data

Being almost intangible within the observation system, but becoming the most stable artifact of observation once it reaches the data collection server, the structure and properties of observation data are denoted in the data sub-profile (cf. Figure 11). When it is generated or captured in its raw form in a «Hook», the «HookData» can have two modalities, «HookEventData» and «HookStatusData»: the former expresses an event which occurs at a distinct time point, the latter a sample taken from a continuous data stream. As soon as this data is processed in the «ObservationComponent» it becomes «ComplexEventData», connected to an «Originator» and potentially to «SemanticConcept»s. However, all types of «ObservationData» can be exported and lead to a valid data trace.

10 Conclusion & Future Work

Companies experience a lack of reliable usage information about their products. Our approach to this problem is to build observation modules into products that are capable of providing reliable and structured product usage information directly from the source. Observation integration can have a strong impact on the development of innovative products, hence the need to do it efficiently within an engineering methodology. This report introduces a model-driven technique to integrate observation functionality into products. The profile introduced here simplifies the development tasks necessary for observation integration, thus reducing the integration effort. It helps to automate the process of observation specification and data collection. However, the issue of automation remains partly future work, as the provision of even better tools for observation integration is crucial. Furthermore, we see observation integration not merely as a parallel development task, but as a potential driving force behind a new development paradigm: "design for observation". This treats observation as a first-class development aspect and helps to provide a solid basis for extensive but meaningful product information, presented to information stakeholders in a comprehensive way.

Acknowledgments

This work is being carried out as part of the "Managing Soft-Reliability in Strongly Innovative Product Creation Processes" project, sponsored by the Dutch Ministry of Economic Affairs under the IOP-IPCR program.
{"Source-Url": "https://pure.tue.nl/ws/files/3378732/710922.pdf", "len_cl100k_base": 6182, "olmocr-version": "0.1.50", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 36632, "total-output-tokens": 7550, "length": "2e12", "weborganizer": {"__label__adult": 0.0003147125244140625, "__label__art_design": 0.0006346702575683594, "__label__crime_law": 0.00027632713317871094, "__label__education_jobs": 0.0008168220520019531, "__label__entertainment": 7.784366607666016e-05, "__label__fashion_beauty": 0.00016129016876220703, "__label__finance_business": 0.0003612041473388672, "__label__food_dining": 0.0002872943878173828, "__label__games": 0.0005631446838378906, "__label__hardware": 0.001983642578125, "__label__health": 0.000400543212890625, "__label__history": 0.00033664703369140625, "__label__home_hobbies": 9.47117805480957e-05, "__label__industrial": 0.0007252693176269531, "__label__literature": 0.0002789497375488281, "__label__politics": 0.0002264976501464844, "__label__religion": 0.0004718303680419922, "__label__science_tech": 0.07830810546875, "__label__social_life": 6.747245788574219e-05, "__label__software": 0.00919342041015625, "__label__software_dev": 0.9033203125, "__label__sports_fitness": 0.000240325927734375, "__label__transportation": 0.0006327629089355469, "__label__travel": 0.0001939535140991211}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36594, 0.0269]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36594, 0.47155]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36594, 0.89216]], "google_gemma-3-12b-it_contains_pii": [[0, 2087, false], [2087, 2185, null], [2185, 2185, null], [2185, 5770, null], [5770, 10875, null], [10875, 10969, null], [10969, 16034, null], [16034, 18454, null], [18454, 22574, null], [22574, 25292, null], [25292, 28998, null], [28998, 32546, null], [32546, 35966, null], [35966, 36594, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2087, true], [2087, 2185, null], [2185, 2185, null], [2185, 5770, null], [5770, 10875, null], [10875, 10969, null], [10969, 16034, null], [16034, 18454, null], [18454, 22574, null], [22574, 25292, null], [25292, 28998, null], [28998, 32546, null], [32546, 35966, null], [35966, 36594, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36594, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36594, null]], "pdf_page_numbers": [[0, 2087, 1], [2087, 2185, 2], [2185, 2185, 3], [2185, 5770, 4], [5770, 10875, 5], [10875, 10969, 6], [10969, 16034, 7], [16034, 18454, 8], [18454, 22574, 9], [22574, 25292, 10], [25292, 28998, 11], [28998, 32546, 12], [32546, 35966, 13], [35966, 36594, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36594, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
2ffc242c736418d01dca37e9077e6ad33c634517
Proving array properties using data abstraction

Julien Braine, Laure Gonnord. Proving array properties using data abstraction. Numerical and Symbolic Abstract Domains (NSAD), Nov 2020, Virtual, United States. hal-02948081v2 (https://hal.archives-ouvertes.fr/hal-02948081v2).

Abstract

This paper presents a framework to abstract data structures within Horn clauses that allows abstractions to be easily expressed, compared, composed and implemented. These abstractions introduce new quantifiers that we eliminate with quantifier elimination techniques. We study the case of arrays, and our experimental evaluation shows promising results on classical array programs.

CCS Concepts: • Theory of computation → Verification by model checking; Abstraction; Type structures.

Keywords: abstraction, data structures, Horn clauses, array properties

1 Introduction

Static analysis of programs containing non-bounded data structures is a challenging problem, as most interesting properties require quantifiers: even stating that all elements of an array are equal to 0 requires one. A common way to reduce the complexity of such problems is abstraction, using program transformation [13] or abstract interpretation [8]. In this paper, we suggest a new technique, which we name data abstraction, that takes advantage of the fact that we are abstracting data structures. Inspired by previous work on arrays [4, 14], we combine quantifier instantiation with abstract interpretation. We obtain a transformation from Horn clauses to Horn clauses, a format with clear semantics to which programs with assertions can be reduced. The goal is to provide a framework in which abstractions of data structures can be easily expressed, compared, composed and implemented, and to decorrelate them from the back-end solving.

Example 1 will be our motivating and running example. Proving this program is challenging as it mixes the difficulty of finding universally quantified invariants with modulo arithmetic. In Section 2, we introduce Horn clauses, the transformation of our running example, and Galois connections; in Section 3, we formally give our data abstraction technique; in Section 4, we give an instance of such an abstraction on arrays; and in Section 5 we give the experimental results of our tool and compare it with the Vaphor tool [14].

Example 1. Running example: the following program initializes an array to even values, then increases all values by one and checks that all values are odd. We wish to prove that the assertion is verified.

```
for (k = 0; k < N; k++) // Program point For1
  a[k] = rand() * 2;
for (k = 0; k < N; k++) // Program point For2
  a[k] = a[k] + 1;
for (k = 0; k < N; k++) // Program point For3
  assert(a[k] % 2 == 1);
```

2 Preliminaries

2.1 Horn Clauses

A Horn clause is a logical formula over free variables and predicates. The only constraint is that Horn clauses are "increasing", that is, there can be at most one positive predicate in the clause.
Horn clauses are written in the following form: $P_1(\text{expr}_1) \land \ldots \land P_n(\text{expr}_n) \land \phi \rightarrow P'(\text{expr}')$ where:
- $\text{expr}_1, \ldots, \text{expr}_n, \phi, \text{expr}'$ are expressions possibly containing free variables;
- $P_1, \ldots, P_n$ are the "negative" predicates;
- $P'$ is the positive predicate, or some predicate-free expression.

The semantics of such a Horn clause is the following: $\forall \text{vars},\ P_1(\text{expr}_1) \land \ldots \land P_n(\text{expr}_n) \land \phi \Rightarrow P'(\text{expr}')$, where $\text{vars}$ are the free variables of the expressions. We say a set of Horn clauses is satisfiable if and only if there exist values (sets) for each predicate that satisfy all the Horn clauses. Programs with assertions can be transformed into Horn clauses using tools such as SeaHorn [2] or JayHorn [12], and Example 2 gives the transformation of Example 1 into Horn clauses by creating a predicate per program point and expressing the constraints on each program point using clauses.

Example 2. Running example in Horn clauses, where all predicates $\text{For}_i$ have arity 3 (1 array and 2 integer parameters). Clause (4) will be used throughout the paper.

\[\text{For1}(a, N, 0)\] (1)
\[\text{For1}(a, N, k) \land k < N \rightarrow \text{For1}(a[k \leftarrow 2r], N, k + 1)\] (2)
\[\text{For1}(a, N, k) \land k \geq N \rightarrow \text{For2}(a, N, 0)\] (3)
\[\text{For2}(a, N, k) \land k < N \rightarrow \text{For2}(a[k \leftarrow a[k] + 1], N, k + 1)\] (4)
\[\text{For2}(a, N, k) \land k \geq N \rightarrow \text{For3}(a, N, 0)\] (5)
\[\text{For3}(a, N, k) \land k < N \land a[k] \,\%\, 2 \neq 1 \rightarrow \text{false}\] (6)
\[\text{For3}(a, N, k) \land k < N \rightarrow \text{For3}(a, N, k + 1)\] (7)

2.2 Galois Connection

A Galois connection [6] is a way of expressing a general abstraction. In our case, we abstract predicates (i.e. sets of possible values) on a concrete set $\mathcal{C}$ to an abstract set $\mathcal{A}$. A Galois connection is defined by:
- $\alpha : \mathcal{P}(\mathcal{C}) \rightarrow \mathcal{P}(\mathcal{A})$, the abstraction of a predicate;
- $\gamma : \mathcal{P}(\mathcal{A}) \rightarrow \mathcal{P}(\mathcal{C})$, which gives the concrete values an abstracted predicate represents.

Two properties are required:
1. $\forall S,\ S \subseteq \gamma(\alpha(S))$ for soundness.
2. $\forall S^\#,\ \alpha(\gamma(S^\#)) \subseteq S^\#$ for minimal precision loss.

3 Data Abstraction

In this section, we present our main contribution: data abstraction. We abstract the Horn clauses, and then show how to remove the added quantifiers.

3.1 Data Abstraction in Horn Clauses

We propose the notion of data abstractions, whose goal is to represent complex elements (such as arrays) by sets of simpler values (such as integers).

Definition 1. Data abstraction $(\sigma, F_{\sigma})$. Let $\mathcal{C}$ and $\mathcal{A}$ be sets. A data abstraction is a couple $(\sigma, F_{\sigma})$ where $\sigma$ is a function from $\mathcal{C}$ to $\mathcal{P}(\mathcal{A})$ and $F_{\sigma}$ is a formula encoding its inclusion relation: $F_{\sigma}(a^\#, a) \equiv a^\# \in \sigma(a)$. It defines a Galois connection from $\mathcal{P}(\mathcal{C})$ to $\mathcal{P}(\mathcal{A})$ by:
- $\alpha_{\sigma}(S \subseteq \mathcal{C}) = \bigcup_{a \in S} \sigma(a)$
- $\gamma_{\sigma}(S^\# \subseteq \mathcal{A}) = \{a \in \mathcal{C} \mid \sigma(a) \subseteq S^\#\}$

Example 3. $\sigma_{\text{Cell}_1}$ abstraction of an array: abstracting an array by the set of its cells, i.e. couples of index and value. $\sigma_{\text{Cell}_1}(a) = \{(i, a[i])\}$ and $F_{\sigma_{\text{Cell}_1}}((i, v), a) \equiv v = a[i]$.

Remark: abstracting a single array does not lose information, but abstracting a set of arrays (using $\alpha_{\sigma}$) does.
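To make the Remark concrete, here is a small worked instance of the precision loss (our example, with arrays restricted to the index set $\{0, 1\}$ so that an array can be written as a pair of values):

\[S = \{[0, 1],\ [1, 0]\} \quad\Rightarrow\quad \alpha_{\sigma_{\text{Cell}_1}}(S) = \{(0, 0), (1, 1)\} \cup \{(0, 1), (1, 0)\}\]
\[\gamma_{\sigma_{\text{Cell}_1}}(\alpha_{\sigma_{\text{Cell}_1}}(S)) = \{[0, 0], [0, 1], [1, 0], [1, 1]\} \supsetneq S\]

The abstraction remembers which values occur at which indices, but forgets which index-value pairs came from the same array.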
In Algorithm 1 we give the implementation of such abstractions in Horn clauses, and Example 4 unravels its execution. The key idea consists in replacing a predicate application $P(\text{expr})$, that is, $\text{expr} \in P$, by $\text{expr} \in \gamma(P^\#)$ for a new predicate $P^\#$ which is created to represent $\alpha(P)$. Soundness of Algorithm 1, that is, that if a Horn problem $H$ has a counterexample then so does the result of Algorithm 1 on $H$, follows from $P \subseteq \gamma(\alpha(P))$.

Algorithm 1. Abstracting in Horn clauses.
Input:
1. $H$: Horn problem.
2. $P$: predicate to abstract.
3. $P^\#$: unused predicate.
4. $F_{\sigma}$.
Computation: for each clause $C$ of $H$, for each $P(\text{expr})$ in $C$, replace $P(\text{expr})$ by $\forall a^\#,\ F_{\sigma}(a^\#, \text{expr}) \rightarrow P^\#(a^\#)$, where $a^\#$ is a new unused variable.

Example 4. Execution of Algorithm 1 with $\sigma_{\text{Cell}_1}$.
Input:
1. The clauses of Example 2.
2. The predicate $\text{For2}$.
3. The fresh predicate $\text{For2}^\#$.
4. $F_{\sigma_{\text{Cell}_1}}$, applied to the array argument $a$.
Output: Consider Clause (4) from Example 2. After applying Algorithm 1 and naming the introduced quantified variables $(i^\#, v^\#)$ and $(i'^\#, v'^\#)$, we obtain:

\[(\forall i^\#, v^\#:\ v^\# = a[i^\#] \rightarrow \text{For2}^\#(i^\#, v^\#, N, k)) \land k < N \rightarrow (\forall i'^\#, v'^\#:\ v'^\# = a[k \leftarrow a[k] + 1][i'^\#] \rightarrow \text{For2}^\#(i'^\#, v'^\#, N, k + 1))\]

In this section, we have given a general scheme to abstract Horn problems with a data abstraction; however, it introduces new quantifiers $(\forall a^\#)$ that solvers [7, 10] have trouble handling.

3.2 Removing the Introduced Quantifiers: Instantiation

Our abstraction has introduced new quantifiers in our Horn clauses. Here, we give an algorithm to remove those quantifiers using a technique called quantifier instantiation [4], which consists in replacing a universal quantifier, i.e. a possibly infinite conjunction, by a conjunction over some finite set $S$. In other words, an expression of the form $\forall q,\ \text{expr}(q)$ is transformed into an expression of the form $\bigwedge_{q \in S} \text{expr}(q)$. Algorithm 2 removes the quantifiers in two steps:
1. Remove useless quantifiers: $\text{expr} \rightarrow (\forall q,\ \text{expr}')$ with $q \notin \text{expr}$ becomes $\text{expr} \rightarrow \text{expr}'$, with $q$ now a free variable of the clause.
2. Instantiate the other $\forall$ thanks to a heuristic $\text{insts}$.

Algorithm 2. Instantiation algorithm.
Input:
- $C$, a clause (after abstraction).
- $\text{insts}$, a function that, given a quantifier of $C$ and the abstracted value $\text{expr}$, returns an instantiation set $S$.
Computation:
1. Remove universal quantifiers in the head of the clause.
2. For each instance of $\forall a^\#,\ F_{\sigma}(a^\#, e) \rightarrow P^\#(a^\#)$, replace it by $\bigwedge_{a^\# \in \text{insts}(a^\#, e)} F_{\sigma}(a^\#, e) \rightarrow P^\#(a^\#)$.

Example 5. Example of instantiation from Example 4.

\[(\forall i^\#, v^\#:\ v^\# = a[i^\#] \rightarrow \text{For2}^\#(i^\#, v^\#, N, k)) \land k < N \rightarrow (\forall i'^\#, v'^\#:\ v'^\# = a[k \leftarrow a[k] + 1][i'^\#] \rightarrow \text{For2}^\#(i'^\#, v'^\#, N, k + 1))\]
After the first step (i.e. removing the head quantifiers $\forall i'^\#, v'^\#$), we obtain:

\[(\forall i^\#, v^\#:\ v^\# = a[i^\#] \rightarrow \text{For2}^\#(i^\#, v^\#, N, k)) \land k < N \rightarrow (v'^\# = a[k \leftarrow a[k] + 1][i'^\#] \rightarrow \text{For2}^\#(i'^\#, v'^\#, N, k + 1))\]

Using $\text{insts}((i^\#, v^\#), a) = \{(k, a[k]), (i'^\#, a[i'^\#])\}$ (this choice is explained in Section 4.2) and slight simplifications, we get:

\[\text{For2}^\#(k, a[k], N, k) \land \text{For2}^\#(i'^\#, a[i'^\#], N, k) \land k < N \rightarrow \text{For2}^\#(i'^\#, a[k \leftarrow a[k] + 1][i'^\#], N, k + 1)\]

which is a clause without quantifiers, equivalent to the clause before instantiation thanks to our good choice of $\text{insts}$. However, for any choice of $\text{insts}$, soundness of Algorithm 2 is ensured, as step 2 only happens on premises of the clause (i.e. negative predicates) and $\forall q,\ \text{expr}(q) \rightarrow \bigwedge_{q \in S} \text{expr}(q)$.

In this section, we have given a data abstraction technique that, from an abstraction formula $F_{\sigma}$ and an instantiation heuristic $\text{insts}$, transforms predicates on variables of the concrete domain into predicates over the abstract domain. The abstraction is always sound, and its precision depends on $\text{insts}$. We show in Section 5, using array abstraction, that the precision loss does not impact our experiments.

4 Abstracting Arrays: Cell Abstractions

To illustrate our data abstraction technique, we show how to handle the cell abstractions of Monniaux and Gonnord [14].

4.1 Cell Abstractions

Cell abstractions consist in viewing arrays through (a finite number of) their cells. However, instead of abstracting arrays by specific cells such as the first, the last or the second cell, we use parametric cells (i.e. cells with a non-fixed index). $\text{Cell}_1$ of Example 3 corresponds to one parametric cell.

Definition 2. Cell abstractions $\text{Cell}_n$:
\[\sigma_{\text{Cell}_n}(a) = \{(i_1, a[i_1], \ldots, i_n, a[i_n])\} \quad\text{and}\quad F_{\sigma_{\text{Cell}_n}}((i_1, v_1, \ldots, i_n, v_n), a) \equiv v_1 = a[i_1] \land \ldots \land v_n = a[i_n]\]

Cell abstractions are of great interest because of their expressivity: many interesting concrete properties can be expressed as abstract properties. Furthermore, our data abstraction framework allows us to formalize other existing array abstractions as compositions of cell abstractions. Example 6 gives properties expressible by cell abstractions (the last row is derived in detail just after the table), and Example 7 shows how to construct some common abstractions from cell abstractions.

Example 6. Properties expressed with cell abstractions. For each concrete property in the table, we give a cell abstraction that allows to capture it with an abstract property.

<table>
<thead>
<tr><th>Concrete</th><th>Abs</th><th>Abstract property</th></tr>
</thead>
<tbody>
<tr><td>\(a[0] = 0\)</td><td>\(\text{Cell}_1\)</td><td>\(i_1 = 0 \Rightarrow v_1 = 0\)</td></tr>
<tr><td>\(a[n] = 0\)</td><td>\(\text{Cell}_1\)</td><td>\(i_1 = n \Rightarrow v_1 = 0\)</td></tr>
<tr><td>\(a[0] = a[n]\)</td><td>\(\text{Cell}_2\)</td><td>\(i_1 = 0 \land i_2 = n \Rightarrow v_1 = v_2\)</td></tr>
<tr><td>\(\forall i.\ a[i] = 0\)</td><td>\(\text{Cell}_1\)</td><td>\(v_1 = 0\)</td></tr>
<tr><td>\(\forall i.\ a[i] = i^2\)</td><td>\(\text{Cell}_1\)</td><td>\(v_1 = i_1^2\)</td></tr>
<tr><td>\(\forall i.\ a[n] \geq a[i]\)</td><td>\(\text{Cell}_2\)</td><td>\(i_2 = n \Rightarrow v_2 \geq v_1\)</td></tr>
</tbody>
</table>
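To see why such abstract properties capture the concrete ones, here is a short derivation (ours, unfolding the definition of $\gamma_{\sigma}$ from Definition 1) for the last row of the table:

\[\gamma_{\sigma_{\text{Cell}_2}}(\{(i_1, v_1, i_2, v_2) \mid i_2 = n \Rightarrow v_2 \geq v_1\}) = \{a \mid \forall i_1, i_2,\ i_2 = n \Rightarrow a[i_2] \geq a[i_1]\} = \{a \mid \forall i,\ a[n] \geq a[i]\}\]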
Example 7. Array abstractions from cell abstractions.

Array smashing: $\sigma_{\text{smash}}(a) = \{a[i]\}$. This abstraction keeps the set of values reached but loses all information linking indices and values. It is the composition of $\text{Cell}_1$ and "forgetting $i_1$", that is, the data abstraction $\sigma_{\text{forget}}((i_1, v_1)) = \{v_1\}$.

Array slicing: there are several variations, and for readability we present the one that corresponds to "smashing each slice", picking the slices $]-\infty, i[$, $[i, i]$ and $]i, +\infty[$:
\[\sigma_{\text{slice}}(a) = \{(a[j_1], a[i], a[j_3]) \mid j_1 < i \land j_3 > i\}\]
It is the composition of $\text{Cell}_3$ and of knowing whether $i_1, i_2, i_3$ lie in their slices: $\sigma_m(i_1, i_2, i_3) = \{i_1 < i \land i_2 = i \land i_3 > i\}$. This creates a Boolean which, after simplification, can be removed.

4.2 Instantiating Cell Abstractions

The data abstraction framework requires an instantiation heuristic $\text{insts}$. Inspired by [5, 14], we create the heuristics $\text{instsCell}_n$ of Definition 3. The idea is that the relevant indices for clause instantiation are those that are read in the clause; this is how the instantiation set in Example 5 was constructed.

Definition 3. Instantiation heuristic for $\text{Cell}_n$. Let $C$ be a clause after step 1 of Algorithm 2.
\[\text{instsCell}_n(q, \text{expr}) = \{(e, \text{expr}[e]) \mid \exists \text{expr}',\ \text{expr}'[e] \text{ appears in } C\}^n \text{ if this set is non-empty, and } \{(c, \text{expr}[c])\}^n \text{ with } c \text{ any value otherwise.}\]

4.3 Entirely Removing Arrays: Ackermanisation [1]

Motivation. Although predicates do not have arguments of array types after abstraction, clauses still use arrays to express the transition relation. Removing those arrays is a theoretically solved issue, since we no longer have any quantifiers in our clauses [5]. However, we experimentally noticed that doing so in our preprocessing improves the solver's results.

Technique. The axiom $a[i \leftarrow v][j] \equiv \text{ite}(i = j, v, a[j])$ is applied to remove array writes (ite denotes if-then-else). Then, for each index $\text{expr}$ at which an array $a$ is read, we create a fresh variable $v_{\text{expr}}$ and replace $a[\text{expr}]$ by $v_{\text{expr}}$ in the clause; finally, for each pair of indices $\text{expr}_1$, $\text{expr}_2$ so added, we generate the constraint $\text{expr}_1 = \text{expr}_2 \rightarrow v_{\text{expr}_1} = v_{\text{expr}_2}$.

Example 8. Ackermanisation of arrays. Removing array writes on the running clause from Example 5 yields:
\[\text{For2}^\#(k, a[k], N, k) \land \text{For2}^\#(i'^\#, a[i'^\#], N, k) \land k < N \rightarrow \text{For2}^\#(i'^\#, \text{ite}(k = i'^\#, a[k] + 1, a[i'^\#]), N, k + 1)\]
and removing array reads, with $a_{i'^\#}, a_k$ new variables:
\[\text{For2}^\#(k, a_k, N, k) \land \text{For2}^\#(i'^\#, a_{i'^\#}, N, k) \land k < N \land (k = i'^\# \rightarrow a_k = a_{i'^\#}) \rightarrow \text{For2}^\#(i'^\#, \text{ite}(k = i'^\#, a_k + 1, a_{i'^\#}), N, k + 1)\]
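As a small illustration of the read-elimination step (our sketch, not the paper's tool; the clause structure is left abstract and read indices are represented as plain strings), the following C++ program generates the fresh variables and the pairwise functional-consistency constraints:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Sketch of ackermanisation's read elimination: each read a[e] gets a
// fresh variable a_e, and every pair of reads yields a consistency
// constraint e1 = e2 -> a_e1 = a_e2.
int main() {
  std::vector<std::string> reads = {"k", "i#"}; // indices at which 'a' is read

  // One fresh variable per read index.
  for (const std::string &e : reads)
    std::cout << "replace a[" << e << "] by fresh variable a_" << e << "\n";

  // Pairwise consistency constraints between the fresh variables.
  for (std::size_t i = 0; i < reads.size(); ++i)
    for (std::size_t j = i + 1; j < reads.size(); ++j)
      std::cout << "add premise: " << reads[i] << " = " << reads[j]
                << " -> a_" << reads[i] << " = a_" << reads[j] << "\n";
  return 0;
}
```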
5 Experiments

Benchmarks. We used the mini-java benchmarks of [14]. We modified them to add loop invariants as optional hints, increased readability by reducing the number of intermediate variables, and assertions are now checked through a loop instead of at a random index (i.e. instead of checking that $a[k]$ verifies the property for a random $k$, we iterate with a loop $0 \leq k < N$ and check that $a[k]$ verifies the property). We divided our experiments into several categories:
1. Our running example, with and without hints.
2. The mini-java benchmarks [14] without hints.
3. The mini-java benchmarks [14] with hints.
4. The buggy (the assertion is wrong) mini-java benchmarks [14], to check the soundness of our tool.

Toolchain. We used the following toolchain:
1. The mini-java to Horn clauses converter of [14] to convert programs into Horn clauses, with an added option to handle hints. It also contains options to vary the syntactical output of the clauses without changing their semantics (e.g. naming conventions).
2. One of the following abstraction methods from Horn clauses to Horn clauses:
- No abstraction: we keep the original file.
- The Vaphor abstraction [14] (i.e. excluding the part of the tool that converts mini-java to Horn clauses).
- Our data abstraction tool (removing arrays in predicates using the $\text{Cell}_1$ abstraction).
- Our data abstraction tool with ackermanisation.
3. The Z3 Horn solver (version 4.8.8, 64 bit) with a 30s timeout.

The code for all tools and benchmarks is available on github (https://github.com/vaphor). The version used of each tool is tagged with "NSAD20".

Results. Our experimental results are summarized in Table 1. It contains, for each toolchain and each category of example, the number of examples for which:
- the solver computed the desired result (✓), i.e. sat if the example is not buggy and unsat otherwise, with default syntax options;
- the solver returned an undesired result (✗), i.e. unsat when the example was not buggy, with default syntax options;
- the solver returned unknown (i.e. the solver abandoned) or timed out, that is, took more than 30 seconds, with default syntax options;
- the solver computed the desired result in at least one of the syntax options (≥1).

We have no case of problems in the toolchain, and results are identical with a timeout of 120 seconds.

Analysis. The experimental results show that:
1. The tool seems sound (without bugs): no buggy example becomes non-buggy.
2. The $\text{Cell}_1$ abstraction with our instantiation heuristic is expressive enough that the solver never reports a bug when there was none initially. Even better, we know that the invariant is expressible in the abstract domain, as the column ≥1 for Cell₁ ackermanised on hinted examples is equal to #exp.
3. Data abstraction behaves better than Vaphor.
4. The Z3 solver is not yet good enough on integers to find the necessary invariants without hints.
5. The Z3 solver is dependent on syntax, as the column ≥1 is not equal to the column ✓.
6. Increasing the timeout does not seem to help the solver converge, as results at timeout=30s are equal to results at timeout=120s.
7. Completely removing arrays helps.
8. Non-hinted or non-abstracted versions time out.

Discussion. Points 1 and 2 show that the tool achieves its purpose, that is, reducing invariants on arrays requiring quantifiers to invariants without quantifiers on integers by using the $\text{Cell}_1$ abstraction, without losing precision (i.e. the invariants are expressible in the abstract domain). Future work should use more array program benchmarks [3] and possibly another front-end to handle them [2, 12].

Point 3 can be explained by several reasons. First, [14] does not give an explicit technique for abstracting multiple arrays, and the effective transformation in the tool seems less expressive than applying the $\text{Cell}_1$ abstraction to each array.
Furthermore, Horn solvers based on Sat Modulo Theory (SMT) are very sensitive to the SMT proofs. Our data abstraction tool implements several simple expression-simplification techniques, which may lead to better convergence of the solver by reducing the noise in SMT proofs.

Points 4 to 7 show that the Z3 tool is not yet mature enough to handle the Horn clauses we obtain after abstraction. One possible reason may be that the Z3 Horn solver heuristics were optimized for Horn clauses directly constructed from programs, and not for the type of Horn clauses we generate after abstraction. A possible solution to improve predictability and reduce the impact of syntax could be to solve the Horn clauses using abstract interpretation. However, this would require relational invariants, which may be expensive.

Point 8 shows that the proposed technique cannot by itself automatically generate invariants on Horn clauses containing arrays; however, it succeeds in reducing the problem of finding quantified invariants on arrays to solving integer Horn clauses. It seems the latter is still too hard, but this may change in the near future, possibly by using another solver.

6 Related Work

Numerous abstractions for arrays have been proposed in the literature, among which array slicing [8]. In Example 7 we showed how they are expressible in our framework. Similarly to [13], we think that disconnecting the array abstraction from other abstractions and from solving enables better use of back-end solvers. Like [14], we use Horn clauses to encode the program under verification, but we go a step further in the use of Horn clauses as an intermediate representation useful for chaining abstractions. Furthermore, our formalization is cleaner when multiple arrays are involved. Our instantiation method is inspired by previous work on solving quantified formulas [4, 5, 9]. The paper [5] does not consider Horn clauses, that is, expressions with unknown predicates, but only expressions with quantifiers. The paper [4] follows an approach very similar to ours; however, they do not suggest the notion of data abstraction as an object of study, and they use trigger-based instantiation. Both instantiation methods of [4, 5] lead to bigger instantiation sets than the one we suggest; furthermore, data abstractions may allow abstracting within their induction proofs.

7 Conclusion

In this paper we gave an abstraction framework for data within Horn clauses. Using this framework, we described the cell abstractions of [14] in a simple manner, as well as some other common array abstractions, using composition. The method has been implemented and shows interesting preliminary experimental results. Experiments show that the chosen solver, Z3, seems to be very unpredictable on the kind of Horn clauses we generate, and further investigation needs to be done. Another direction is to experiment with other Horn clause solving techniques. Moreover, we plan to improve our implementation by parametrizing it with the desired data abstraction and, on the theoretical side, to work on isolating a fragment on which the $\text{Cell}_i$ heuristic is complete.
Table 1. Experimental results. Each configuration column lists, in order, the counts ✓, ✗, unknown/timeout and, where applicable, ≥1 (see the Results paragraph).

<table>
<thead>
<tr><th></th><th>#exp</th><th>Noabs</th><th>Vaphor</th><th>Cell_1</th><th>Cell_1 ackermanised</th></tr>
</thead>
<tbody>
<tr><td>Running</td><td>1</td><td>0 0 1</td><td>0 0 1 0</td><td>0 0 1 0</td><td>0 0 1 0</td></tr>
<tr><td>RunningHinted</td><td>1</td><td>0 0 1</td><td>0 0 1 0</td><td>0 0 1 0</td><td>0 0 1 0</td></tr>
<tr><td>NotHinted</td><td>1</td><td>0 0 1</td><td>0 0 1 0</td><td>0 0 1 0</td><td>0 0 1 0</td></tr>
<tr><td>Hinted</td><td>11</td><td>0 0 11</td><td>5 0 6 5</td><td>6 0 5 10</td><td>8 0 3 11</td></tr>
<tr><td>Buggy</td><td>4</td><td>4 0 0</td><td>4 0 0 4</td><td>4 0 0 4</td><td>4 0 0 4</td></tr>
</tbody>
</table>
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-02948081/file/nsad20-brainegonnord_authorversion_taggued.pdf", "len_cl100k_base": 7190, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 23148, "total-output-tokens": 8298, "length": "2e12", "weborganizer": {"__label__adult": 0.0004646778106689453, "__label__art_design": 0.00042128562927246094, "__label__crime_law": 0.000537872314453125, "__label__education_jobs": 0.0008292198181152344, "__label__entertainment": 8.147954940795898e-05, "__label__fashion_beauty": 0.0002161264419555664, "__label__finance_business": 0.0002689361572265625, "__label__food_dining": 0.0004880428314208984, "__label__games": 0.000751495361328125, "__label__hardware": 0.0009822845458984375, "__label__health": 0.0008482933044433594, "__label__history": 0.0003285408020019531, "__label__home_hobbies": 0.00014448165893554688, "__label__industrial": 0.0006518363952636719, "__label__literature": 0.0003681182861328125, "__label__politics": 0.0004439353942871094, "__label__religion": 0.0007634162902832031, "__label__science_tech": 0.052398681640625, "__label__social_life": 0.00013399124145507812, "__label__software": 0.00461578369140625, "__label__software_dev": 0.9326171875, "__label__sports_fitness": 0.0004489421844482422, "__label__transportation": 0.000858306884765625, "__label__travel": 0.0002434253692626953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26496, 0.03547]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26496, 0.59315]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26496, 0.78547]], "google_gemma-3-12b-it_contains_pii": [[0, 915, false], [915, 4584, null], [4584, 10363, null], [10363, 16471, null], [16471, 21817, null], [21817, 26496, null]], "google_gemma-3-12b-it_is_public_document": [[0, 915, true], [915, 4584, null], [4584, 10363, null], [10363, 16471, null], [16471, 21817, null], [21817, 26496, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26496, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26496, null]], "pdf_page_numbers": [[0, 915, 1], [915, 4584, 2], [4584, 10363, 3], [10363, 16471, 4], [16471, 21817, 5], [21817, 26496, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26496, 0.07075]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
00412aa8e03a4ace71994503cfea95f0553db4ea
# POSE Reference Manual

POSE software and this manual are entirely the fault of Terry L. Wilmarth. Credit for Charm++ is due to the developers, past and present, at the Parallel Programming Laboratory (http://charm.cs.uiuc.edu). Questions or comments on POSE or this manual can be directed to Terry at wilmarth@cse.uiuc.edu.

## Contents

1 Introduction
   1.1 Developing a model in POSE
   1.2 PDES in POSE
2 Compiling, Running and Debugging a Sample POSE program
   2.1 Compiling
   2.2 Running
   2.3 Debugging
   2.4 Sequential Mode
3 Programming in POSE
   3.1 POSE Modules
   3.2 Event Message and Poser Interface Description
   3.3 Declaring Event Messages and Posers
   3.4 Implementing Posers
   3.5 Creation of Poser Objects
   3.6 Event Method Invocations
   3.7 Elapsing Simulation Time
   3.8 Interacting with a POSE Module and the POSE System
4 Configuring POSE
   4.1 POSE Command Line Options
5 Communication Optimizations
6 Load Balancing
7 Glossary of POSE-specific Terms

## 1 Introduction

POSE (Parallel Object-oriented Simulation Environment) is a tool for parallel discrete event simulation (PDES) based on Charm++. You should have a background in object-oriented programming (preferably C++) and know the basic principles behind discrete event simulation. Familiarity with simple parallel programming in Charm++ is also a plus.

POSE uses the message-driven execution approach of Charm++, but adds discrete timestamps to control when, in simulated time, a message is executed. Users may choose synchronization strategies (conservative, or several optimistic variants) on a per-class basis, depending on the desired behavior of the object. However, POSE is intended to perform best with a special type of adaptive synchronization strategy which changes its behavior on a per-object basis. Thus, other synchronization strategies may not be properly maintained. There are two significant versions of the adaptive strategy: adapt4, a simple, stable version, and adept, the development version.

### 1.1 Developing a model in POSE

Modelling a system in POSE is similar to how you would model in C++ or any OOP language. Objects are entities that hold data and have a fixed set of operations that can be performed on them (methods). Charm++ provides the parallelism we desire, but the model does not differ dramatically from C++.
The primary difference is that objects may exist on a set of processors, and so invoking methods on them requires communication via messages. These parallel objects are called chares. POSE adds to Charm++ by putting timestamps on method invocations (events), and executing events in timestamp order to preserve the validity of the global state. Developing a model in POSE involves identifying the entities we wish to model, determining their interactions with other entities, and determining how they change over time.

### 1.2 PDES in POSE

A simulation in POSE consists of a set of Charm++ chares performing timestamped events in parallel. In POSE, these chares are called posers. POSE is designed to work with many such entities per processor. The more a system can be broken down into its parallel components when designing the simulation model, the more potential parallelism in the final application.

A poser class is defined with a synchronization strategy associated with it. We encourage the use of the adaptive strategies, as mentioned earlier. Adaptive strategies are optimistic and will potentially execute events out of order, but they have rollback and cancellation messages as well as checkpointing abilities to deal with this behind the scenes.

Execution is driven by events. An event arrives for a poser and is sorted into a queue by timestamp. The poser has a local time called object virtual time (OVT) which represents its progress through the simulation. When an event arrives with a timestamp $t >$ OVT, the OVT is advanced to $t$. If the event has timestamp $t <$ OVT, it may be that other events with greater timestamps were executed; if this is the case, a rollback will occur. If not, the event is simply executed along with the others in the queue. Time can also pass on a poser within the course of executing an event; an elapse function is used to advance the OVT. POSE also maintains a global clock, the global virtual time (GVT), that represents the progress of the entire simulation.

Currently, POSE has no way to directly specify event dependencies, so if they exist, the programmer must handle errors in ordering carefully. POSE provides a delayed error message print and abort function that is only performed if there is no chance of rolling back the dependency error. Another mechanism provided by POSE is a method of tagging events with sequence numbers. This allows the user to determine the execution order of events which have the same timestamp.

## 2 Compiling, Running and Debugging a Sample POSE program

Sample code is available in the Charm++ source distribution. Assuming a net-linux build of Charm++, look in `charm/net-linux/examples/pose`. The SimBenchmark directory contains a synthetic benchmark simulation and is fairly straightforward to understand.

### 2.1 Compiling

To build a POSE simulation, run `etrans.pl` on each POSE module to get the new source files. `etrans.pl` is a source-to-source translator. Given a module name, it will translate the `module.h`, `module.ci`, and `module.C` files into `module_sim.h`, `module_sim.ci`, and `module_sim.C` files. The translation operation adds wrapper classes for POSE objects and handles the interface with strategies and other poser options. To facilitate code organization, the `module.C` file can be broken up into multiple files, and those files can be appended to the `etrans.pl` command line after the module name. These additional `.C` files will be translated and their output appended to the `module_sim.C` file.
The `-s` switch can be passed to use the sequential simulator feature of POSE on your simulation, but you must also build a sequential version when you compile (see below).

Once the code has been translated, it is a Charm++ program that can be compiled with `charmc`. Please refer to the CHARM++/CONVERSE Installation and Usage Manual for details on the `charmc` command. You should build the new source files produced by `etrans.pl` along with the main program and any other source needed with `charmc`, linking with `-module pose` (or `-module seqpose` for a sequential version) and `-language charm++`. The SimBenchmark example has a Makefile that shows this process.

### 2.2 Running

To run the program in parallel, a `charmrun` executable was created by `charmc`. The flag `+p` is used to specify a number of processors to run the program on. For example:

```
> ./charmrun pgm +p4
```

This runs the executable `pgm` on 4 processors. For more information on how to use `charmrun` and set up your environment for parallel runs, see the CHARM++/CONVERSE Installation and Usage Manual.

### 2.3 Debugging

Because POSE is translated to Charm++, debugging is a little more challenging than normal. Multi-processor debugging can be achieved with the `charmrun ++debug` option, and debugging is performed on the `module_sim.C` source files. The user thus has to track down problems in the original POSE source code. A long-term goal of the POSE developers is to eliminate the translation phase and rely on the interface translator of Charm++ to provide similar functionality.

### 2.4 Sequential Mode

As mentioned above, the same source code can be used to generate a purely sequential POSE executable by using the `-s` flag to `etrans.pl` and linking with `-module seqpose`. This turns off all aspects of synchronization, checkpointing and GVT calculation that are needed for optimistic parallel execution. Thus you should experience better one-processor times for executables built for sequential execution than for those built for parallel execution. This is convenient for examining how a program scales in comparison to sequential time. It is also helpful for simulations that are small and fast, or in situations where multiple processors are not available.

## 3 Programming in POSE

This section details syntax and usage of POSE constructs with code samples.

### 3.1 POSE Modules

A POSE module is similar to a Charm++ module. It is comprised of an interface file with suffix `.ci`, a header `.h` file, and the implementation in `.C` files. Several posers can be described in one module, and the module can include regular chares as well. The module is translated into Charm++ before the simulation can be compiled. This translation is performed by a Perl script called `etrans.pl`, which is included with POSE. It generates files suffixed `_sim.ci`, `_sim.h`, and `_sim.C`.

### 3.2 Event Message and Poser Interface Description

Messages, be they event messages or otherwise, are described in the `.ci` file exactly the way they are in Charm++. Event messages cannot make use of Charm++'s parameter marshalling, and thus you must declare them in the `.h` file. Charm++ varsize event messages are currently not implemented in POSE. All event messages inherit from a POSE type `eventMsg`, which includes data for timestamps and miscellaneous POSE statistics.

```
message myMessage;
```

Posers are described similarly to chares, with a few exceptions. First, the `poser` keyword is used to denote that the class is a POSE simulation object class.
Second, event methods are tagged with the keyword `event` in square brackets. Finally, three components are specified which indicate how objects of the poser class are to be simulated. The `sim` component controls the wrapper class and event queue used by the object. The `strat` component controls the synchronization strategy the object should use (i.e. adaptive or basic optimistic). The `rep` component specifies the global state representation, which controls how the global state is kept accurate depending on the synchronization strategy being used (i.e. checkpointing or no checkpointing). Currently, there is only one wrapper type, `sim`. This 3-tuple syntax is likely to become obsolete, replaced simply by the synchronization strategy only; keeping the global state accurate is largely a function of the synchronization strategy used.

```
poser mySim : sim strat rep {
  entry mySim(myMessage *);
  entry [event] void myEventMethod(eventMsg *);
  ...
};
```

A typical `.ci` file poser specification might look like this:

```
poser Worker : sim adapt4 chpt {
  entry Worker(WorkerCreationMsg *);
  entry [event] void doWork(WorkMsg *);
  ...
};
```

Note that the constructors and event methods of a poser must take an event message as parameter. If there is no data (and thereby no message defined) that needs to be passed to the method, then the parameter should be of type `eventMsg *`. This ensures that POSE will be able to timestamp the event.

### 3.3 Declaring Event Messages and Posers

Currently, event messages are declared with no reference to what they might inherit from (unlike in Charm++); the translator takes care of this. In addition, they must define `operator=`.

```cpp
class myMessage {
public:
  int someData;
  myMessage& operator=(const myMessage& obj) {
    eventMsg::operator=(obj);
    someData = obj.someData;
    return *this;
  }
};
```

Similarly, posers do not refer to a base class when they are declared. Posers are required to have a void constructor declared that simply initializes the data to sensible values. A destructor must be provided as well. In addition, a `pup` method and an `operator=` must be provided. The `pup` method should call the `pup` method of the global state representation class being used.

```cpp
class mySim {
  int anInt;
  float aFloat;
  char aString[20];
public:
  mySim();
  mySim(myMessage *m);
  ~mySim();
  void pup(PUP::er &p);
  mySim& operator=(const mySim& obj);
  void myEventMethod(eventMsg *m);
  void myEventMethod_anti(eventMsg *m);
  void myEventMethod_commit(eventMsg *m);
  ...
};
```

Further, for each event method, a commit method should be declared, and if the synchronization strategy being used is optimistic or involves any sort of rollback, an anti-method should also be provided. The syntax of these declarations is shown above; their usage and implementation will be described next.

### 3.4 Implementing Posers

The void constructor for a poser should be defined however the user sees fit. It could be given an empty body and should still work for POSE. Poser entry constructors (those described in the `.ci` file) should follow the template below:

```cpp
mySim::mySim(myMessage *m)
{
  // initializations from m
  ...
  delete m;
  ...
}
```

Note that while the incoming message `m` may be deleted here in the constructor, event messages received on event methods should **not** be deleted; the PDES fossil collection will take care of those. An event method should have the following form:

```cpp
void mySim::myEventMethod(eventMsg *m)
{
  // body of method
}
```

Again, `m` is never deleted in the body of the event.
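To make this concrete, here is a minimal sketch of our own (using the illustrative `mySim`/`anInt` names from the declarations above) of an event method together with its commit method; the commit method runs only once the event can no longer be rolled back:

```cpp
// Event method: update only poser-local state (no calls like rand() that
// reach outside the poser; see the caveat just below).
void mySim::myEventMethod(eventMsg *m)
{
  anInt++;   // m is NOT deleted; fossil collection reclaims it
}

// Commit method: runs when the event is guaranteed not to roll back.
// Per the glossary, CommitPrintf currently must be invoked on the
// wrapper (parent) class to work properly.
void mySim::myEventMethod_commit(eventMsg *m)
{
  CommitPrintf("anInt is now %d\n", anInt);
}
```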
A side effect of optimistic synchronization and rollback is that we would like the effects of event execution to be dependent only upon the state encapsulated in the corresponding poser. Thus, accessing arbitrary state outside of the simulation, such as by calling `rand()`, is forbidden. We are planning to fix this problem by adding a `POSE_rand()` operation which will generate a random number the first time the event is executed, and will checkpoint the number for use in subsequent re-executions should a rollback occur.

### 3.5 Creation of Poser Objects

Posers are created within a module using the following syntax:

```cpp
int hdl = 13; // handle should be unique
myMessage *m = new myMessage;
m->someData = 34;
POSE_create(mySim(m), hdl, 0);
```

This creates a `mySim` object that comes into existence at simulation time zero and can be referred to by the handle 13. Creating a poser from outside the module (i.e. from `main`) is somewhat more complex:

```cpp
int hdl = 13;
myMessage *m = new myMessage;
m->someData = 34;
m->Timestamp(0);
(*(CProxy_mySim *) &POSE_Objects)[hdl].insert(m);
```

This is similar to what the module code ultimately gets translated to, and should be replaced by a macro with similar syntax soon.

### 3.6 Event Method Invocations

Event method invocations vary significantly from entry method invocations in Charm++, and various forms should be used depending on where the event method is being invoked. In addition, event messages sent to an event method should be allocated specifically for an event invocation, and cannot be recycled or deleted. There are three ways to send events within a POSE module. The first and most commonly used way involves specifying an offset in simulation time from the current time. The syntax follows:

```cpp
aMsg = new eventMsg;
POSE_invoke(myEventMethod(aMsg), mySim, hdl, 0);
```

Here, we've created an `eventMsg` and sent it to `myEventMethod`, an event entry point on `mySim`. `mySim` was created at handle `hdl`, and we want the event to take place now, i.e. at the current simulation time, so the offset is zero.

The second way to send an event is reserved for use by non-poser objects within the module; it should not be used by posers. This version allows you to specify an absolute simulation time at which the event happens (as opposed to an offset to the current time). Since non-poser objects are not a part of the simulation, they do not have a current time, or OVT, by which to specify an offset. The syntax is nearly identical to that above, only the last parameter is an absolute time:

```cpp
aMsg = new eventMsg;
POSE_invoke_at(myEventMethod(aMsg), mySim, hdl, 56);
```

Posers should not use this approach because of the risk of specifying an absolute time that is earlier than the current time on the object sending the event. Using this method, events can be injected into the system from outside any module, but this is not recommended.

The third approach is useful when an object sends events to itself. It is simply a slightly shorter syntax for the same thing as `POSE_invoke`:

```cpp
aMsg = new eventMsg;
POSE_local_invoke(myEventMethod(aMsg), offset);
```
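Putting the first and third forms together, here is a hedged sketch of ours (the handle 14 and the offsets are arbitrary illustrative values) of a poser event method that sends an event to a peer and one to itself:

```cpp
// Inside a poser's event method: send a new event to the poser at
// handle 14, five time units from now, and one to ourselves ten time
// units from now. Event messages are freshly allocated each time.
void mySim::myEventMethod(eventMsg *m)
{
  eventMsg *toPeer = new eventMsg;
  POSE_invoke(myEventMethod(toPeer), mySim, 14, 5);

  eventMsg *toSelf = new eventMsg;
  POSE_local_invoke(myEventMethod(toSelf), 10);
}
```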
### 3.7 Elapsing Simulation Time

We've seen in the previous section how it is possible to advance simulation time by generating events with non-zero offsets from the current time. When such an event is received on an object, if the object is behind, it advances its local simulation time (object virtual time, or OVT) to the timestamp of the event. It is also possible to elapse time on an object while the object is executing an event. This is accomplished thus:

```cpp
elapse(42);
```

The example above simulates the passage of forty-two time units by adding as much to the object's current OVT.

### 3.8 Interacting with a POSE Module and the POSE System

POSE modules consist of `<modname>.ci`, `<modname>.h` and `<modname>.C` files that are translated via `etrans.pl` into `<modname>_sim.ci`, `<modname>_sim.h` and `<modname>_sim.C` files. To interface these with a main program module, say `Pgm` in files `pgm.ci`, `pgm.h` and `pgm.C`, the `pgm.ci` file must declare the POSE module as extern in the `mainmodule Pgm` block. For example:

```
mainmodule Pgm {
  extern module <modname>;
  readonly CkChareID mainhandle;
  mainchare main {
    entry main();
  };
}
```

The `pgm.C` file should include `pose.h` and `<modname>_sim.h` along with its own headers, declarations and whatever else it needs. Somewhere in the `main` function, `POSE_init()` should be called. This initializes all of POSE's internal data structures. The parameters to `POSE_init()` specify a termination method. POSE programs can be terminated in two ways: with inactivity detection or with an end time. Inactivity detection terminates after a few iterations of the GVT if no events are being executed and virtual time is not advancing. When an end time is specified and the GVT passes it, the simulation exits. If no parameters are provided to `POSE_init()`, the simulation will use inactivity detection. If a time is provided as the parameter, this time will be used as the end time.

Now POSE is ready for posers. All posers can be created at this point, each with a unique handle. The programmer is responsible for choosing and keeping track of the handles created for posers. Once all posers are created, the simulation can be started:

```cpp
POSE_start();
```
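Assembled from the snippets above, here is a minimal sketch of ours of a `main` body (`mySim`, `myMessage` and the handle 13 are the illustrative names used earlier) that initializes POSE, creates one poser from outside the module, and starts the simulation:

```cpp
// pgm.C (sketch): initialize POSE, create one poser, start simulating.
main::main()
{
  POSE_init();                  // no parameter: use inactivity detection

  int hdl = 13;                 // unique handle chosen by the programmer
  myMessage *m = new myMessage;
  m->someData = 34;
  m->Timestamp(0);              // poser comes into existence at time 0
  (*(CProxy_mySim *) &POSE_Objects)[hdl].insert(m);

  POSE_start();                 // begin executing events
}
```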
## 4 Configuring POSE

POSE can be configured in two different ways. Fundamental behaviors are controlled by altering values in the `pose_config.h` file in the POSE installation and rebuilding POSE. Many of these configuration options can (and should) be controlled by command line options. These are designated here by an asterisk (*); see section 4.1 for the command line options.

- **POSE_STATS_ON** * - Turn on timing and statistics gathering for internal POSE operations. Produces a small slowdown in the program.
- **POSE_DOP_ON** * - Turn on timing and statistics gathering for degree-of-parallelism calculations. Generates log files that can be loaded by ploticus scripts to produce graphs plotting active entities over time. Slows down the program dramatically.
- **POSE_COMM_ON** - Turn on the streaming communication optimization for small message packing.
- **COMM_TIMEOUT** - Used by the streaming communication library. Time to wait (in ?) before sending buffered messages.
- **COMM_MAXMSG** - Used by the streaming communication library. Number of messages to buffer before packing and sending as one.
- **LB_ON** * - Turn on POSE load balancing.
- **STORE_RATE** * - Default checkpointing rate: 1 for every STORE_RATE events.
- **SPEC_WINDOW** * - Speculative window size: this is how far (in virtual time units) ahead of the GVT posers are allowed to go.
- **MIN_LEASH** * and **MAX_LEASH** * - Bounds on the speculative window; these are adjusted by adaptive synchronization strategies.
- **LEASH_FLEX** * - Granularity of flexibility when the speculative window is shrunk or expanded.
- **MAX_POOL_SIZE** - Memory used by event messages is recycled. This controls how many messages of a particular size will be kept on hand.
- **MAX_RECYCLABLE** - This is the largest size of message that will be recycled.
- **LB_SKIP** * - This controls the frequency of load balance invocation. 1 in every LB_SKIP executions of the GVT algorithm will invoke load balancing.
- **LB_THRESHOLD** * - What the heck does this number mean? I can't remember. I'll have to look through the code... later. Meanwhile, I think this indicates some sort of threshold a single processor has to cross before we even bother with analyzing the load.
- **LB_DIFF** * - Once the load has been analyzed, we compute the difference between the max and min PE loads. Only if this difference exceeds LB_DIFF do we bother migrating posers.

Several of the above flags and constants will be eliminated as the adaptive strategy is expanded. What remains will eventually become run-time options.

### 4.1 POSE Command Line Options

Command line options are handled like Charm++ command line parameters. For namespace purity, all POSE command line options have a `_pose` suffix. They can be inspected by appending `-h` to an execution of a POSE program. Command line options override any defaults set in the `pose_config.h` file.

- **+stats_pose** - Turn on timing and statistics gathering for internal POSE operations. Produces a small slowdown in the program.
- **+dop_pose** - Turn on timing and statistics gathering for degree-of-parallelism calculations. Generates log files that can be loaded by ploticus scripts to produce graphs plotting active entities over time. Slows down the program dramatically.
- **+lb_on_pose** - Turn on POSE load balancing.
- **+store_rate_pose N** - Default checkpointing rate: 1 for every STORE_RATE events.
- **+spec_window_pose N** - Speculative window size: this is how far (in virtual time units) ahead of the GVT posers are allowed to go.
- **+min_leash_pose N** and **+max_leash_pose N** - Bounds on the speculative window; these are adjusted by adaptive synchronization strategies.
- **+leash_flex_pose N** - Granularity of flexibility when the speculative window is shrunk or expanded.
- **+lb_skip_pose N** - This controls the frequency of load balance invocation. 1 in every LB_SKIP executions of the GVT algorithm will invoke load balancing.
- **+lb_threshold_pose N** - Minimum threshold for load balancing; the default is 4000.
- **+lb_diff_pose N** - Once the load has been analyzed, we compute the difference between the max and min PE loads. Only if this difference exceeds LB_DIFF do we bother migrating posers.
- **+checkpoint_gvt_pose N** - Checkpoint to disk approximately every N GVT ticks (N is an integer). The default is 0, which indicates no checkpointing.
- **+checkpoint_time_pose N** - Checkpoint to disk every N seconds (N is an integer). The default is 0, which indicates no checkpointing. If both this parameter and +checkpoint_gvt_pose are greater than 0, a warning will be given, the value of this parameter will be set to 0, and POSE will checkpoint based on GVT ticks.

As a technical point, POSE command line parsing is done inside the `POSE_init()` call. Therefore, the most consistent behavior for interleaving POSE command line options with user application options is achieved by calling `POSE_init()` before handling user application command line arguments.

## 5 Communication Optimizations

## 6 Load Balancing

## 7 Glossary of POSE-specific Terms

- **void POSE_init()** - Initializes various items in POSE; creates the load balancer if load balancing is turned on; initializes the statistics gathering facility if statistics are turned on. Must be called in the user's main program prior to the creation of any simulation objects or reference to any other POSE construct.
- **void POSE_start()** - Sets busy wait to the default if none is specified; starts quiescence detection; starts the simulation timer. Must be called in the user's main program when the simulation should start.
- **void POSE_registerCallBack(CkCallback cb)** - Registers a callback function with POSE; when the program ends or quiesces, the function is called. The CkCallback is created with the index of the callback function and a proxy to the object that function is to be called on.
For example, to register the function `wrapUp` in the main module as a callback:

```plaintext
CProxy_main M(mainhandle);
POSE_registerCallBack(CkCallback(CkIndex_main::wrapUp(), M));
```

- **void** `POSE_stop()`
  - Commits remaining events; prints the final time and statistics (if on); calls the callback function.
  - Called internally when quiescence is detected or the program reaches POSE_endtime.
- **void** `POSE_exit()`
  - Similar to `CkExit()`.
- **void** `POSE_set_busy_wait(int n)`
  - Used to control the granularity of events; when `POSE_busy_wait` is called, the program busy-waits for the time it takes to compute $fib(n)$.
- **void** `POSE_busy_wait()`
  - Busy-wait for the time it takes to compute $fib(n)$, where $n$ is either 1 or set by `POSE_set_busy_wait`.
- **POSE_useET(t)**
  - Set the program to terminate when the global virtual time (GVT) reaches $t$.
- **POSE_useID()**
  - Set the program to terminate when no events are available in the simulation.
- **void** `POSE_create(constructorName(eventMsg *m), int handle, int atTime)`
  - Creates a poser object given its constructor, an event message $m$ of the appropriate type, any integer as the handle (by which the object will be referred to from then on), and a time (in simulation timesteps) at which it should be created.
  - The handle can be thought of as a chare array element index in Charm++.
- **void** `POSE_invoke_at(methodName(eventMsg *m), className, int handle, int atTime)`
  - Send a `methodName` event with message $m$ to an object of type `className` designated by handle `handle` at the time specified by `atTime`.
  - This can be used by non-poser objects in the POSE module to inject events into the system being simulated. It should not be used by a poser object to generate an event.
- **void** `POSE_invoke(methodName(eventMsg *m), className, int handle, int timeOffset)`
  - Send a `methodName` event with message $m$ to an object of type `className` designated by handle `handle` at the current OVT + `timeOffset`.
  - This is used by poser objects to send events from one poser to another.
- **void** `POSE_local_invoke(methodName(eventMsg *m), int timeOffset)`
  - Send a `methodName` event with message $m$ to this object at the current OVT + `timeOffset`.
  - This is used by poser objects to send events to themselves.
- **void** `CommitPrintf(char *s, args...)`
  - Buffered print statement; prints when the event is committed (i.e. will not be rolled back).
  - Currently must be called on the wrapper class (parent) to work properly, but a fix for this is in the works.
- **void** `CommitError(char *s, args...)`
  - Buffered error statement; prints and aborts the program when the event is committed.
  - Currently must be called on the wrapper class (parent) to work properly, but a fix for this is in the works.
- **void** `elapse(int n)`
  - Elapse $n$ simulation time units.
- **poser**
  - Keyword (used in place of *chare*) to denote a poser object in the `.ci` file of a POSE module.
- **event**
  - Keyword used in square brackets in the `.ci` file of a POSE module to denote that the entry method is an event method.
- **eventMsg**
  - Base class for all event messages; provides timestamp, priority and many other properties.
- **sim**
  - Base class of all wrapper classes.
- **strat**
  - Base class of all strategy classes.
- **con**
  - Simple conservative strategy class.
- **opt, opt2, opt3, spec, adapt, adapt2**
  - Optimistic strategy classes.
- **rep**
  - Base class for all representation classes.
- **chpt**
  - Simple checkpointing representation class.
- **OVT()**
  - Returns the object virtual time (OVT) of the poser in which it is called.
- **void** `MySim::terminus()`
  - When the simulation has terminated and the program is about to exit, this method is called on all posers. It is implemented as an empty method in the base `rep` class; the programmer may override it with whatever actions need to be performed per object at the end of the simulation.
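Putting the pieces together, a minimal main program might look like the following sketch. The poser class `Worker`, its event message type `WorkMsg`, the end time, and the number of posers are hypothetical names and values chosen for illustration; the POSE calls themselves are the ones documented above.

```plaintext
// pgm.C -- hypothetical main program; Worker and WorkMsg are illustrative
#include "pose.h"
#include "worker_sim.h"   // generated from worker.ci/h/C by etrans.pl
#include "pgm.h"

main::main(void)
{
  // End-time termination: stop when the GVT reaches 10000.
  // POSE_init() with no parameter would use inactivity detection instead.
  POSE_init(10000);
  mainhandle = thishandle;  // readonly declared in pgm.ci

  // Register a callback to run when the simulation ends or quiesces.
  CProxy_main M(mainhandle);
  POSE_registerCallBack(CkCallback(CkIndex_main::wrapUp(), M));

  // Create the posers; the programmer chooses and tracks the handles.
  for (int i = 0; i < 8; i++) {
    WorkMsg *wm = new WorkMsg;      // WorkMsg derives from eventMsg
    POSE_create(Worker(wm), i, 0);  // handle i, created at virtual time 0
  }

  POSE_start();  // start the simulation
}
```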
{"Source-Url": "http://charm.cs.illinois.edu/manuals/pdf/pose.pdf", "len_cl100k_base": 6616, "olmocr-version": "0.1.49", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 32198, "total-output-tokens": 7424, "length": "2e12", "weborganizer": {"__label__adult": 0.0003218650817871094, "__label__art_design": 0.0002391338348388672, "__label__crime_law": 0.00018203258514404297, "__label__education_jobs": 0.00033545494079589844, "__label__entertainment": 6.115436553955078e-05, "__label__fashion_beauty": 0.00014007091522216797, "__label__finance_business": 9.518861770629884e-05, "__label__food_dining": 0.0003342628479003906, "__label__games": 0.0012912750244140625, "__label__hardware": 0.0017213821411132812, "__label__health": 0.0002646446228027344, "__label__history": 0.0001908540725708008, "__label__home_hobbies": 9.59634780883789e-05, "__label__industrial": 0.0004546642303466797, "__label__literature": 0.00013720989227294922, "__label__politics": 0.00013625621795654297, "__label__religion": 0.000377655029296875, "__label__science_tech": 0.0123291015625, "__label__social_life": 5.543231964111328e-05, "__label__software": 0.00687408447265625, "__label__software_dev": 0.9736328125, "__label__sports_fitness": 0.0003771781921386719, "__label__transportation": 0.00040268898010253906, "__label__travel": 0.00018024444580078125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29563, 0.01079]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29563, 0.31626]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29563, 0.84994]], "google_gemma-3-12b-it_contains_pii": [[0, 324, false], [324, 2165, null], [2165, 6015, null], [6015, 9421, null], [9421, 12259, null], [12259, 14113, null], [14113, 17059, null], [17059, 19700, null], [19700, 21538, null], [21538, 23693, null], [23693, 26236, null], [26236, 28910, null], [28910, 29563, null]], "google_gemma-3-12b-it_is_public_document": [[0, 324, true], [324, 2165, null], [2165, 6015, null], [6015, 9421, null], [9421, 12259, null], [12259, 14113, null], [14113, 17059, null], [17059, 19700, null], [19700, 21538, null], [21538, 23693, null], [23693, 26236, null], [26236, 28910, null], [28910, 29563, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29563, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29563, null]], "pdf_page_numbers": [[0, 324, 1], [324, 2165, 2], [2165, 6015, 3], [6015, 9421, 4], [9421, 12259, 5], [12259, 14113, 6], [14113, 17059, 7], [17059, 19700, 8], [19700, 21538, 9], [21538, 23693, 10], [23693, 26236, 11], [26236, 28910, 12], [28910, 29563, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": 
[[0, 29563, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
d3d33871e66ab0ee760798db6eb96a4183fa990c
**11. Visible-Surface Detection Methods**

**Problem definition of Visible-Surface Detection Methods:** To identify those parts of a scene that are visible from a chosen viewing position. Surfaces which are obscured by other opaque surfaces along the line of sight (projection) are invisible to the viewer.

**Characteristics of approaches:**
- Does it require a large memory size?
- Does it require a long processing time?
- Which types of objects is it applicable to?

**Considerations:**
- Complexity of the scene
- Type of objects in the scene
- Available equipment
- Static or animated scene?

**Classification of Visible-Surface Detection Algorithms:**

1. **Object-space Methods**

Compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled as visible:

```
For each object in the scene do
Begin
  1. Determine those parts of the object whose view is unobstructed by
     other parts of it or any other object with respect to the viewing
     specification.
  2. Draw those parts in the object colour.
End
```

- Compare each object with all other objects to determine the visibility of the object parts.
- If there are n objects in the scene, complexity = \(O(n^2)\).
- Calculations are performed at the resolution in which the objects are defined (limited only by the computation hardware).
- The process is unrelated to the display resolution or to individual pixels in the image, so the result is applicable to different display resolutions.
- Display is more accurate but computationally more expensive compared with image-space methods, because step 1 is typically more complex, e.g. due to the possibility of intersection between surfaces.
- Suitable for scenes with a small number of objects that have simple relationships with each other.

2. **Image-space Methods (mostly used)**

Visibility is determined point by point at each pixel position on the projection plane.

```
For each pixel in the image do
Begin
  1. Determine the object closest to the viewer that is pierced by the
     projector through the pixel.
  2. Draw the pixel in the object colour.
End
```

- For each pixel, examine all n objects to determine the one closest to the viewer.
- If there are p pixels in the image, complexity depends on n and p (\(O(np)\)).
- Accuracy of the calculation is bounded by the display resolution.
- A change of display resolution requires re-calculation.

Application of Coherence in Visible-Surface Detection Methods:

- Making use of the results calculated for one part of the scene or image for other nearby parts.
- Coherence is the result of local similarity.
- As objects have continuous spatial extent, object properties vary smoothly within a small local region in the scene. Calculations can then be made incremental.

Types of coherence:

1. **Object Coherence:** Visibility of an object can often be decided by examining a circumscribing solid (which may be of simple form, e.g. a sphere or a polyhedron).
2. **Face Coherence:** Surface properties computed for one part of a face can be applied to adjacent parts after small incremental modification. (E.g. if the face is small, we can sometimes assume that if one part of the face is invisible to the viewer, the entire face is also invisible.)
3. **Edge Coherence:** The visibility of an edge changes only when it crosses another edge, so if one segment of a non-intersecting edge is visible, the entire edge is also visible.
4. **Scan-line Coherence:** Line or surface segments visible in one scan line are also likely to be visible in adjacent scan lines; consequently, the image of a scan line is similar to the image of adjacent scan lines.
5. **Area and Span Coherence:** A group of adjacent pixels in an image is often covered by the same visible object. This coherence is based on the assumption that a small enough region of pixels will most likely lie within a single polygon. It reduces the computation effort of searching for the polygons which contain a given screen area (region of pixels), as in some subdivision algorithms.
6. **Depth Coherence:** The depths of adjacent parts of the same surface are similar.
7. **Frame Coherence:** Pictures of the same scene at successive points in time are likely to be similar, despite small changes in objects and viewpoint, except near the edges of moving objects.

Most visible-surface detection methods make use of one or more of these coherence properties of a scene. To take advantage of regularities in a scene, constant relationships can often be established between objects and surfaces in the scene.

**11.1 Back-Face Detection**

In a solid object, there are surfaces which face the viewer (front faces) and surfaces which face away from the viewer (back faces). These back faces contribute approximately half of the total number of surfaces. Since we cannot see these surfaces anyway, we can save processing time by removing them before the clipping process with a simple test.

Each surface has a normal vector. If this vector points towards the centre of projection, the surface is a front face and can be seen by the viewer. If it points away from the centre of projection, the surface is a back face and cannot be seen by the viewer.

The test is very simple: suppose the z axis points towards the viewer. If the z component of the normal vector is negative, the surface is a back face; if the z component is positive, it is a front face.

Note that this technique only caters well for non-overlapping convex polyhedra. For other cases, where there are concave polyhedra or overlapping objects, we still need to apply other methods to determine where faces are partially or completely hidden by other objects (e.g. using the Depth-Buffer Method or the Depth-Sort Method).

**11.2 Depth-Buffer Method (Z-Buffer Method)**

This approach compares surface depths at each pixel position on the projection plane. Object depth is usually measured from the view plane along the z axis of the viewing system. The method requires two buffers: one is the image buffer and the other is called the z-buffer (or depth buffer). Each of these buffers has the same resolution as the image to be captured. As surfaces are processed, the image buffer stores the colour value of each pixel position and the z-buffer stores the depth value for each (x, y) position.

Algorithm:
1. Initially each pixel of the z-buffer is set to the maximum depth value (the depth of the back clipping plane).
2. The image buffer is set to the background colour.
3. Surfaces are rendered one at a time.
4. For the first surface, the depth value of each pixel is calculated.
5. If this depth value is smaller than the corresponding depth value in the z-buffer (i.e. it is closer to the viewpoint), both the depth value in the z-buffer and the colour value in the image buffer are replaced by the depth value and the colour value of this surface calculated at the pixel position.
6. Repeat steps 4 and 5 for the remaining surfaces.
7. After all the surfaces have been processed, each pixel of the image buffer represents the colour of a visible surface at that pixel.

- This method requires an additional buffer (compared with the Depth-Sort Method) and incurs the overhead of updating that buffer, so it is less attractive when only a few objects in the scene are to be rendered.
- Simple and does not require additional data structures.
- The z-value of a polygon can be calculated incrementally.
- No pre-sorting of polygons is needed.
- No object-object comparison is required.
- Can be applied to non-polygonal objects.
- Hardware implementations of the algorithm are available in some graphics workstations.
- For large images, the algorithm could be applied to, e.g., the four quadrants of the image separately, to reduce the requirement for a large additional buffer.

**11.3 Scan-Line Method**

In this method, as each scan line is processed, all polygon surfaces intersecting that line are examined to determine which are visible. Across each scan line, depth calculations are made for each overlapping surface to determine which is nearest to the view plane. When the visible surface has been determined, the intensity value for that position is entered into the image buffer.

*Figure: Scan lines crossing the projection of two surfaces, S₁ and S₂, in the view plane. Dashed lines indicate the boundaries of hidden surfaces.*

```
For each scan line do
Begin
  For each pixel (x,y) along the scan line do            ---- Step 1
  Begin
    z_buffer(x) = max_depth
    Image_buffer(x,y) = background_colour
  End
  For each polygon in the scene do                       ---- Step 2
  Begin
    For each pixel (x,y) along the scan line that is
    covered by the polygon do
    Begin
      2a. Compute the depth or z of the polygon at pixel location (x,y).
      2b. If z < z_buffer(x) then
            Set z_buffer(x) = z
            Set Image_buffer(x,y) = polygon's colour
    End
  End
End
```

- Step 2 is not efficient because not all polygons necessarily intersect the scan line.
- The depth calculation in 2a is not needed if only one polygon in the scene is mapped onto a segment of the scan line.
- To speed up the process, recall the basic idea of polygon filling: for each scan line crossing a polygon, the algorithm locates the intersection points of the scan line with the polygon edges. These intersection points are sorted from left to right, and we fill the pixels between each intersection pair. With a similar idea, we fill every scan line span by span. When polygons overlap on a scan line, we perform depth calculations at their edges to determine which polygon should be visible at which span. Any number of overlapping polygon surfaces can be processed with this method, and depth calculations are performed only where polygons overlap.
- We can take advantage of coherence along the scan lines as we pass from one scan line to the next: if the pattern of intersections of polygon edges with successive scan lines does not change, it is not necessary to redo the depth calculations. This works only if surfaces do not cut through or otherwise cyclically overlap each other. If cyclic overlap happens, we can divide the surfaces to eliminate the overlaps.
- The algorithm is applicable to non-polygonal surfaces.
- The memory requirement is less than that of the depth-buffer method.
- A lot of sorting is done on x-y coordinates and on depths.

**11.4 Depth-Sort Method**

1. Sort all surfaces according to their distances from the viewpoint.
2. Render the surfaces to the image buffer one at a time, starting from the farthest surface.
3. Surfaces close to the viewpoint will replace those which are far away.
4. After all surfaces have been processed, the image buffer stores the final image.

The basic idea of this method is simple. When there are only a few objects in the scene, this method can be very fast. However, as the number of objects increases, the sorting process can become very complex and time consuming.

Example: assume we are viewing along the z axis. The surface S with the greatest depth is compared to the other surfaces in the list to determine whether there are any overlaps in depth. If no depth overlap occurs, S can be scan converted, and this process is repeated for the next surface in the list. However, if a depth overlap is detected, we need to make some additional comparisons to determine whether the pair of surfaces should be reordered. The checks are as follows:

a. The bounding rectangles in the xy plane of the two surfaces do not overlap.
b. The surface S with greater depth is completely behind the overlapping surface relative to the viewing position.
c. The overlapping surface is completely in front of the surface S with greater depth relative to the viewing position.
d. The projections of the two surfaces onto the view plane do not overlap.

If any of the above tests passes, the surfaces do not need to be reordered.

**11.5 Binary Space Partitioning**

- Suitable for a static group of 3D polygons to be viewed from a number of viewpoints.
- Based on the observation that hidden-surface elimination of a polygon is guaranteed if all polygons on the far side of it from the viewer are painted first, then the polygon itself, then all polygons on the same side of it as the viewer.

1. The algorithm first builds the BSP tree:
   - a root polygon is chosen (arbitrarily) which divides the region into 2 half-spaces (2 nodes => front and back);
   - a polygon in the front half-space is chosen which divides that half-space into another 2 half-spaces;
   - the subdivision is repeated until each half-space contains a single polygon (leaf node of the tree);
   - the same is done for the back half-space of the polygon.
2. To display a BSP tree:
   - see whether the viewer is in the front or the back half-space of the root polygon;
   - if in the front half-space, first display the back child (subtree), then the root polygon itself, followed by its front child/subtree;
   - the algorithm is applied recursively to the BSP tree.

Discussion:
- Back-face removal is achieved by not displaying a polygon if the viewer is located in its back half-space.
- It is an object-space algorithm (sorting and intersection calculations are done in object-space precision).
- If the viewpoint changes, the BSP tree needs only minor re-arrangement.
- A new BSP tree is built if the scene changes.
- The algorithm displays polygons back to front (cf. depth-sort).

BSP Algorithm:

```
Procedure DisplayBSP(tree: BSP_tree)
Begin
  If tree is not empty then
    If viewer is in front of the root then
    Begin
      DisplayBSP(tree.back_child)
      displayPolygon(tree.root)
      DisplayBSP(tree.front_child)
    End
    Else
    Begin
      DisplayBSP(tree.front_child)
      displayPolygon(tree.root)
      DisplayBSP(tree.back_child)
    End
End
```

**11.6 Area Subdivision Algorithms**

The area-subdivision method takes advantage of area coherence in a scene by locating those view areas that represent part of a single surface.
The total viewing area is successively divided into smaller and smaller rectangles until each small area is simple, i.e. it is a single pixel, or it is covered wholly by a part of a single visible surface, or it is covered by no surface at all.

The procedure to determine whether an area should be subdivided into smaller rectangles is:

1. First classify each of the surfaces according to its relation with the area:
   - Surrounding surface: a single surface that completely encloses the area.
   - Overlapping surface: a single surface that is partly inside and partly outside the area.
   - Inside surface: a single surface that is completely inside the area.
   - Outside surface: a single surface that is completely outside the area.

   To improve the speed of classification, we can make use of the bounding rectangles of surfaces for early confirmation or rejection that a surface belongs to a given type.

2. Check the result from step 1; if any of the following conditions is true, no subdivision of this area is needed:
   a. All surfaces are outside the area.
   b. Only one inside, overlapping or surrounding surface is in the area.
   c. A surrounding surface obscures all other surfaces within the area boundaries.

   For cases b and c, the colour of the area can be determined from that single surface.

**11.7 Octree Methods**

In these methods, octree nodes are projected onto the viewing surface in a front-to-back order. Any surfaces toward the rear of the front octants (0,1,2,3) or in the back octants (4,5,6,7) may be hidden by the front surfaces.

With this numbering method (0,1,2,3,4,5,6,7), nodes representing octants 0,1,2,3 for the entire region are visited before the nodes representing octants 4,5,6,7. Similarly, the nodes for the front four sub-octants of octant 0 are visited before the nodes for its four back sub-octants. When a colour is encountered in an octree node, the corresponding pixel in the frame buffer is painted only if no previous colour has been loaded into the same pixel position.

In most cases, both a front and a back octant must be considered in determining the correct colour values for a quadrant. But:
- If the front octant is homogeneously filled with some colour, we do not process the back octant.
- If the front is empty, it is necessary only to process the rear octant.
- If the front octant has heterogeneous regions, it has to be subdivided and the sub-octants are handled recursively.

**11.8 Ray-Casting Method**

The intensity of a pixel in an image is due to a ray of light that has been reflected from some object in the scene and pierces the centre of the pixel. So, visibility of surfaces can be determined by tracing a ray of light from the centre of projection (the viewer's eye) to objects in the scene (backward tracing).

⇒ Find out which objects the ray of light intersects.
⇒ Then determine which one of these objects is closest to the viewer.
⇒ Then set the pixel colour to that of the closest object.

The ray-casting approach is an effective visibility-detection method for scenes with curved surfaces, particularly spheres.

Speeding up the intersection calculation in ray casting: for a 1024×1024-pixel image and 100 objects in the scene, the total number of object intersection calculations is about 100 million.

1. Bounding Volume Approach
   - Test for intersection of the ray with the object's bounding volume.
   - Typical bounding volumes are spheres, ellipsoids and rectangular solids. The intersection calculation for these bounding volumes is usually faster than for the displayed object itself.
   - If the ray does not intersect the bounding volume, no further processing of the object is needed.
   - Other bounding volumes include convex polyhedra formed by a set of planes.

2. Using Hierarchies
   - If a parent bounding volume does not intersect a ray, none of its children's bounding volumes intersect the ray, so they need not be processed.
   - This reduces the number of intersection calculations.

3. Space Partitioning Approach
   - Partition the space into a regular grid of equal-size volumes.
   - Each volume has associated with it a list of objects which are contained within or intersect the volume.
   - The intersection calculation is applied only to those objects that are contained within the partitions through which the ray passes.
   - Objects lying within partitions which do not intersect the ray are not processed.

**11.9 Summary and Comparison**

The most appropriate algorithm to use depends on the scene:
- Depth-sort is particularly suited to scenes with objects spread out along the z-axis and/or with a small number of objects, so that surfaces rarely overlap in depth.
- Scan-line and area-subdivision algorithms are suited to scenes where objects are spread out horizontally and/or scenes with a small number of objects (up to about several thousand surfaces).
- Z-buffer and subdivision algorithms perform best for scenes with fewer than a few thousand surfaces.
- The octree method is particularly good because it does not require any pre-sorting or intersection calculations.
- If parallel processing hardware is available, ray casting is a good choice (each processor handles a ray).
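To make the depth-buffer comparison of section 11.2 concrete, here is a minimal C++ sketch. The `Fragment` structure, the flat buffer layout and the function name are illustrative assumptions rather than part of the notes; the logic is exactly steps 4-5 of the algorithm in section 11.2.

```cpp
#include <cstddef>
#include <vector>

// One rasterised sample of a surface at pixel (x, y) with depth z
// (smaller z = closer to the viewer, with depth measured from the
// view plane).
struct Fragment { int x, y; float z; unsigned colour; };

// Depth-buffer test: a fragment is kept only if it is closer than the
// depth already stored for its pixel. zBuffer must be initialised to
// the maximum depth and image to the background colour beforehand.
void zBufferRender(const std::vector<Fragment>& fragments,
                   std::vector<float>& zBuffer,
                   std::vector<unsigned>& image,
                   int width)
{
    for (const Fragment& f : fragments) {
        const std::size_t idx =
            static_cast<std::size_t>(f.y) * width + f.x;
        if (f.z < zBuffer[idx]) {    // closer than the stored surface
            zBuffer[idx] = f.z;      // replace the stored depth
            image[idx]   = f.colour; // and the stored colour
        }
    }
}
```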
{"Source-Url": "http://www.cs.cityu.edu.hk/~helena/cs31622000B/Chap11Notes.pdf", "len_cl100k_base": 4172, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 18157, "total-output-tokens": 4696, "length": "2e12", "weborganizer": {"__label__adult": 0.0003979206085205078, "__label__art_design": 0.002685546875, "__label__crime_law": 0.0006604194641113281, "__label__education_jobs": 0.001544952392578125, "__label__entertainment": 0.00015866756439208984, "__label__fashion_beauty": 0.00024509429931640625, "__label__finance_business": 0.0002639293670654297, "__label__food_dining": 0.0003986358642578125, "__label__games": 0.00141143798828125, "__label__hardware": 0.0094146728515625, "__label__health": 0.0004165172576904297, "__label__history": 0.0005788803100585938, "__label__home_hobbies": 0.0002624988555908203, "__label__industrial": 0.0012941360473632812, "__label__literature": 0.00039124488830566406, "__label__politics": 0.0002872943878173828, "__label__religion": 0.0006399154663085938, "__label__science_tech": 0.311767578125, "__label__social_life": 0.0001157522201538086, "__label__software": 0.0626220703125, "__label__software_dev": 0.603515625, "__label__sports_fitness": 0.0002741813659667969, "__label__transportation": 0.000576019287109375, "__label__travel": 0.0002341270446777344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19611, 0.0112]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19611, 0.86309]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19611, 0.92678]], "google_gemma-3-12b-it_contains_pii": [[0, 2342, false], [2342, 4620, null], [4620, 7218, null], [7218, 9204, null], [9204, 10685, null], [10685, 12351, null], [12351, 14231, null], [14231, 15738, null], [15738, 17504, null], [17504, 19611, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2342, true], [2342, 4620, null], [4620, 7218, null], [7218, 9204, null], [9204, 10685, null], [10685, 12351, null], [12351, 14231, null], [14231, 15738, null], [15738, 17504, null], [17504, 19611, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19611, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19611, null]], "pdf_page_numbers": [[0, 2342, 1], [2342, 4620, 2], [4620, 7218, 3], [7218, 9204, 4], [9204, 10685, 5], [10685, 12351, 6], [12351, 14231, 7], [14231, 15738, 8], [15738, 17504, 9], [17504, 19611, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19611, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
e81320c18064b46c9d37dd6c94342d685b4401a5
Computer Programmers (O*NET 15-1021.00) **Significant Points** - Employment growth will be considerably slower than that of other computer specialists, due to the spread of pre-packaged software solutions. - Three out of 5 computer programmers held at least a bachelor’s degree in 2000. - Prospects should be best for college graduates with knowledge of a variety of programming languages and tools; those with less formal education or its equivalent in work experience should face strong competition for programming jobs. **Nature of the Work** Computer programmers write, test, and maintain the detailed instructions, called programs, that computers must follow to perform their functions. They also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming—advanced computing technologies and sophisticated new languages and programming tools—have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization. In this occupational statement, computer programmer refers to individuals whose main job function is programming; this group has a wide range of responsibilities and educational backgrounds. Computer programs tell the computer what to do, such as which information to identify and access, how to process it, and what equipment to use. Programs vary widely depending upon the type of information to be accessed or generated. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on board an aircraft for pilots training in a flight simulator. Although simple programs can be written in a few hours, programs that use complex mathematical formulas, whose solutions can only be approximated, or that draw data from many existing systems, may require more than a year of work. In most cases, several programmers work together as a team under a senior programmer’s supervision. Programmers write programs according to the specifications determined primarily by computer software engineers and system analysts. (Separate statements on computer software engineers and systems analysts, computer scientists, and database administrators appear elsewhere in the *Handbook.* ) After the design process is complete, it is the job of the programmer to convert that design into a logical series of instructions that the computer can follow. They then code these instructions in a conventional programming language, such as COBOL; an artificial intelligence language, such as Prolog; or one of the most advanced object-oriented languages such as Java, C++, or Smalltalk. Different programming languages are used depending on the purpose of the program. COBOL, for example, is commonly used for business applications, whereas Fortran (short for “formula translation”) is used in science and engineering. C++ is widely used for both scientific and business applications. Programmers generally know more than one programming language; and since many languages are similar, they often can learn new languages relatively easily. In practice, programmers often are referred to by the language they know, such as Java programmers, or the type of function they perform or environment in which they work, such as database programmers, mainframe programmers, or Internet programmers. Many programmers update, repair, modify, and expand existing programs. 
When making changes to a section of code, called a routine, programmers need to make other users aware of the task the routine is to perform. They do this by inserting comments in the coded instructions, so others can understand the program. Many programmers use computer-assisted software engineering (CASE) tools to automate much of the coding process. These tools enable a programmer to concentrate on writing the unique parts of the program, because the tools automate various pieces of the program being built. CASE tools generate whole sections of code automatically, rather than line by line. This also yields more reliable and consistent programs and increases programmers' productivity by eliminating some routine steps.

Programmers test a program by running it, to ensure the instructions are correct and it produces the desired information. If errors do occur, the programmer must make the appropriate change and recheck the program until it produces the correct results. This process is called debugging. Programmers may continue to fix these problems throughout the life of a program. Programmers working in a mainframe environment may prepare instructions for a computer operator who will run the program. (A separate statement on computer operators appears elsewhere in the *Handbook*.) They also may contribute to a manual for users.

Programmers often are grouped into two broad types—applications programmers and systems programmers. Applications programmers write programs to handle a specific job, such as a program to track inventory, within an organization. They may also revise existing packaged software.
*Systems programmers*, on the other hand, write programs to maintain and control computer systems software, such as operating systems, networked systems, and database systems. These workers make changes in the sets of instructions that determine how the network, workstations, and central processing unit of the system handle the various jobs they have been given and how they communicate with peripheral equipment, such as terminals, printers, and disk drives. Because of their knowledge of the entire computer system, systems programmers often help applications programmers determine the source of problems that may occur with their programs.

Programmers in software development companies may work directly with experts from various fields to create software—either programs designed for specific clients or packaged software for general use—ranging from games and educational software to programs for desktop publishing, financial planning, and spreadsheets. Much of this type of programming is in the preparation of packaged software, which comprises one of the most rapidly growing segments of the computer services industry.

In some organizations, particularly small ones, workers commonly known as *programmer-analysts* are responsible for both the systems analysis and the actual programming work. (A more detailed description of the work of programmer-analysts is presented in the statement on systems analysts, computer scientists, and database administrators elsewhere in the *Handbook*.)

**Working Conditions**

Programmers generally work in offices in comfortable surroundings. Many programmers may work long hours or weekends, to meet deadlines or fix critical problems that occur during off hours. Given the technology available, telecommuting is becoming common for a wide range of computer professionals—including computer programmers. As computer networks expand, more programmers are able to connect to a customer's computer system remotely to make corrections or fix problems, using modems, e-mail, and the Internet.

Like other workers who spend long periods of time in front of a computer terminal typing at a keyboard, programmers are susceptible to eyestrain, back discomfort, and hand and wrist problems, such as carpal tunnel syndrome.

**Employment**

Computer programmers held about 585,000 jobs in 2000. Programmers are employed in almost every industry, but the largest concentration is in the computer and data processing services industry, which includes firms that write and sell software. Large numbers of programmers can also be found working for firms that provide engineering and management services, telecommunications companies, manufacturers of computer and office equipment, financial institutions, insurance carriers, educational institutions, and government agencies.

A large number of computer programmers are employed on a temporary or contract basis or work as independent consultants, as companies demand expertise with new programming languages or specialized areas of application. Rather than hiring programmers as permanent employees and then laying them off after a job is completed, employers can contract with temporary help agencies, consulting firms, or directly with programmers themselves.
A marketing firm, for example, may only require the services of several programmers to write and debug the software necessary to get a new customer resource management system running. This practice also enables companies to bring in people with a specific set of skills—usually in one of the latest technologies—as it applies to their business needs. Bringing in an independent contractor or consultant with a certain level of experience in a new or advanced programming language, for example, enables an establishment to complete a particular job without having to retrain existing workers. Such jobs may last anywhere from several weeks to a year or longer. There were 22,000 self-employed computer programmers in 2000.

**Training, Other Qualifications, and Advancement**

While there are many training paths available for programmers, mainly because employers' needs are so varied, the level of education and experience employers seek has been rising, due to the growing number of qualified applicants and the specialization involved with most programming tasks. Bachelor's degrees are commonly required, although some programmers may qualify for certain jobs with 2-year degrees or certificates. Employers are primarily interested in programming knowledge, and computer programmers are able to get certified in a language such as C++ or Java. College graduates who are interested in changing careers or developing an area of expertise also may return to a 2-year community college or technical school for additional training. In the absence of a degree, substantial specialized experience or expertise may be needed. Even with a degree, employers appear to be placing more emphasis on previous experience, for all types of programmers.

*Computer programmers write programs according to the specifications determined by software engineers or systems analysts.*

About 3 out of 5 computer programmers had a bachelor's degree or higher in 2000 (table 1). Of these, some hold a degree in computer science, mathematics, or information systems, whereas others have taken special courses in computer programming to supplement their study in fields such as accounting, inventory control, or other areas of business. As the level of education and training required by employers continues to rise, this proportion should increase in the future.

Required skills vary from job to job, but the demand for various skills generally is driven by changes in technology. Employers using computers for scientific or engineering applications usually prefer college graduates who have degrees in computer or information science, mathematics, engineering, or the physical sciences. Graduate degrees in related fields are required for some jobs. Employers who use computers for business applications prefer to hire people who have had college courses in management information systems (MIS) and business and who possess strong programming skills. Although knowledge of traditional languages still is important, increasing emphasis is placed on newer, object-oriented programming languages and tools, such as C++ and Java. Additionally, employers are seeking persons familiar with fourth- and fifth-generation languages that involve graphic user interface (GUI) and systems programming. Employers also prefer applicants who have general business skills and experience related to the operations of the firm. Students can improve their employment prospects by participating in a college work-study program or by undertaking an internship.
Most systems programmers hold a 4-year degree in computer science. Extensive knowledge of a variety of operating systems is essential. This includes being able to configure an operating system to work with different types of hardware and adapting the operating system to best meet the needs of a particular organization. Systems programmers also must be able to work with database systems, such as DB2, Oracle, or Sybase.

When hiring programmers, employers look for people with the necessary programming skills who can think logically and pay close attention to detail. The job calls for patience, persistence, and the ability to work on exacting analytical work, especially under pressure. Ingenuity and imagination also are particularly important when programmers design solutions and test their work for potential failures. The ability to work with abstract concepts and to do technical analysis is especially important for systems programmers, because they work with the software that controls the computer's operation. Because programmers are expected to work in teams and interact directly with users, employers want programmers who are able to communicate with nontechnical personnel.

Entry-level or junior programmers may work alone on simple assignments after some initial instruction, or on a team with more experienced programmers. Either way, beginning programmers generally must work under close supervision. Because technology changes so rapidly, programmers must continuously update their training by taking courses sponsored by their employer or software vendors.

For skilled workers who keep up to date with the latest technology, the prospects for advancement are good. In large organizations, programmers may be promoted to lead programmer and be given supervisory responsibilities. Some applications programmers may move into systems programming after they gain experience and take courses in systems software. With general business experience, programmers may become programmer analysts or systems analysts or be promoted to a managerial position. Other programmers, with specialized knowledge and experience with a language or operating system, may work in research and development areas, such as multimedia or Internet technology. As employers increasingly contract out programming jobs, more opportunities should arise for experienced programmers with expertise in a specific area to work as consultants.

Technical or professional certification is a way to demonstrate a level of competency or quality. In addition to language-specific certificates that a programmer can obtain, product vendors or software firms also offer certification and may require professionals who work with their products to be certified. Voluntary certification also is available through other organizations. Professional certification may provide a job seeker a competitive advantage.

**Job Outlook**

Employment of programmers is expected to grow about as fast as the average for all occupations through 2010. Jobs for both systems and applications programmers should be most plentiful in data processing service firms, software houses, and computer consulting businesses. These types of establishments are part of computer and data processing services, which is projected to be the fastest growing industry in the economy over the 2000-10 period. As organizations attempt to control costs and keep up with changing technology, they will need programmers to assist in conversions to new computer languages and systems.
In addition, numerous job openings will result from the need to replace programmers who leave the labor force or transfer to other occupations such as manager or systems analyst. Employment of programmers, however, is expected to grow much slower than that of other computer specialists. With the rapid gains in technology, sophisticated computer software now has the capability to write basic code, eliminating the need for more programmers to do this routine work. The consolidation and centralization of systems and applications, developments in packaged software, advanced programming languages and tools, and the growing ability of users to design, write, and implement more of their own programs means more of the programming functions can be transferred to other types of workers. As the level of technological innovation and sophistication increases, programmers should continue to face increasing competition from programming businesses overseas where much routine work can be contracted out at a lower cost. Nevertheless, employers will continue to need programmers who have strong technical skills and who understand an employer’s business and its programming needs. This will mean that programmers will need to keep up with changing programming languages and techniques. Given the importance of networking and the expansion of client/server environments and web-based environments, organizations will look for programmers who can support data communications and help implement electronic commerce and intranet strategies. Demand for programmers with strong object-oriented programming capabilities and technical specialization in areas such as client/server programming, multimedia technology, and graphic user interface (GUI), should arise from the expansion of intranets, extranets, and Internet applications. Programmers also will be needed to create and maintain expert systems and embed these technologies in more and more products. As programming tasks become increasingly sophisticated and an additional level of skill and experience is demanded by employers, graduates of 2-year programs and people with less than a 2-year degree or its equivalent in work experience should face strong competition for programming jobs. Competition for entry-level positions, however, can affect applicants with a bachelor’s degree. Prospects should be best for college graduates with knowledge of, and experience working with, a variety of programming languages and tools—including C++ and other object-oriented languages like Java, as well as newer, domain-specific languages that apply to computer networking, data base management, and Internet application development. Obtaining vendor or language specific certification also can provide a competitive edge. Because demand fluctuates with employers’ needs, job seekers should keep up to date with the latest skills and technologies. Individuals who want to become programmers can enhance their prospects by combining the appropriate formal training with practical work experience. **Earnings** Median annual earnings of computer programmers were $57,590 in 2000. The middle 50 percent earned between $44,850 and $74,500 a year. The lowest 10 percent earned less than $35,020; the highest 10 percent earned more than $93,210. 
Median annual earnings in the industries employing the largest numbers of computer programmers in 2000 were:

| Industry | Median annual earnings |
| --- | --- |
| Personnel supply services | $65,780 |
| Professional and commercial equipment | $63,780 |
| Computer and data processing services | $61,010 |
| Commercial banks | $60,180 |
| Management and public relations | $57,120 |

According to the National Association of Colleges and Employers, starting salary offers for graduates with a bachelor's degree in computer programming averaged $48,602 a year in 2001. According to Robert Half International, average annual starting salaries in 2001 ranged from $58,500 to $90,000 for applications development programmers/developers, and from $54,000 to $77,750 for software development programmers/analysts. Average starting salaries for Internet programmers/analysts ranged from $56,500 to $84,000.

**Related Occupations**

Other professional workers who deal with data and detail include computer software engineers; systems analysts, computer scientists, and database administrators; statisticians; mathematicians; engineers; financial analysts and personal financial advisors; accountants and auditors; actuaries; and operations research analysts.

**Sources of Additional Information**

State employment service offices can provide information about job openings for computer programmers. Municipal chambers of commerce are other sources of information on an area's largest employers.

For information about certification as a computing professional, contact:

- Association for Computing Machinery (ACM), 1515 Broadway, New York, NY 10036. Internet: [http://www.acm.org](http://www.acm.org)
- National Workforce Center for Emerging Technologies, 3000 Landerholm Circle SE., Bellevue, WA 98007.

---

**Computer Software Engineers**

(O*NET 15-1031.00, 15-1032.00)

**Significant Points**

- Computer software engineers are projected to be the fastest growing occupation over the 2000-10 period.
- Very favorable opportunities are expected for college graduates with at least a bachelor's degree in computer engineering or computer science and practical experience working with computers.
- Computer software engineers must continually strive to acquire new skills as computer technology changes rapidly.

**Nature of the Work**

The explosive impact of computers and information technology on our everyday lives has generated a need to design and develop new computer software systems and to incorporate new technologies in a rapidly growing range of applications. The tasks performed by workers known as computer software engineers evolve rapidly, reflecting new areas of specialization or changes in technology, as well as the preferences and practices of employers. Computer software engineers apply the principles and techniques of computer science, engineering, and mathematical analysis to the design, development, testing, and evaluation of the software and systems that enable computers to perform their many applications. (A separate statement on computer hardware engineers appears elsewhere in the *Handbook*.)

Software engineers working in applications or systems development analyze users' needs and design, create, and modify general computer applications software or systems.
Software engineers can be involved in the design and development of many types of software, including software for operating systems, network distribution, and compilers, which convert programs for faster processing. In programming, or coding, software engineers instruct a computer, line by line, how to perform a function. They also solve technical problems that arise. Software engineers must possess strong programming skills, but are more concerned with developing algorithms and analyzing and solving programming problems than with actually writing code. (A separate statement on computer programmers appears elsewhere in the *Handbook*.)

**Computer applications software engineers** analyze users' needs and design, create, and modify general computer applications software or specialized utility programs. Different programming languages are used, depending on the purpose of the program. The programming languages most often used are C, C++, and Java, with Fortran and COBOL used less commonly. Some software engineers develop both packaged systems and systems software or create customized applications.

**Computer systems software engineers** coordinate the construction and maintenance of a company's computer systems, and plan their future growth. Working with a company, they coordinate each department's computer needs—ordering, inventory, billing, and payroll recordkeeping, for example—and make suggestions about its technical direction. They also might set up the company's intranets, networks that link computers within the organization and ease communication. Systems software engineers work for companies that configure, implement, and install complete computer systems. They may be members of the marketing or sales staff, where they serve as the
{"Source-Url": "http://www.csstudents.gwu.edu:80/LaborDept2.pdf", "len_cl100k_base": 4846, "olmocr-version": "0.1.50", "pdf-total-pages": 4, "total-fallback-pages": 0, "total-input-tokens": 11947, "total-output-tokens": 5267, "length": "2e12", "weborganizer": {"__label__adult": 0.0011081695556640625, "__label__art_design": 0.0008616447448730469, "__label__crime_law": 0.0013589859008789062, "__label__education_jobs": 0.478271484375, "__label__entertainment": 0.0002830028533935547, "__label__fashion_beauty": 0.0006451606750488281, "__label__finance_business": 0.005817413330078125, "__label__food_dining": 0.0009675025939941406, "__label__games": 0.0022602081298828125, "__label__hardware": 0.00212860107421875, "__label__health": 0.0011701583862304688, "__label__history": 0.00042557716369628906, "__label__home_hobbies": 0.00038743019104003906, "__label__industrial": 0.0009679794311523438, "__label__literature": 0.000732421875, "__label__politics": 0.0008215904235839844, "__label__religion": 0.0011606216430664062, "__label__science_tech": 0.00403594970703125, "__label__social_life": 0.0005602836608886719, "__label__software": 0.01371002197265625, "__label__software_dev": 0.47998046875, "__label__sports_fitness": 0.0009298324584960938, "__label__transportation": 0.0010156631469726562, "__label__travel": 0.00044155120849609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26532, 0.03112]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26532, 0.88622]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26532, 0.92994]], "google_gemma-3-12b-it_contains_pii": [[0, 6987, false], [6987, 12400, null], [12400, 19731, null], [19731, 26532, null]], "google_gemma-3-12b-it_is_public_document": [[0, 6987, true], [6987, 12400, null], [12400, 19731, null], [19731, 26532, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26532, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26532, null]], "pdf_page_numbers": [[0, 6987, 1], [6987, 12400, 2], [12400, 19731, 3], [19731, 26532, 4]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26532, 0.08235]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
e397b8f83965ad9dc33c8ff7f71022a010d0c336
This assignment is worth 50 marks representing 8.33% of your total course grade.

Model answers with brief instructions for marking

• 25% of the total mark for each step of a solution should be deducted if detailed comments are absent.

1. (2 marks) Work out the time complexity $T(n)$ of the following piece of code in terms of the number of operations:

```
for ( int i = 1; i < n; i *= 3 ) {
    for ( int j = 0; j < n; j += 2 ) {
        for ( int k = 1; k < n; k *= n ) {
            // constant number C of operations
        }
    }
}
```

(0.25 marks) The inner loop is executed only once (for $k = 1$), so it contributes $C$ operations. (0.5 marks) The middle loop is executed $\nu \approx n/2$ times, where $2(\nu - 1) < n \leq 2\nu$. (1 mark) The outer loop is executed $m$ times, where $3^{m-1} < n \leq 3^m$, or $m = \log_3 n$. (0.25 marks) Thus $T(n) = \frac{1}{2}Cn \log_3 n$ operations.

More accurate evaluations such as $\nu = \lfloor n/2 \rfloor$, where $\lfloor z \rfloor$ is the closest integer smaller than or equal to $z$, or $\lfloor \log_3 n \rfloor \leq m < 1 + \lfloor \log_3 n \rfloor$, or $C\lceil n/2 \rceil \lfloor \log_3 n \rfloor \leq T(n) < C\lceil n/2 \rceil (\lfloor \log_3 n \rfloor + 1)$, are not expected and should receive just the same marks. It is quite legal to solve in terms of Big-Oh provided that all steps of the solution are commented so that the processing time is evaluated in terms of the number of operations.

2. (2 marks) Work out the time complexity $T(n)$ of the following piece of code in terms of the number of operations:

```
for ( int i = 0; i < n; i += 3 ) {
    for ( int j = 2; j < n; j = ( j * j ) ) {
        // constant number C of operations
    }
    for ( int k = n; k > 0; k /= 2 ) {
        // constant number C of operations
    }
}
```

(1 mark) The loop variable $j$ in the first inner loop changes as $2, 2^2, 2^4, \ldots$, that is, as $2^{2^t}$ for $t = 0, 1, 2, \ldots$, so this loop is executed $m$ times where $2^{2^{m-1}} < n \leq 2^{2^m}$, or $m \approx \log_2 \log_2 n$. Thus, this loop contributes about $C \log_2 \log_2 n$ operations. (0.75 marks) The second inner loop is executed $\mu$ times where $2^{\mu - 1} \leq n < 2^{\mu}$, so that $\log_2 n < \mu \leq \log_2 n + 1$. Thus, this loop contributes about $C \log_2 n$ operations. (0.25 marks) The outer loop is executed $n/3$ times, so that the time complexity is $T(n) = \frac{1}{3}Cn (\log_2 \log_2 n + \log_2 n)$.

More accurate evaluations are not expected and should receive just the same marks. It is quite legal to solve in terms of Big-Oh provided that all steps of the solution are commented so that the processing time is evaluated in terms of the number of operations.

3. (8 marks) Define formally and prove basic arithmetic relationships for the "Big Omega" notation, namely, scaling, transitivity, the rule of sums, and the rule of products.

**Hint:** Reformulate Lemmas 1.16–1.19 from the course textbook, p. 18. Solutions need not follow exactly the model answers below: the markers should check logical rather than textual correctness.

**Transitivity.** If \( h \) is \( \Omega(g) \) and \( g \) is \( \Omega(f) \), then \( h \) is \( \Omega(f) \).

**Proof.** By the definition of \( \Omega \), there exist nonnegative integers \( n_0 \) and \( n_0' \) and positive real constants \( c \) and \( c' \) such that \( h(n) \geq c\, g(n) \) for all \( n \geq n_0 \) and \( g(n) \geq c' f(n) \) for all \( n \geq n_0' \). Therefore, \( h(n) \geq C f(n) \) for all \( n \geq N_0 \), where \( C = c \cdot c' \) and \( N_0 = \max\{n_0, n_0'\} \), what is to be demonstrated.

**Rule of sums.** If \( g_1 \) is \( \Omega(f_1) \) and \( g_2 \) is \( \Omega(f_2) \), then \( g_1 + g_2 \) is \( \Omega(\max\{f_1, f_2\}) \).
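The model answers for the remaining rules are analogous; one possible proof of the rule of sums, in the same style as the transitivity proof above (assuming, as usual for running-time functions, that all functions involved are nonnegative), is:

\[ g_1(n) + g_2(n) \;\geq\; c_1 f_1(n) + c_2 f_2(n) \;\geq\; \min\{c_1, c_2\}\bigl(f_1(n) + f_2(n)\bigr) \;\geq\; \min\{c_1, c_2\}\,\max\{f_1(n), f_2(n)\} \]

for all \( n \geq \max\{n_1, n_2\} \), where \( g_1(n) \geq c_1 f_1(n) \) for \( n \geq n_1 \) and \( g_2(n) \geq c_2 f_2(n) \) for \( n \geq n_2 \); hence \( g_1 + g_2 \) is \( \Omega(\max\{f_1, f_2\}) \).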
**Rule of products.** If \( g_1 \) is \( \Omega(f_1) \) and \( g_2 \) is \( \Omega(f_2) \), then \( g_1 g_2 \) is \( \Omega(f_1 f_2) \).

4. (3 marks) Processing time \( T(n) \) for the algorithms below depends on the problem size \( n \). Find the dominant terms having the steepest increase in \( n \) and determine the "Big-Theta" complexity of each algorithm:

(a) \( T(n) = 100n^2 + n^3 + 0.003n^5 \)

(b) \( T(n) = 100n^2 \log_{128} n + n^2 \log_2 n \)

**Hint:** \( \log_a x = \log_a b \cdot \log_b x \) for \( a > 0, b > 0, \) and \( x > 0 \).

(a) (0.5 marks) \( T(n) = n^5 \left( 0.003 + \frac{1}{n^2} + \frac{100}{n^3} \right) \), so that the dominant term is \( 0.003n^5 \). (1 mark) Thus, \( 0.003n^5 < T(n) \leq 101.003n^5 \) for \( n \geq 1 \), so that \( T(n) \) is \( \Theta(n^5) \).

(b) (0.5 marks) Both terms are dominant because \( \log_2 n = \log_2 128 \cdot \log_{128} n = 7 \log_{128} n \), so that \( T(n) = 107 n^2 \log_{128} n \) (but this proof is not expected). (1 mark) Thus, \( 100 n^2 \log_{128} n \leq T(n) \leq 107 n^2 \log_{128} n \) for \( n \geq 1 \), so that \( T(n) \) is \( \Theta(n^2 \log n) \).

5. (5 marks) Prove that $T(n) = 0.1n^2 \log_2 n + 500n$ is $\Omega(n^2)$ and $O(n^{2+\varepsilon})$ where $\varepsilon > 0$ is an arbitrarily small positive constant.

(a) (1 mark) $T(n)$ is $\Omega(n^2)$ because $T(n) = 0.1n^2 \log_2 n + 500n \geq 0.1n^2$ for all $n \geq 2$.

(b) (4 marks) $T(n)$ is $O(n^{2+\varepsilon})$ because

- (2 marks out of 4) by the Limit Rule $\lim_{n \to \infty} \frac{T(n)}{n^{2+\varepsilon}} = 0$, i.e.
$$ \lim_{n \to \infty} \frac{0.1n^2 \log_2 n + 500n}{n^{2+\varepsilon}} = \lim_{n \to \infty} \frac{0.1 \log_2 n}{n^{\varepsilon}} + \lim_{n \to \infty} \frac{500}{n^{1+\varepsilon}} = 0 $$
- (0.5 marks out of 4) For $n \to \infty$, the last term $\frac{500}{n^{1+\varepsilon}}$ tends to zero.
- (1.5 marks out of 4) By L'Hôpital's rule of calculus, for $x \to \infty$ the first term $\frac{0.1 \log_2 x}{x^{\varepsilon}}$ also tends to zero:
$$ \lim_{x \to \infty} \frac{\log_2 x}{x^{\varepsilon}} = \lim_{x \to \infty} \frac{\log_2 e \,/\, x}{\varepsilon x^{\varepsilon - 1}} = \lim_{x \to \infty} \frac{\log_2 e}{\varepsilon x^{\varepsilon}} = 0 $$
where $e = 2.71828 \ldots$ is the base of the natural logarithms.

6. (5 marks) Suggest which of the two software packages, A and B, of the same price should be bought to maintain large databases having each up to $10^9 \approx 2^{30}$ records. As was found empirically, the average time to process $n$ records with A and B is $T_A(n) = 0.1n \log_2 n$ milliseconds and $T_B(n) = 2.5n$ milliseconds, respectively. Decide which package is better in the "Big-Oh" sense, work out exact conditions, in terms of the database size $n$, when this package outperforms the other, and recommend the best choice in your case.

- (1 mark) In the "Big-Oh" sense, the package B is better (linear vs. $n \log n$ time complexity).
- (3 marks) The package B begins to outperform the package A when $T_B(n) \leq T_A(n)$, that is, when $2.5n \leq 0.1n \log_2 n$, or $\log_2 n \geq 25$, or $n \geq 2^{25}$.
- (1 mark) Therefore, to maintain the databases of size $2^{30}$, the package B is the best choice.

7. (5 marks) Derive a closed-form formula for $T(n)$ by solving the recurrence relation
$$ nT(n) = (n + 1)T(n - 1) + c $$
with the base condition $T(0) = 0$.

**Hint:** you may need to prove, e.g. by mathematical induction, that \( \frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \ldots + \frac{1}{n(n+1)} = \frac{n}{n+1} \).
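Before the formal telescoping in the model answer below, the claimed closed form $T(n) = cn$ can be sanity-checked numerically (a quick sketch, not part of the model answer; $c$ is taken as 1):

```python
# Iterate the recurrence n*T(n) = (n + 1)*T(n - 1) + c from T(0) = 0
# and compare against the claimed closed form T(n) = c*n (here c = 1).
c = 1.0
T = 0.0  # T(0) = 0
for n in range(1, 11):
    T = ((n + 1) * T + c) / n   # the recurrence, solved for T(n)
    print(n, T, c * n)          # the last two columns agree for every n
```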
- **(1.5 marks)** By dividing both sides of the given recurrence by \( n(n+1) \), the given recurrence is represented in the following form:
\[ \frac{T(n)}{n+1} = \frac{T(n-1)}{n} + \frac{c}{n(n+1)} \]
which is more convenient for telescoping.
- **(2 marks)** The telescoping (note that an explicit system of its equations is not expected and should receive the same marks) yields the closed-form formula
\[ \frac{T(n)}{n+1} = \frac{T(0)}{1} + c \left( \frac{1}{1 \cdot 2} + \frac{1}{2 \cdot 3} + \ldots + \frac{1}{n(n+1)} \right) = c\, \frac{n}{n+1} \]
so that \( T(n) = cn \).
- **(1.5 marks)** Mathematical induction can be used to prove that the sum in parentheses is equal to \( \frac{n}{n+1} \).
  - (0.5 marks out of 1.5) The base case holds for \( n = 1 \): \( \frac{1}{1 \cdot 2} = \frac{1}{2} \).
  - (1 mark out of 1.5) By the induction hypothesis, let the relationship \( S_n = \frac{1}{1 \cdot 2} + \ldots + \frac{1}{n(n+1)} = \frac{n}{n+1} \) hold for \( n = k - 1 \), i.e. \( S_{k-1} = \frac{k-1}{k} \). Then
\[ S_k = S_{k-1} + \frac{1}{k(k+1)} = \frac{k-1}{k} + \frac{1}{k(k+1)} = \frac{(k-1)(k+1) + 1}{k(k+1)} = \frac{k^2}{k(k+1)} = \frac{k}{k+1} \]
what is to be demonstrated.

8. **(5 marks)** Assuming \( n = 10^m \) with the integer \( m = \log_{10} n \), derive a closed-form formula for \( T(n) \) by solving the recurrence relation \( T(n) = 10T(n/10) + 5 \) with the base condition \( T(1) = 0 \).

- **(4 marks)** Telescoping the recurrence \( T(10^m) = 10T(10^{m-1}) + 5 \) yields the closed-form formula \( T(10^m) = 10^m T(1) + 5(1 + 10 + 10^2 + \ldots + 10^{m-1}) \).
- **(1 mark)** Therefore, \( T(10^m) = 5\, \frac{10^m - 1}{10 - 1} = \frac{5}{9}(10^m - 1) \), or \( T(n) \approx 0.55n \).

*If the sum \( 1 + x + \ldots + x^{m-1} \) in parentheses is not reduced to \( \frac{x^m - 1}{x - 1} \), but the solution indicates clearly that \( T(n) \) is linear in \( n \), the last step should receive its mark.*

9. **(5 marks)** You need to select \( k = 100 \) most successful students from an unordered array of academic records of \( n = 1{,}000{,}000 \) students all over the world. Each record contains a non-negative GPA (Grade Point Average) score specifying how successful the student is. You know (possibly, from your course COMPSCI.220.SIT) two options for selecting the \( k \) higher-rank students:

- **(a)** to run the quickselect algorithm, with linear processing time \( T_{\text{qselect}}(n) = cn \), \( k \) times in order to select each time the next rank (i.e. the ranks \( n, n-1, \ldots, n-k+1 \)), or
- **(b)** to run the quicksort algorithm, with \( n \log n \) processing time \( T_{\text{qsort}}(n) = cn \log_2 n \), once to sort the array in ascending order of GPAs and fetch the \( k \) higher-rank entries.

Provided that the factors \( c \) are the same for both algorithms and the fetching time is negligibly small compared to the sorting time, which option results in the fastest selection?

- \((3.5 \text{ marks})\) The option (a) is better if \( kT_{\text{qselect}}(n) \leq T_{\text{qsort}}(n) \), that is, \( ckn \leq cn \log_2 n \), or \( k \leq \log_2 n \).
- \((1.5 \text{ marks})\) Because in this case \( k = 100 > \log_2 10^6 \approx 20 \), the option (b) ensures the fastest selection.

10. \((5 \text{ marks})\) Convert the array \([30, 5, 25, 10, 15, 0, 20, 40, 35, 45]\) of size 10 into the maximum heap, delete the maximum key, and restore the heap order for the remaining nine items.
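The tabulated model answer below can be cross-checked with a short simulation (a sketch, not part of the model answer; 0-based indexing is used internally, so the children of index $i$ are $2i+1$ and $2i+2$):

```python
# Build a max-heap bottom-up (Floyd's method), then delete the maximum
# key and restore the heap, mirroring Question 10.
def sift_down(a, i, size):
    while True:
        left, right, largest = 2 * i + 1, 2 * i + 2, i
        if left < size and a[left] > a[largest]:
            largest = left
        if right < size and a[right] > a[largest]:
            largest = right
        if largest == i:
            return
        a[i], a[largest] = a[largest], a[i]
        i = largest

a = [30, 5, 25, 10, 15, 0, 20, 40, 35, 45]
for i in range(len(a) // 2 - 1, -1, -1):   # heapify from position 5 down to 1
    sift_down(a, i, len(a))
print(a)            # [45, 40, 25, 35, 15, 0, 20, 10, 30, 5]
a[0] = a.pop()      # delete max: move the last key to the root
sift_down(a, 0, len(a))
print(a)            # [40, 35, 25, 30, 15, 0, 20, 10, 5]
```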
The model answer in tabular form:

<table>
<thead>
<tr> <th>Array position:</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> </tr>
</thead>
<tbody>
<tr> <td>Key:</td> <td>30</td> <td>5</td> <td>25</td> <td>10</td> <td>15</td> <td>0</td> <td>20</td> <td>40</td> <td>35</td> <td>45</td> </tr>
</tbody>
</table>

Bottom-up heapifying steps (swaps performed while percolating down from position 5 to position 1):

- pos. 5: swap 45 ↔ 15
- pos. 4: swap 40 ↔ 10
- pos. 3: no swap (25 already exceeds its children 0 and 20)
- pos. 2: swap 45 ↔ 5, then 15 ↔ 5
- pos. 1: swap 45 ↔ 30, then 40 ↔ 30, then 35 ↔ 30

<table>
<thead>
<tr> <th>Heap:</th> <th>45</th> <th>40</th> <th>25</th> <th>35</th> <th>15</th> <th>0</th> <th>20</th> <th>10</th> <th>30</th> <th>5</th> </tr>
</thead>
</table>

<table>
<thead>
<tr> <th>Delete max:</th> <th>5</th> <th>40</th> <th>25</th> <th>35</th> <th>15</th> <th>0</th> <th>20</th> <th>10</th> <th>30</th> <th>–</th> </tr>
</thead>
</table>

Restoring the heap (percolating the root key 5 down): swap 40 ↔ 5, then 35 ↔ 5, then 30 ↔ 5.

<table>
<thead>
<tr> <th>Heap:</th> <th>40</th> <th>35</th> <th>25</th> <th>30</th> <th>15</th> <th>0</th> <th>20</th> <th>10</th> <th>5</th> <th>–</th> </tr>
</thead>
</table>

Comments need not be in tabular form, but otherwise they should explain in detail how the initial array is converted into the heap, how the maximum value is deleted, and how the reduced heap is restored. Every erroneous position of key values is penalised by \(-0.25\) marks.

11. \((5 \text{ marks})\) Place the key \( k = 42 \) into the hash table of size 11 using the modulo-based hash address \( i = k \bmod 11 \) and double hashing with backward step \( \Delta = \max\{1, \lfloor k/11 \rfloor \} \), where \( \lfloor z \rfloor \) is the largest integer smaller than or equal to \( z \). Assume that just before you place the key 42, the array is already filled as follows ("–" indicates free locations):

<table>
<thead>
<tr> <th>address</th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> </tr>
</thead>
<tbody>
<tr> <td>key</td> <td>55</td> <td>45</td> <td>–</td> <td>25</td> <td>–</td> <td>–</td> <td>17</td> <td>–</td> <td>–</td> <td>31</td> <td>–</td> </tr>
</tbody>
</table>

• (0.5 marks) For the key \( k = 42 \), the hash address is \( i = 42 \bmod 11 = 9 \) and the backward step is \( \Delta = \max\{1, \lfloor 42/11 \rfloor \} = 3 \).

• (3.5 marks) Successive steps of placing the key into this hash table are as follows (every erroneous step is penalised by \(-0.25\) marks):

- the initial address \( i = 9 \) – collision;
- the backward step of double hashing to the address \( 9 - 3 = 6 \) – collision;
- the next backward step to the address \( 6 - 3 = 3 \) – collision;
- the next backward step to the address \( 3 - 3 = 0 \) – collision;
- the next, wrap-around address \( (0 - 3) \bmod 11 = 8 \) – an empty place is found and the key is inserted.
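These probe steps can be reproduced with a short simulation (again a sketch, not part of the model answer):

```python
# Simulate the double-hashing insertion of key 42 from Question 11.
# None marks a free slot; probing steps backward by delta modulo 11.
table = [55, 45, None, 25, None, None, 17, None, None, 31, None]
k = 42
i = k % 11                   # initial hash address: 9
delta = max(1, k // 11)      # backward step: 3
while table[i] is not None:  # collisions at addresses 9, 6, 3, 0
    i = (i - delta) % 11     # wrap-around is handled by mod 11
table[i] = k                 # the key is placed at address 8
print(i, table)
```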
• (1 mark) The resulting hash table is as follows:

<table>
<thead>
<tr> <th>address</th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> <th>10</th> </tr>
</thead>
<tbody>
<tr> <td>key</td> <td>55</td> <td>45</td> <td>–</td> <td>25</td> <td>–</td> <td>–</td> <td>17</td> <td>–</td> <td>42</td> <td>31</td> <td>–</td> </tr>
</tbody>
</table>

Submission

*The due date is Thursday, 29th March 2007, 8:30 p.m. (ADB time)* [after that, the penalty grows linearly in time from 0% to 50% at 31st March 2007, 8:30 p.m.; no submissions are accepted afterwards]. The markers should evaluate only the solutions in each submission; the course administrator will account for the above penalty.
{"Source-Url": "https://www.cs.auckland.ac.nz/courses/compsci220s1t/assignments/pdf-files/2007-archive/assig220S1T-2007-1model.pdf", "len_cl100k_base": 5419, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 26762, "total-output-tokens": 5574, "length": "2e12", "weborganizer": {"__label__adult": 0.0007786750793457031, "__label__art_design": 0.0009217262268066406, "__label__crime_law": 0.0009212493896484376, "__label__education_jobs": 0.10284423828125, "__label__entertainment": 0.0002543926239013672, "__label__fashion_beauty": 0.0005035400390625, "__label__finance_business": 0.0007925033569335938, "__label__food_dining": 0.0014085769653320312, "__label__games": 0.0034046173095703125, "__label__hardware": 0.0026683807373046875, "__label__health": 0.0010747909545898438, "__label__history": 0.0010099411010742188, "__label__home_hobbies": 0.0005383491516113281, "__label__industrial": 0.0013103485107421875, "__label__literature": 0.0012426376342773438, "__label__politics": 0.0007801055908203125, "__label__religion": 0.0013589859008789062, "__label__science_tech": 0.06573486328125, "__label__social_life": 0.0005998611450195312, "__label__software": 0.00937652587890625, "__label__software_dev": 0.79931640625, "__label__sports_fitness": 0.0009946823120117188, "__label__transportation": 0.0015697479248046875, "__label__travel": 0.0004982948303222656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13544, 0.05347]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13544, 0.52163]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13544, 0.77559]], "google_gemma-3-12b-it_contains_pii": [[0, 2641, false], [2641, 4116, null], [4116, 6880, null], [6880, 9616, null], [9616, 12310, null], [12310, 13544, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2641, true], [2641, 4116, null], [4116, 6880, null], [6880, 9616, null], [9616, 12310, null], [12310, 13544, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13544, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13544, null]], "pdf_page_numbers": [[0, 2641, 1], [2641, 4116, 2], [4116, 6880, 3], [6880, 9616, 4], [9616, 12310, 5], [12310, 13544, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13544, 0.1745]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
c58fcd989e9db45c0bfc6ba76bba6ef7dcf34037
Programming Principles in Python (CSCI 503)

Sets, Comprehensions, Iterators, and Generators

Dr. David Koop (some slides adapted from Dr. Reva Freedman)

Dictionary

- AKA associative array or map
- Collection of key-value pairs
- Keys must be unique
- Values need not be unique
- Syntax:
  - Curly brackets {} delineate start and end
  - Colons separate keys from values, commas separate pairs
  - `d = {'DeKalb': 783, 'Kane': 134, 'Cook': 1274, 'Will': 546}`
- No type constraints
  - `d = {'abc': 25, 12: 'abc', ('Kane', 'IL'): 123.54}`

Collections

- A dictionary is not a sequence
  - Sequences are ordered
  - Conceptually, dictionaries need no order
- A dictionary is a collection
  - Sequences are also collections
  - All collections have length (`len`), membership (`in`), and iteration (loop over values)
- Length for dictionaries counts the number of key-value pairs
  - Pass the dictionary to the `len` function
  - `d = {'abc': 25, 12: 'abc', ('Kane', 'IL'): 123.54}`
  - `len(d)  # 3`

Mutability

- Dictionaries are **mutable**, key-value pairs can be added, removed, updated

```
d = {'DeKalb': 783, 'Kane': 134, 'Cook': 1274, 'Will': 546}
d['Winnebago'] = 1023  # add a new key-value pair
d['Kane'] = 342        # update an existing key-value pair
d.pop('Will')          # remove an existing key-value pair
del d['Winnebago']     # remove an existing key-value pair
d.update({'Winnebago': 1023, 'Kane': 324})
d.update([('Winnebago', 1023), ('Kane', 324)])
d.update(Winnebago=1023, Kane=324)
```

## Dictionary Methods

<table>
<thead>
<tr> <th>Method</th> <th>Meaning</th> <th>Mutates</th> </tr>
</thead>
<tbody>
<tr> <td><code>&lt;dict&gt;.clear()</code></td> <td>Remove all key-value pairs</td> <td>yes</td> </tr>
<tr> <td><code>&lt;dict&gt;.update(other)</code></td> <td>Updates the dictionary with values from <code>other</code></td> <td>yes</td> </tr>
<tr> <td><code>&lt;dict&gt;.pop(k, d=None)</code></td> <td>Removes the pair with key <code>k</code> and returns its value, or the default <code>d</code> if there is no such key</td> <td>yes</td> </tr>
<tr> <td><code>&lt;dict&gt;.get(k, d=None)</code></td> <td>Returns the value for the key <code>k</code>, or the default <code>d</code> if there is no such key</td> <td>no</td> </tr>
<tr> <td><code>&lt;dict&gt;.items()</code></td> <td>Returns an iterable view over all pairs as (key, value) tuples</td> <td>no</td> </tr>
<tr> <td><code>&lt;dict&gt;.keys()</code></td> <td>Returns an iterable view over all keys</td> <td>no</td> </tr>
<tr> <td><code>&lt;dict&gt;.values()</code></td> <td>Returns an iterable view over all values</td> <td>no</td> </tr>
</tbody>
</table>

Iteration

- Even though dictionaries are not sequences, we can still
iterate through them
- Principle: Don't depend on order

```python
for k in d:                    # iterate through keys
    print(k, end=" ")
for k in d.keys():             # iterate through keys
    print('key:', k)
for v in d.values():           # iterate through values
    print('value:', v)
for k, v in d.items():         # iterate through key-value pairs
    print('key:', k, 'value:', v)
```

Assignment 3

- Lists and Dictionaries
- US Senate Stock Trading
- Out Later Today

Sets

Sets

- Sets are dictionaries but without the values
- Same curly braces, no pairs
  - `s = {'DeKalb', 'Kane', 'Cook', 'Will'}`
- Only one instance of a value is in a set—sets **eliminate duplicates**
- Adding multiple instances of the same value to a set doesn't do anything
  - `s = {'DeKalb', 'DeKalb', 'DeKalb', 'Kane', 'Cook', 'Will'}`
  - `# {'Cook', 'DeKalb', 'Kane', 'Will'}`
- Watch out for the empty set
  - `s = {}  # not a set!`
  - `s = set()  # an empty set`

Sets are Mutable Collections

- Sets are **mutable** like dictionaries: we can add, replace, and delete
- Again, no type constraints
  - `s = {12, 'DeKalb', 22.34}`
- Like a dictionary, a set is a **collection** but not a sequence
- Q: What three things can we do for any collection?

Collection Operations on Sets

- `s = {'DeKalb', 'Kane', 'Cook', 'Will'}`
- Length
  - `len(s)  # 4`
- Membership: fast, just like dictionaries
  - `'Kane' in s  # True`
  - `'Winnebago' not in s  # True`
- Iteration
  - `for county in s: print(county)`

Mathematical Set Operations

- `s = {'DeKalb', 'Kane', 'Cook', 'Will'}` and `t = {'DeKalb', 'Winnebago', 'Will'}`
- Union: `s | t  # {'DeKalb', 'Kane', 'Cook', 'Will', 'Winnebago'}`
  - Unlike for dictionaries, union is commutative for sets (`s | t == t | s`)
- Intersection: `s & t  # {'DeKalb', 'Will'}`
- Difference: `s - t  # {'Kane', 'Cook'}`
- Symmetric Difference: `s ^ t  # {'Kane', 'Cook', 'Winnebago'}`
- Object method variants: `s.union(t)`, `s.intersection(t)`, `s.difference(t)`, `s.symmetric_difference(t)`
- Disjoint: `s.isdisjoint(t)  # False`

Mutation Operations

- add: `s.add('Winnebago')`
- discard: `s.discard('Will')`
- remove: `s.remove('Will')  # raises KeyError if not present`
- clear: `s.clear()  # removes all elements`
- Variants of the mathematical set operations (have augmented assignments)
  - update (union): `|=`
  - intersection_update: `&=`
  - difference_update: `-=`
  - symmetric_difference_update: `^=`
- Methods take any iterable; operators require sets

Comprehensions

Comprehension

- Shortcut for loops that **transform** or **filter** collections
- Functional programming features this way of thinking: Pass functions to functions!
- Imperative: a loop with the actual functionality buried inside
- Functional: specify both the functionality and the data as inputs

List Comprehension

```python
output = []
for d in range(5):
    output.append(d ** 2 - 1)
```

- Rewrite as a map: `output = [d ** 2 - 1 for d in range(5)]`
- Can also filter: `output = [d for d in range(5) if d % 2 == 1]`
- Combine map & filter: `output = [d ** 2 - 1 for d in range(5) if d % 2 == 1]`

Comprehensions using other collections

- Comprehensions can use existing collections, too (not just ranges)
- Anything that is iterable can be used in the for construct (like the for loop)
- `names = ['smith', 'Smith', 'John', 'mary', 'jan']`
- `names2 = [item.upper() for item in names]`

Any expression works as output items

- Tuples inside of a comprehension
  - `[(s, s + 2) for s in slist]`
- Dictionaries, too
  - `[{'i': i, 'j': j} for (i, j) in tuple_list]`
- Function calls
  - `names = ['smith', 'Smith', 'John', 'mary', 'jan']`
  - `names2 = [item.upper() for item in names]`

Multi-Level and Nested Comprehensions

- **Flattening** a list of lists
  - `my_list = [[1,2,3],[4,5],[6,7,8,9,10]]`
  - `[v for vlist in my_list for v in vlist]  # [1,2,3,4,5,6,7,8,9,10]`
- Note that the for loops are in order
- Difference between **nested** comprehensions
  - `[[v**2 for v in vlist] for vlist in my_list]  # [[1,4,9],[16,25],[36,49,64,81,100]]`

Comprehensions for other collections

- Dictionaries
  - `{k: v for (k, v) in other_dict.items() if k.startswith('a')}`
- Sometimes used for one-to-one map inverses
  - `{v: k for (k, v) in other_dict.items()}`
  - Be careful that the dictionary is actually one-to-one!
- Sets:
  - `{s[0] for s in names}`

Tuple Comprehension?

- `thing = (x ** 2 for x in numbers if x % 2 != 0)`
- `thing  # not a tuple! <generator object <genexpr> ...`
- Actually a **generator**!
- This **delays** execution until we actually need each result

Iterators

- Key concept: iterators only need a way to get the next element
- To be iterable, an object must be able to produce an iterator
  - Technically, it must implement the `__iter__` method
- An iterator must have two things:
  - a method to get the next item
  - a way to signal that there are no more elements
- In Python, an iterator is an object that must
  - have a defined `__next__` method
  - raise `StopIteration` if no more elements are available

Iteration Methods

- You can call the iteration methods directly, but this is rarely done

```python
my_list = [2, 3, 5, 7, 11]
it = iter(my_list)
first = next(it)
print("First element of list:", first)
```

- `iter` asks for the iterator from the object
- `next` asks for the next element
- Usually just handled by loops, comprehensions, or generators

For Loop and Iteration

- `my_list = [2,3,5,7,11]`

```python
for i in my_list:
    print(i * i)
```

- Behind the scenes, the for construct
  - asks for an iterator: `it = iter(my_list)`
  - calls `next(it)` each time through the loop and assigns the result to `i`
  - handles the `StopIteration` exception by ending the loop
- Loop won't work if we don't have an iterable!
```python
for i in 7892:      # TypeError: 'int' object is not iterable
    print(i * i)
```

Generators

- Special functions that return **lazy** iterables
  - Use less memory
- The change is that functions **yield** instead of **return**

```python
def square(it):
    for i in it:
        yield i * i
```

- If we are iterating through a generator, we hit the first yield and immediately return that first computation
- Generator expressions are just shorthand (remember, no tuple comprehensions)
  - `(i * i for i in [1,2,3,4,5])`

Generators

- If memory is not an issue, a comprehension is probably faster
- ...unless we don't use all the items

```python
def square(it):
    for i in it:
        yield i * i

for j in square([1,2,3,4,5]):
    if j >= 9:
        break
    print(j)
```

- The square function only runs the computation for 1, 2, and 3
- What if this computation is **slow**?

Lazy Evaluation

```python
u = compute_fast_function(s, t)
v = compute_slow_function(s, t)
if s > t and s**2 + t**2 > 100:
    return u / 100
else:
    return v / 100
```

- We don't write code like this! Why?
- Don't compute values until you need to!

Lazy Evaluation

- Rewriting

```python
if s > t and s**2 + t**2 > 100:
    u = compute_fast_function(s, t)
    res = u / 100
else:
    v = compute_slow_function(s, t)
    res = v / 100
```

- The slow function will not be executed unless the condition is true

Lazy Evaluation

- What if this were rewritten as:

```python
def my_function(s, t, u, v):
    if s > t and s**2 + t**2 > 100:
        res = u
    else:
        res = v
    return res

my_function(s, t, compute_fast_function(s, t), compute_slow_function(s, t))
```

- In some languages (often pure functional languages), the computation of `u` and `v` may be **deferred** until we need them
- Python doesn't work that way in this case

Short-Circuit Evaluation

- But Python, and many other languages, do work this way for **boolean** operations

```python
if b != 0 and a / b > c:
    return ratio - c
```

- Never get a divide-by-zero error!
- Compare with:

```python
def check_ratio(val, ratio, cutoff):
    if val != 0 and ratio > cutoff:
        return ratio - cutoff

check_ratio(b, a / b, c)
```

- Here, `a / b` is computed before `check_ratio` is called (but **not used**!)

Short-Circuit Evaluation

- Works from left to right according to the order of operations (`and` before `or`)
- Works for `and` and `or`
- `and`:
  - if any value is False, stop and return False
  - `a, b = 2, 3`
  - `a > 3 and b < 5`
- `or`:
  - if any value is True, stop and return True
  - `a, b, c = 2, 3, 7`
  - `a > 3 or b < 5 or c > 8`

Short-Circuit Evaluation

- Back to our example

```python
if s > t and compute_slow_function(s, t) > 50:
    c = compute_slow_function(s, t)
else:
    c = compute_fast_function(s, t)
```

- `s, t = 10, 12  # compute_slow_function is never run`
- `s, t = 5, 4   # compute_slow_function is run once`
- `s, t = 12, 10  # compute_slow_function is run twice`

Short-Circuit Evaluation

- The walrus operator saves us one computation (note the parentheses: the walrus binds the function's value, not the comparison result)

```python
if s > t and (c := compute_slow_function(s, t)) > 50:
    pass
else:
    c = s ** 2 + t ** 2
```

- `s, t = 10, 12  # compute_slow_function is never run`
- `s, t = 5, 4   # compute_slow_function is run once`
- `s, t = 12, 10  # compute_slow_function is run once`

What about multiple executions?
```python
for s, t in [(12, 10), (4, 5), (5, 4), (12, 10)]:
    if s > t and (c := compute_slow_function(s, t)) > 50:
        pass
    else:
        c = compute_fast_function(s, t)
```

- What's the problem here?
- Executing the function for the same inputs twice!

Memoization

```python
memo_dict = {}

def memoized_slow_function(s, t):
    if (s, t) not in memo_dict:
        memo_dict[(s, t)] = compute_slow_function(s, t)
    return memo_dict[(s, t)]

for s, t in [(12, 10), (4, 5), (5, 4), (12, 10)]:
    if s > t and (c := memoized_slow_function(s, t)) > 50:
        pass
    else:
        c = compute_fast_function(s, t)
```

- The second time we execute for s=12, t=10, we don't need to compute!
- Tradeoff: memory for compute time

Memoization

- Heavily used in functional languages because there is no assignment
- Cache (store) the results of a function call so that, if called again, it returns the result without having to compute
- If the arguments of a function are hashable, it is fairly straightforward to do this for any Python function by caching in a dictionary
- In what contexts might this be a bad idea?

```python
import random

random_cache = {}

def memoize_random_int(a, b):
    if (a, b) not in random_cache:
        random_cache[(a, b)] = random.randint(a, b)
    return random_cache[(a, b)]
```

- When we want fresh results on each call, e.g. random number generators
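The caching pattern above can be written once as a generic decorator (a sketch; the standard library's `functools.lru_cache` provides the same idea with a bounded cache). The `compute_slow_function` body here is just a stand-in for an expensive computation:

```python
import functools

def memoize(func):
    """Cache func's results keyed by its (hashable) positional arguments."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def compute_slow_function(s, t):
    return s ** 2 + t ** 2   # stand-in for a slow computation

print(compute_slow_function(12, 10))  # computed
print(compute_slow_function(12, 10))  # returned from the cache
```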
{"Source-Url": "http://faculty.cs.niu.edu/~dakoop/cs503-2021fa/lectures/lecture08.pdf", "len_cl100k_base": 4763, "olmocr-version": "0.1.53", "pdf-total-pages": 41, "total-fallback-pages": 0, "total-input-tokens": 48095, "total-output-tokens": 6291, "length": "2e12", "weborganizer": {"__label__adult": 0.00028395652770996094, "__label__art_design": 0.00032401084899902344, "__label__crime_law": 0.00027251243591308594, "__label__education_jobs": 0.003797531127929687, "__label__entertainment": 7.110834121704102e-05, "__label__fashion_beauty": 0.0001232624053955078, "__label__finance_business": 0.00016260147094726562, "__label__food_dining": 0.0004582405090332031, "__label__games": 0.0004048347473144531, "__label__hardware": 0.0007300376892089844, "__label__health": 0.0004334449768066406, "__label__history": 0.00018393993377685547, "__label__home_hobbies": 0.0001550912857055664, "__label__industrial": 0.00048613548278808594, "__label__literature": 0.000324249267578125, "__label__politics": 0.0002346038818359375, "__label__religion": 0.00048732757568359375, "__label__science_tech": 0.01442718505859375, "__label__social_life": 0.0001342296600341797, "__label__software": 0.006671905517578125, "__label__software_dev": 0.96875, "__label__sports_fitness": 0.00029087066650390625, "__label__transportation": 0.00037980079650878906, "__label__travel": 0.00020396709442138672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15169, 0.01974]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15169, 0.42159]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15169, 0.62706]], "google_gemma-3-12b-it_contains_pii": [[0, 155, false], [155, 561, null], [561, 1042, null], [1042, 1585, null], [1585, 2496, null], [2496, 3389, null], [3389, 3826, null], [3826, 3909, null], [3909, 3914, null], [3914, 4385, null], [4385, 4669, null], [4669, 4940, null], [4940, 5587, null], [5587, 6008, null], [6008, 6023, null], [6023, 6319, null], [6319, 6615, null], [6615, 6897, null], [6897, 7189, null], [7189, 7576, null], [7576, 7750, null], [7750, 8063, null], [8063, 8283, null], [8283, 8744, null], [8744, 9082, null], [9082, 9537, null], [9537, 9965, null], [9965, 10342, null], [10342, 10601, null], [10601, 10875, null], [10875, 11181, null], [11181, 11621, null], [11621, 12078, null], [12078, 12448, null], [12448, 12828, null], [12828, 13192, null], [13192, 13464, null], [13464, 13771, null], [13771, 14218, null], [14218, 14592, null], [14592, 15169, null]], "google_gemma-3-12b-it_is_public_document": [[0, 155, true], [155, 561, null], [561, 1042, null], [1042, 1585, null], [1585, 2496, null], [2496, 3389, null], [3389, 3826, null], [3826, 3909, null], [3909, 3914, null], [3914, 4385, null], [4385, 4669, null], [4669, 4940, null], [4940, 5587, null], [5587, 6008, null], [6008, 6023, null], [6023, 6319, null], [6319, 6615, null], [6615, 6897, null], [6897, 7189, null], [7189, 7576, null], [7576, 7750, null], [7750, 8063, null], [8063, 8283, null], [8283, 8744, null], [8744, 9082, null], [9082, 9537, null], [9537, 9965, null], [9965, 10342, null], [10342, 10601, null], [10601, 10875, null], [10875, 11181, null], [11181, 11621, null], [11621, 12078, null], [12078, 12448, null], [12448, 12828, null], [12828, 13192, null], [13192, 13464, null], [13464, 13771, null], [13771, 14218, null], [14218, 14592, null], [14592, 15169, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 
15169, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15169, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15169, null]], "pdf_page_numbers": [[0, 155, 1], [155, 561, 2], [561, 1042, 3], [1042, 1585, 4], [1585, 2496, 5], [2496, 3389, 6], [3389, 3826, 7], [3826, 3909, 8], [3909, 3914, 9], [3914, 4385, 10], [4385, 4669, 11], [4669, 4940, 12], [4940, 5587, 13], [5587, 6008, 14], [6008, 6023, 15], [6023, 6319, 16], [6319, 6615, 17], [6615, 6897, 18], [6897, 7189, 19], [7189, 7576, 20], [7576, 7750, 21], [7750, 8063, 22], [8063, 8283, 23], [8283, 8744, 24], [8744, 9082, 25], [9082, 9537, 26], [9537, 9965, 27], [9965, 10342, 28], [10342, 10601, 29], [10601, 10875, 30], [10875, 11181, 31], [11181, 11621, 32], [11621, 12078, 33], [12078, 12448, 34], [12448, 12828, 35], [12828, 13192, 36], [13192, 13464, 37], [13464, 13771, 38], [13771, 14218, 39], [14218, 14592, 40], [14592, 15169, 41]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15169, 0.04775]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
437fee2a53aeb53e6782da0678ed48f8d5c03570
An object-oriented programming of an explicit dynamics code: application to impact simulation

Olivier Pantalé*

LGP CMAO, ENIT, 47 Av d'Azeriex, BP 1629, 65016 Tarbes Cedex, France

Abstract

During the last fifty years, the development of better numerical methods and more powerful computers has been a major enterprise for the scientific community. Recent advances in computational software have led to the possibility of solving more physical and complex problems (coupled problems, nonlinearities, high strain and high strain rate problems, etc.). The development of object-oriented programming leads to better-structured codes for the finite element method and facilitates their development, maintainability and expandability. This paper presents an implementation in C++ of an explicit finite element program dedicated to the simulation of impacts. We first present a brief overview of the kinematics and the conservation and constitutive laws related to large deformation inelasticity. Then we present the design and the numerical implementation of some aspects of the program, with an emphasis on the object-oriented programming adopted. Finally, the efficiency and accuracy of the program are investigated through some benchmark tests.

Keywords: Nonlinear finite-element; Explicit integration; Large deformations; Plasticity; Impact; C++; Object-oriented programming

1. Introduction

After a long period of intensive developments, the finite element method has become a widely used tool for researchers and engineers. An accurate analysis of the large deformation inelastic problems occurring in impact simulations is extremely important as a consequence of the high amount of plastic flow. This research field has been widely explored, and a number of computational algorithms for the integration of constitutive relations have been developed for the analysis of large deformation problems [1,2]. In this paper an object-oriented (OO) implementation of an explicit finite element program called DynELA is presented. This FEM program is written in C++ [3]. The development of object-oriented programming (OOP) leads to better-structured codes for the finite element method and facilitates their development and maintainability [4,5]. A significant advantage of OOP concerns the modeling of complex physical systems, such as deformation processing, where the overall complex problem is partitioned into individual subproblems based on physical, mathematical or geometric reasoning. The finite element concept therefore leads naturally to an object representation.

2. Governing equations, discretization and numerical integration

The conservation laws and the constitutive equations for path-dependent materials are formulated in an updated Lagrangian finite element method in large deformations. Both the geometrical and material nonlinearities are included in this setting. The finite element method (FEM) is used for the discretization of the conservation equations, and an explicit integration scheme is then adopted for the time discretization of those equations. In the next paragraphs, we summarize some basic results concerning nonlinear mechanics relevant to our subsequent developments and refer, for example, to Hughes [6] or Simo and Hughes [7] for details concerning the finite element method and the integration of constitutive laws.

2.1.
Basic kinematics and constitutive equations

In a Lagrangian description, the mass, momentum and energy equations which govern the continuum are given by

\[ \dot{\rho} + \rho \,\operatorname{div} \vec{v} = 0 \quad (1) \]
\[ \rho \dot{\vec{v}} = \rho \vec{f} + \operatorname{div} \sigma \quad (2) \]
\[ \rho \dot{e} = \sigma : \mathbf{D} - \operatorname{div} \vec{q} + \rho r \quad (3) \]

where \( \rho \) is the mass density, \( \dot{(\cdot)} \) the time derivative of \( (\cdot) \), \( \vec{v} \) the material velocity, \( \vec{f} \) the body force vector, \( \sigma \) the Cauchy stress tensor, \( \mathbf{D} \) the spatial rate of deformation, \( e \) the specific internal energy, \( r \) the body heat generation rate and \( \vec{q} \) is the heat flux vector. The symbol ':' denotes the contraction of a pair of repeated indices which appear in the same order, so \( \mathbf{A} : \mathbf{B} = A_{ij} B_{ij} \).

The matricial forms of Eqs. (1)–(3) are obtained, according to the finite element method, by subdividing the domain of interest \( \Omega \) into a finite number of elements, leading to the following forms of the conservation equations:

\[ \mathbf{M}^\rho \dot{\rho} + \mathbf{K}^\rho \rho = 0 \quad (4) \]
\[ \mathbf{M}^{v} \dot{\vec{v}} + \vec{F}^{\,\text{int}} = \vec{F}^{\,\text{ext}} \quad (5) \]
\[ \mathbf{M}^e \dot{e} + \vec{g} = \vec{r} \quad (6) \]

If we use the same form \( \varphi \) for the shape and test functions (as usually done for a serendipity element), one obtains the following expressions for the elementary matrices of Eqs. (4)–(6):

\[ \mathbf{M}^\rho = \int_{\Omega} \varphi_i \varphi_j \, d\Omega_x, \qquad \mathbf{K}^\rho = \int_{\Omega} \varphi_i \nabla \varphi_j \, d\Omega_x \quad (7) \]
\[ \mathbf{M}^{v} = \int_{\Omega} \rho\, \varphi_i \varphi_j \, d\Omega_x, \qquad \vec{F}^{\,\text{int}} = \int_{\Omega} \nabla \varphi_i \, \sigma \, d\Omega_x \quad (8) \]
\[ \mathbf{M}^e = \int_{\Omega} \varphi_i \varphi_j \, d\Omega_x, \qquad \vec{g} = \int_{\Omega} \nabla \varphi_i \cdot \vec{q} \, d\Omega_x \quad (9) \]
\[ \vec{r} = \int_{\Omega} \varphi_i \left( \sigma : \mathbf{D} + \rho r \right) d\Omega_x - \int_{\Gamma_s} \varphi_i \, \vec{q} \cdot \vec{n} \, d\Gamma_x \]

where \( \nabla \) is the gradient operator, \( \Gamma_s \) is the surface of the domain \( \Omega \), the \( \mathbf{M}^{(\cdot)} \) are consistent mass matrices, \( \vec{F}^{\,\text{ext}} \) is the external force vector and \( \vec{F}^{\,\text{int}} \) is the internal force vector. As usually done, we associate the explicit integration scheme with the use of lumped mass matrices in the calculations; therefore the rate quantities \( \dot{(\cdot)} \) are directly obtained from Eqs. (4)–(6) without the need of any matrix inversion algorithm.

### 2.2. Constitutive law

This finite element code is dedicated to large strain simulations, therefore we must ensure the objectivity of all the terms appearing in the constitutive law. The symmetric part of the spatial velocity gradient \( \mathbf{L} \), denoted by \( \mathbf{D} \), is objective, while its skew-symmetric part \( \mathbf{W} \), called the spin tensor, is not objective. The incremental formulation of the constitutive law is given by \( \dot{\sigma} = f(\mathbf{D}, \ldots) \). While the Cauchy stress tensor \( \sigma \) is objective, its material time derivative \( \dot{\sigma} \) is nonobjective, so one must introduce an objective rate notion, i.e. a modified time derivative form of the Cauchy stress tensor \( \sigma \), such as the Jaumann–Zaremba or the Green–Naghdi derivatives.
One of the solutions to this problem consists of defining a new Cauchy stress rate in a rotating frame defined by a rotation tensor \( \mathbf{w} \) with \( \dot{\mathbf{w}} = \omega \mathbf{w} \). Denoting any quantity \( (\cdot) \) expressed in this rotating frame as a corotational one, written \( (\cdot)^c \), one obtains:

\[ \sigma^{c} = \mathbf{w}^T \sigma \,\mathbf{w} \quad \text{and} \quad \dot{\sigma}^{c} = \mathbf{w}^T \overset{\nabla}{\sigma}\, \mathbf{w} \quad (10) \]

where \( \overset{\nabla}{\sigma} \) denotes the objective rate of \( \sigma \). In fact, the choice of \( \omega = \mathbf{W} \) with the initial condition \( \mathbf{w}(t_0) = \mathbf{I} \) corresponds to the Jaumann rate. The major consequence of corotational rates is that, if we choose the local axis system as the corotational one, the integration of the constitutive law can be performed as in small deformations. According to the decomposition of the Cauchy stress tensor into a deviatoric term \( \mathbf{s} \) and a hydrostatic term \( p \), one obtains

\[ \dot{\mathbf{s}}^{c} = \mathbf{C}^{c} : \mathbf{D}^{c} \quad \text{and} \quad \dot{p} = K \operatorname{tr}[\mathbf{D}^{c}] \quad (11) \]

where \( K \) is the bulk modulus of the material and \( \mathbf{C} \) is the fourth-order constitutive tensor. In this application, we use a \( J_2 \) plasticity model with a nonlinear isotropic/kinematic hardening law. The associated von Mises yield criterion allows the use of the radial-return mapping strategy briefly summarized hereafter.

#### 2.2.1. Elastic prediction

Due to objectivity, and therefore the use of a corotational system, all the terms of the constitutive equation are corotational ones, so we drop the superscript \( c \) in the following equations for simplicity. The elastic trial stresses are calculated using Hooke's law, according to Eq. (11), by the following equation

\[ p_{n+1}^{\text{trial}} = p_n + K \operatorname{tr}[\Delta \mathbf{e}] \quad \text{and} \quad \mathbf{s}_{n+1}^{\text{trial}} = \mathbf{s}_n + 2G \Delta \mathbf{e} \quad (12) \]

where \( \Delta \mathbf{e} \) is the deviatoric part of the incremental strain tensor between increment \( n \) and increment \( n + 1 \) and \( G \) is the shear modulus. Hence, the shifted deviatoric part of the predicted elastic stress is given by

\[ \xi_{n+1}^{\text{trial}} = \mathbf{s}_{n+1}^{\text{trial}} - \alpha_n \quad (13) \]

where \( \alpha_n \) is the back-stress tensor. The von Mises criterion \( f \) is then defined by:

\[ f_{n+1}^{\text{trial}} = \left\| \xi_{n+1}^{\text{trial}} \right\| - \sqrt{\tfrac{2}{3}}\, \sigma_e \quad (14) \]

where \( \sigma_e \) is the yield stress in the von Mises sense. Hence, if \( f_{n+1}^{\text{trial}} \leq 0 \), the predicted solution is physically admissible, and the whole increment is assumed to be elastic.

#### 2.2.2. Plastic correction

If the predicted elastic stress does not correspond to a physically admissible state, a plastic correction has to be performed. The previous trial stresses serve as the initial condition for the so-called return-mapping algorithm, summarized by the following equation:

\[ \mathbf{s}_{n+1} = \mathbf{s}_{n+1}^{\text{trial}} - 2G \gamma\, \mathbf{n} \quad (15) \]

where \( \mathbf{n} = \xi_{n+1}^{\text{trial}} / \left\| \xi_{n+1}^{\text{trial}} \right\| \) is the unit normal to the von Mises yield surface, and \( \gamma \) is the consistency parameter, defined as the solution of the following nonlinear equation in the single scalar parameter \( \gamma \):

\[ f(\gamma) = \left\| \xi_{n+1}^{\text{trial}} \right\| - 2G \gamma - \sqrt{\tfrac{2}{3}}\, \sigma_e(\gamma) = 0 \quad (16) \]

where the hardening laws enter through the dependence of \( \sigma_e \) (and of the back-stress \( \alpha \)) on \( \gamma \). Eq. (16) is effectively solved by a local Newton iterative procedure [7].
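As an illustration of this elastic-prediction/plastic-correction step, here is a minimal sketch (written in Python with NumPy for brevity, although DynELA itself is implemented in C++). It assumes linear isotropic hardening, \( \sigma_e(\gamma) = \sigma_0 + \sqrt{2/3}\, H \gamma \), and no kinematic hardening, so Eq. (16) becomes linear in \( \gamma \) and the Newton procedure converges in a single step; the function and variable names are illustrative only:

```python
import numpy as np

def radial_return(s_n, de, G, sigma_0, H):
    """One radial-return update for J2 plasticity (a sketch, not the
    DynELA code). s_n: deviatoric stress at step n (3x3 array);
    de: deviatoric strain increment (3x3 array)."""
    s_trial = s_n + 2.0 * G * de                         # elastic prediction, Eq. (12)
    norm_trial = np.sqrt(np.tensordot(s_trial, s_trial))
    f_trial = norm_trial - np.sqrt(2.0 / 3.0) * sigma_0  # yield check, Eq. (14)
    if f_trial <= 0.0:
        return s_trial                                   # admissible: purely elastic step
    # Plastic correction: with linear hardening, Eq. (16) reads
    # f_trial - (2G + 2H/3) * gamma = 0, solved directly for gamma.
    gamma = f_trial / (2.0 * G + 2.0 * H / 3.0)
    n = s_trial / norm_trial                             # unit normal, Eq. (15)
    return s_trial - 2.0 * G * gamma * n
```

For a nonlinear hardening law, the closed-form solve for \( \gamma \) is simply replaced by the local Newton loop on Eq. (16).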
Since \( f(\gamma) \) is a convex function, convergence is guaranteed.

2.3. Time integration

All the above equations are integrated by an explicit scheme associated with lumped mass matrices. The flowchart for the explicit time integration of the Lagrangian mesh is given in Box 1.

3. Object-oriented design

Object-oriented calculations have received extensive attention in computational mathematics, and several engineering applications have already been published in computational journals. The benefits of OOP for the implementation of FEM programs have been explored by Miller [8] and Mackie [9], and more recently applied to a Lagrangian analysis of thermo-plasticity coupled with ductile damage at finite strains by Zabaras and Srikanth [10]. The language most commonly used in OOP is C++, but some investigations have been made into other languages such as Java [11], with an extensive performance analysis. In this section only some aspects of the architecture are presented. Section 3.2 describes the basic classes and linear algebra. Some more specific aspects of the numerical implementation are presented in Section 3.3.

3.1. Overview of object-oriented programming

Traditionally, numerical software is based on the use of a procedural programming language such as C or Fortran, in which the finite element algorithm is broken down into procedures that manipulate data. When developing a large application, the procedures are wrapped up in libraries which are used as modules and sometimes linked with external libraries such as the well-known Blas [12] for linear algebra. Over the last few years, the use of object-oriented programming techniques has increased, leading to highly modularized code structures through user-defined classes, which can be seen as the association of data and methods. In OOP, an object is in fact an instance of a class. This approach is very attractive because of its well-defined mechanisms for modular design and re-use of code. Briefly speaking, OOP encourages computer implementations of mathematical abstractions, such as the work done concerning partial differential equations with Diffpack [13]. Efficient OOP results from the association of low-level numerical computations encapsulated in high-level abstractions such as inheritance, member and operator overloading, abstraction and polymorphism, or templates [14]. All those characteristics, well known to programmers, are briefly presented hereafter, with their applications to the development of a numerical FEM program.

Inheritance is a mechanism which allows the exploitation of commonality between objects. For example, assuming that we implement an Element base class containing methods such as the integration of the conservation laws over the element, one can derive this class to create two-dimensional, three-dimensional or axisymmetric elements. Those inherited classes, for example the two-dimensional element class, may be derived once more to create rectangular or triangular planar elements defined with various numbers of nodes, as shown in Fig. 1. Therefore, only the highly specialized code, such as the shape function calculations, is implemented in those derived classes.

Member and operator overloading allows an easy writing of mathematical operations such as a matrix product, using a generic syntax of the form A = B * C, where A, B and C are three matrices of compatible sizes.
The overloaded operators * and = may use efficient matrix calculation and assignment algorithms associated with a set of basic checks, such as the size compatibility of the operands. The same kind of operation is possible when the parameters are instances of different classes, such as the definition of the product of a matrix and a vector.

Abstraction is the ability to define abstract objects using virtual member methods. Abstract classes allow the writing of generic algorithms and the easy extension of the existing code. The resulting class is said to have a polymorphic behavior. An example of an abstract class is the class Element defined in Fig. 1. In this case, we never create an instance of the class Element, but only instances of derived classes, depending on the type of element needed.

Template classes are generic ones, for example generic lists of any kind of object (nodes, elements, integration points, etc.). Templates are the fundamental enabling technology that supports the construction of maintainable, highly abstract, high-performance scientific codes in C++ [15].

The use of OOP, and here of the C++ language, has been criticized because its computational efficiency is commonly believed to be much lower than that of comparable Fortran codes. Recent studies on the relative efficiency of C++ numerical computations [15] have shown that there is a performance increase with optimized codes, but libraries must be implemented carefully so that the CPU-intensive numerics take place in functions that are easily optimized by C compilers. The creation of user-defined class libraries with overloaded operators and the encapsulation of low-level operations on the basic data types allow optimizations to be introduced incrementally through the development cycle. For example, in the linear algebra library, we use low-level C and Fortran routines coming from the Lapack and Blas [12] libraries.

3.2. Basic classes and linear algebra

In a FEM application, the most logical point of departure is the creation of a basic and mathematical class library. In this project, we have made the choice of developing our own basic classes, such as the template class List (used to manage a list of any kind of object: Node, Element, etc.) and linear algebra ones such as the Vector, Matrix and Tensor classes. Other projects described in the literature are usually based on free or commercial C++ libraries, such as the work done by Zabaras [16] with Diffpack. This choice was made because we need linear algebra classes optimized for an explicit FEM program and because we want to distribute this work according to the GNU General Public License. Also, we did not want to risk working with a free library that might stop being freely distributed from one day to the next and become a commercial package, like the Diffpack library. To illustrate one of the major advantages of the OOP, if we consider that the objects \( \dot{\mathbf{s}}^c \) and \( \mathbf{D}^c \) are instances of the Tensor2 class, while the object \( \mathbf{C}^c \) is an instance of the Tensor4 class, this allows us to implement both terms of Eq.
#### 3.2. Basic classes and linear algebra

In a FEM application, the most logical point of departure is the creation of a basic and mathematical class library. In this project, we have chosen to develop our own basic classes, such as the template class List (used to manage a list of any kind of object: Node, Element, etc.) and linear algebra classes such as Vector, Matrix and Tensor. Other projects described in the literature are usually based on free or commercial C++ libraries, such as the work done by Zabaras [16] with Diffpack. This choice was made because we need linear algebra classes optimized for an explicit FEM program and because we want to distribute this work under the GNU General Public License; we also did not want to risk dependinging on a free library that might cease to be freely distributed from one day to the next and become a commercial package, as happened with the Diffpack library. To illustrate one of the major advantages of OOP, consider that the objects \( \dot{s} \) and \( D \) are instances of the Tensor2 class, while the object \( C \) is an instance of the Tensor4 class. This allows us to implement both terms of Eq. (11) in a simple, compact and elegant manner:

```cpp
Tensor2 dS, D;       // two instances of the Tensor2 class
Tensor4 C;           // an instance of the Tensor4 class
double K, dp;        // two scalars
...                  // some various operations
dp = K * D.trace();  // first equation
dS = C * D;          // second equation
```

Box 2 presents the minimum parts of the two classes Tensor2 and Tensor4 needed to implement those C++ code lines.

**Box 2** Headers of the Tensor2 and Tensor4 classes

```cpp
class Tensor2
{
  ...
public:
  Tensor2();   // constructor
  ~Tensor2();  // destructor
  ...
  Tensor2 operator=(const Tensor2& t);  // assignment
  friend Tensor2 operator+(const double& value, const Tensor2& tensor);  // scalar-tensor addition
  double trace();  // return the trace of the tensor
};

class Tensor4
{
  ...
public:
  Tensor4();   // constructor
  ~Tensor4();  // destructor
  ...
  Tensor2 operator*(const Tensor2& t) const;  // multiplication by a Tensor2
};
```

To implement those operations, we first need the default constructors and destructors of both classes Tensor2 and Tensor4; these methods take no arguments here. For the first equation, we need the implementation of the method trace(), used to compute the trace of a Tensor2. Mixed-type operators, such as the overload of the operator \( + \) between a scalar value and a Tensor2 object, are declared as friend methods because they need access to private members of the Tensor2 class. An overload of the operator \( \ast \) between the classes Tensor4 and Tensor2 has been implemented for the second equation.

#### 3.3. Finite element classes

As can be found in many other papers dealing with the implementation of the FEM [8,9,16], some basic classes have been introduced in this work. In this paragraph, an overview of the FEM classes is presented; then we focus on the implementation of the nonlinear material behavior used in this FEM code to illustrate the use of OOP in FEM.

##### 3.3.1. Overview of the FEM classes

The FEM application, represented by the class Domain, is mainly composed of the modules represented by the abstract classes Node, Element, Material, Interface and ioDomain, as shown in Fig. 2. The class Node contains nodal data, such as the node number or the nodal coordinates. Two instances of the NodalField class, containing all nodal quantities, are linked to each node of the structure: the first one is relative to time \( t \), the second one to time \( t + \Delta t \). At the end of the increment, we just have to swap the references to those two objects to transfer all quantities from one step to the next (see step 2 of the explicit time integration flowchart in Box 1); a sketch of this swapping mechanism is given below. Boundary conditions (BC), handled through the BoundaryCondition class, may affect the behavior of each node in particular subtreatments such as contact conditions, external forces or thermal flux treatment. A list of BCs is attached to each node, which gives the ability to change the BCs during the main solve loop; for example, a call to the Node::updatePosition() method changes the coordinates according to the current BCs. The class Element is a virtual class that contains the definition of each element of the structure (see Fig. 1). This class serves as a base class for a number of other classes, depending on the type of analysis and the nature of the elements needed. The difference between the derived element classes concerns, for example, the shape functions. Of course, it is possible to mix various types of elements in the same computation; the only restriction concerns the first level of inheritance: one cannot have an axisymmetric element and a plane strain element in the same model.
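As announced above, here is a hedged C++ sketch of the double-buffered NodalField arrangement; the field names and the Node layout are illustrative assumptions rather than the actual DynELA declarations:

```cpp
// Hypothetical sketch of the two-field arrangement described above.
struct NodalField {
  double displacement[3];
  double velocity[3];
  double acceleration[3];
  // ... other nodal quantities
};

class Node {
  NodalField* current;  // quantities at time t
  NodalField* next;     // quantities at time t + dt

public:
  Node() : current(new NodalField()), next(new NodalField()) {}
  ~Node() { delete current; delete next; }

  // Step 2 of the explicit integration flowchart: instead of copying
  // every nodal quantity, swap the two references at the end of the
  // increment.
  void swapFields() {
    NodalField* tmp = current;
    current = next;
    next = tmp;
  }
};
```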
Each element of the structure contains a given number of nodes and an arbitrary number of integration points (see the IntegrationPoint class), and refers to an associated constitutive law through the material definition. The Interface class contains all definitions concerning the contact interfaces: the contact law through the ContactLaw class and the contact definition through the Side class. It is not presented further here. The class ioDomain serves as an interface between the Domain and input/output files. It is the base class for many derived classes which implement specific interfaces for various file formats; the most important one is the class InputData, used to read the model. The class Material is used for the definition of the materials used in the various models. This class is a generalization of all possible kinds of material definition; some details concerning its implementation are given hereafter.

##### 3.3.2. Implementation of the nonlinear material behavior

The isotropic inelastic material behavior is defined via the evolution of the equivalent plastic strain \( \varepsilon_p \) and the evolution of a number of state variables. A simplified UML diagram of the Material class is presented in Fig. 3. From the latter, we can see that the class Material is virtual and serves as a base class for other material classes such as Mat_Elastic, Mat_Elastoplastic or Mat_El_Plas_Tabular. The first one is used for the definition of an elastic material, the second one for an elastoplastic material with a hardening law of the form \( \sigma_v = A + B\,\varepsilon_p^{\,n} \), where \( A \), \( B \) and \( n \) are material constants, and the last one allows us to define an arbitrary form of the strain hardening function using a tabulated function \( \sigma_v = f(\varepsilon_p) \). The various constitutive models are represented as virtual functions in classes derived from the Material base class. Some attributes and methods are implemented in the base class Material, while others are implemented in the derived classes. The former concern methods and attributes that are common to every kind of material: for example, the Young's modulus \( E \), the density \( \rho \) and the Poisson ratio \( \nu \) are attributes shared by every constitutive law. The \( A \), \( B \) and \( n \) material constants are attributes dedicated to the Mat_Elastoplastic class, and the definition of the nonlinear hardening law through a DiscreteFunction class is dedicated to the Mat_El_Plas_Tabular class. To define a new material law, one has to derive a new class from the Material class.
Box 3 presents a summary of the basic functionalities of the Material and Mat_El_Plas_Tabular classes.

**Box 3** Headers of the Material and Mat_El_Plas_Tabular classes

```cpp
class Material
{
  friend class List<Material*>;

protected:
  Tensor4 C;                                  // fourth-order elasticity tensor
  double young, poisson, density;             // elastic constants
  double heat, dilatation, T0, conductivity;  // thermal constants
  String name;

public:
  Material();
  Material(const Material& material);
  virtual ~Material();
  virtual String getLawName();
  virtual double getYieldStress();
  virtual double getDerYieldStress();
  void computeHookeParameters();
  friend ostream& operator<<(ostream& stream, Material& material);
};

class Mat_El_Plas_Tabular : public Material
{
protected:
  DiscreteFunction* p_function;  // tabulated hardening law

public:
  Mat_El_Plas_Tabular();
  Mat_El_Plas_Tabular(const Mat_El_Plas_Tabular& material);
  virtual ~Mat_El_Plas_Tabular();
  virtual String getLawName();
  virtual double getYieldStress();
  virtual double getDerYieldStress();
  void setFunction(DiscreteFunction* func) { p_function = func; }
  DiscreteFunction* getFunction() { return p_function; }
  friend ostream& operator<<(ostream& os, Mat_El_Plas_Tabular& material);
};
```

Fig. 3. UML diagram of the material class (simplified representation).

The main effort required to implement a new constitutive model is to define the getYieldStress() and getDerYieldStress() methods, which must return, respectively, the value of the hardening parameter \( \sigma_v = f(\varepsilon_p, \ldots) \) and the slope of the hardening law \( h = \partial \sigma_v(\varepsilon_p, \ldots) / \partial \varepsilon_p \). A sketch of such a derived class is given below.
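To make the extension mechanism concrete, here is a hedged sketch of a hypothetical derived class implementing the linear hardening law \( \sigma_v = \sigma_v^0 + h\,\varepsilon_p \), building on the headers of Box 3. The class name Mat_LinearHardening and the stored strain member eps_p are illustrative assumptions, not part of DynELA:

```cpp
// Hypothetical material: linear isotropic hardening sigma_v = sigma0 + h * eps_p.
class Mat_LinearHardening : public Material
{
protected:
  double sigma0;  // initial flow stress
  double h;       // linear hardening modulus
  double eps_p;   // current equivalent plastic strain (assumed stored here)

public:
  Mat_LinearHardening(double s0, double hard)
      : sigma0(s0), h(hard), eps_p(0.0) {}

  virtual String getLawName() { return String("Linear hardening"); }

  // Value of the hardening parameter sigma_v = f(eps_p).
  virtual double getYieldStress() { return sigma0 + h * eps_p; }

  // Slope of the hardening law d(sigma_v)/d(eps_p).
  virtual double getDerYieldStress() { return h; }
};
```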
#### 3.4. Pre-processing language

In the FEM code DynELA, we developed a specific high-level language using the Lex and Yacc [17] utilities. This language has a grammar presenting analogies with C++. The most important points are summarized hereafter:

- a fully free-form language supporting classic features such as comments and file inclusion through #include commands;
- support for various computations between reals or vectors: arithmetic and trigonometric operations, increments, or variable comparisons;
- tests (if, then and else) and loops (for and while);
- i/o functionalities such as cout, fopen, fclose or <;
- many other useful features (we refer to the DynELA user manual [18]).

As an example, we present hereafter a semi-analytic declaration of the nonlinear hardening law used in the necking of a circular bar example (see Eq. (17) and the related parameters in Table 1). This nonlinear hardening law is well described using the Mat_El_Plas_Tabular class associated with a discrete function. The definition of this hardening law using the DynELA specific language is given by:

```cpp
#include <DynELA.h>

...

MATERIAL: steel {
    YOUNG: 2.069E+11;   // Young's modulus
    NU: 2.9E-01;        // Poisson ratio
    DENSITY: 7.8E+03;   // density
    ELASTOPLASTIC TABULAR {
        DISCRETE FUNCTION hard_funct;
    }
};
```

In this example, we first begin with the definition of the material constants of the hardening law equation (this part of the listing, elided above, precedes the MATERIAL block). By default, if no type specification is given, the pre-processor assumes that a variable is a scalar; vectors, strings and other types are also available. Then we use a classic FOR loop over the range [0:1] to calculate and create each point of the hardening law via the definition of a discrete function, named hard_funct here. This FOR loop has an increasing increment size because more points are needed near the origin of such a function. Finally, we define a new material, called steel here, and associate the previously defined discrete function hard_funct with it. This method allows us to modify the definition of the hardening law in a simple way, by changing the variable values at the top of the program. This can also be done externally from another program, which leads to the parametrized numerical models used in the identification of constitutive law parameters.

### 4. Numerical validations

The objective of this section is to assess the numerical implementation made in DynELA concerning the \( J_2 \) flow theory presented in Section 2. For this validation we consider two representative examples related to well-documented numerical experiments available in the literature: the necking of a circular bar subjected to tension and the simulation of a Taylor impact test. All computations were performed on an AMD K6-3 400 MHz under Linux.

#### 4.1. Necking of a circular bar

This experimentally well-documented example [7,19] is concerned with the necking of a circular bar with a radius of 6.413 mm and a length of 52.34 mm, subjected to uniaxial tension resulting from an axial elongation of 14 mm. This example serves here as a testbed for the plastic algorithm developed in DynELA. The material considered is a special steel (A533, Grade B, Class 1) with a general nonlinear hardening law of the form:

\[ \sigma_v = \sigma_v^0 + (\sigma_v^{\infty} - \sigma_v^0)\left(1 - \exp(-\delta\,\varepsilon_p)\right) + h\,\varepsilon_p \tag{17} \]

The material properties given by Norris et al. [19] are reported in Table 1. This problem is nonlinear, both through the constitutive equation and through the large deformations and rotations that occur at necking. Two different meshes, consisting of 50 and 400 elements, are considered in order to assess the influence of the discretization. Only half of the axisymmetric geometry of the bar has been meshed in the model. This example is a quasi-static one, but because we use an explicit algorithm, we introduced a prescribed velocity of 7 m/s at the top of the workpiece to control the displacement; this rate corresponds to the one used in the numerical model presented by Norris et al. Fig. 4 reports the final meshes obtained for the full elongation: the deformed solutions obtained with the coarse and the finer meshes are in good agreement. Fig. 5 shows the ratio of the current to initial radius at the necking section versus the axial displacement, comparing our numerical results with those of Simo and Hughes [7] and with the experimental results of Norris et al. [19]. The results are in good agreement with the experiments and with previously reported computations.

Fig. 4. Necking of a circular bar: final meshes obtained for 50 (left) and 400 (right) elements.

Fig. 5. Necking of a circular bar: ratio of the current to initial radius at the necking section versus axial displacement.
#### 4.2. Impact of a copper rod

The impact of a copper rod on a rigid wall, known as the Taylor impact problem, is a standard benchmark for dynamics computer codes. It simulates the high-velocity impact of a copper rod on a rigid wall and has been used by many authors, such as Liu et al. [20]. The initial dimensions of the rod are \( r_0 = 3.2 \) mm and \( l_0 = 32.4 \) mm. The impact is assumed to be frictionless and the impact velocity is set to 227 m/s. The final configuration is obtained after 80 μs. The constitutive law is elastoplastic with linear isotropic hardening; the material properties given in Ref. [20], corresponding to an OFHC copper, are reported in Table 2.

Table 2: Material properties of the OFHC copper rod for the Taylor test

| Property | Symbol | Value |
| --- | --- | --- |
| Young's modulus | \( E \) | 117.0 GPa |
| Poisson ratio | \( \nu \) | 0.35 |
| Density | \( \rho \) | 8930 kg/m³ |
| Initial flow stress | \( \sigma_v^0 \) | 400.0 MPa |
| Linear hardening | \( h \) | 100.0 MPa |

Here again, only half of the axisymmetric geometry of the rod has been meshed in the model. Two different meshes were used, the first one with 250 elements (50 × 5) and the second one with 2000 elements (20 × 100). Fig. 6 shows the equivalent plastic strain contour plot for both meshes. Comparison between the left- and right-hand sides of this figure shows a good level of agreement with previously reported results, both for the final geometry and for the equivalent plastic strain contour plot. Table 3 reports a comparison of the final length \( l_f \), the footprint radius \( r_f \) and the maximum equivalent plastic strain \( \varepsilon_p^{\max} \) obtained with our finite element code against other numerical results, such as those obtained by Liu et al. [20] and the same simulation performed with the Abaqus Explicit program [21]. The differences between the solutions are reasonable.

### 5. Conclusions

An object-oriented simulator was developed for the analysis of large inelastic deformations and impact processes. Several benchmark test problems were examined to demonstrate the accuracy of the developed algorithms. The benefits of using an OOP approach in comparison with traditional programming language approaches were presented in this paper. The use of OOP provides the ability to better represent, through the definition of classes and inheritance, the physical, mathematical and geometric structures of the kinematic and constitutive aspects of a FEM analysis. The main purpose of this FEM development is to serve as a testbed for new and more efficient algorithms related to the various parts of a FEM program, such as new contact algorithms (contact is included in the code but has not been presented here) or more efficient constitutive integration schemes. One of the main advantages of the present FEM code is that the adopted class hierarchies allow the implementation of additional features, such as new constitutive laws, new elements or new contact laws, by deriving the new feature from an existing one through inheritance. One of the future uses of this simulator concerns inverse problems, where one wants to identify material coefficients.
This FEM code is under continuous development, and new features are being implemented, such as a new constitutive algorithm including damage effects and the use of various multi-grid resolution algorithms.

References