7ddc63d6a214cf470b42e9d943bbc69789e0e4b1
SPECIFICATION VALIDATION RESULTS

Workpackage 1 – Task 1.5

This document presents the end-users’ validation of the functional specifications for the TRENDS system.

<table> <thead> <tr> <th>Acronym</th> <th>TRENDS</th> </tr> </thead> <tbody> <tr> <td>List of participants</td> <td>LCPI SERAM, PERTIMM, INRIA, ROBOTIKER, CRF (FIAT), STILE BERTONE, UNIVERSITY OF LEEDS, UNIVERSITY OF CARDIFF</td> </tr> <tr> <td>Coordinator organization</td> <td>LCPI SERAM : Laboratoire Conception de Produits et Innovation, SOCIETE D'ETUDES ET DE RECHERCHES DE L'Ecole Nationale Superieure d'Arts et Metiers</td> </tr> <tr> <td>E-mail contact person</td> <td><a href="mailto:carole.bouchard@pers.ensam.fr">carole.bouchard@pers.ensam.fr</a></td> </tr> <tr> <td>Project Website</td> <td><a href="http://www.trendsproject.org">www.trendsproject.org</a></td> </tr> <tr> <td>Project Type</td> <td>STREP (Specific Targeted Research Project)</td> </tr> <tr> <td>Contract number</td> <td>FP6-IST-27916</td> </tr> <tr> <td>Start Date</td> <td>1 January 2006</td> </tr> <tr> <td>Duration</td> <td>36 months</td> </tr> </tbody> </table>

# INDEX

## 1. OBJECTIVES

1.1 General Context: TRENDS Project WP1 ................................................................. 3
1.2 Functional Specification Objectives ................................................................. 3
1.3 Contents of the Report ..................................................................................... 4
1.4 Task Schedule ............................................................................................... 4

## 2. PROTOCOL .......................................................................................................... 6

2.1 Functional Analysis ....................................................................................... 6
2.2 Questionnaire Elaboration ............................................................................ 6
2.3 Functional Analysis Validation ...................................................................... 7

## 3. RESULTS ............................................................................................................. 8

3.1 End-users’ Feedback ...................................................................................... 8
3.2 Discussion ...................................................................................................... 9

## 4. CONCLUSION .................................................................................................... 12

## 5. LIST OF FIGURES AND TABLES ........................................................................ 13

## 6. GLOSSARY ........................................................................................................ 14

1. OBJECTIVES

1.1 GENERAL CONTEXT: TRENDS PROJECT WP1

The first work package (WP1) of the TRENDS project consisted of an “END-USER NEEDS ANALYSIS”, i.e. specifying and validating end-users’ needs for the future TRENDS system and software. End-users come from the relevant industrial partners in the consortium, ‘Centro Ricerche FIAT’ and ‘Stile Bertone’, in the Marketing, Design and Innovation departments. Previous studies showed that designers at car design houses and at car manufacturers use the same information. Information about their needs was complemented by additional information on existing systems in industry and research. From these input data, a functional analysis was carried out to derive functional specifications as input data for the following work package.
The validation of the first interface elements will be done with a specific sample including the end-users of the TRENDS consortium and additional designers working at other companies and at car manufacturers.

WP1 objectives are the following:
- To define the user needs and the methodology of interviewing, benchmarking, etc.
- To produce a worldwide state of the art and a benchmarking database on design information systems.
- To define functional specifications for the TRENDS system.
- To validate the resulting data with end-users.

The needs analysis was based on recent methods in ergonomics, such as the ethnographic approach. The collaborative work between partners lay first in the interviews, then in a functional analysis supported by team meetings between the partners.

1.2 FUNCTIONAL SPECIFICATION OBJECTIVES

The work package was structured in two main phases. The first phase of WP1 included three initial parallel subtasks:
- T1.1. Interviews with the end-users: designers, engineers and marketing people at CRF and at SB.
- T1.2. Worldwide state of the art on design and innovation information systems.
- T1.3. Benchmarking with innovation, design and R&D departments.

The “functional specification” phase used the previous results as input data for the formalization of a **functional analysis** (T1.4) in which the WP1 partners participated. It was a collective task allowing various points of view on the TRENDS system to be combined. The functional analysis consisted of:
- structuring the end-users’ needs
- defining the future software goals, the software’s external environment and its life cycle
- expressing the main functions to be offered, the constraint functions and the criteria linked to these functions

The functional analysis was fed with the interview synthesis, which gave rise to the list of user-needs specifications: user specifications are provided in the form of a verbal list, together with a diagnosis of the current situation (specific problems and identified needs) and an ideal vision of the future information system supporting trends analysis, idea generation and design activities.

### 1.3 CONTENTS OF THE REPORT

This report is related to task T1.5. It aims to set out the validation results expressed by the end-users after an initial validation of the function list by the TRENDS partners. The first part is dedicated to the protocol used for the questionnaire. This questionnaire included three levels of detail. The results were obtained with about twenty end-users. After refinement and reduction by LCPI, the whole list of functions was ranked by the end-users according to the importance of each listed function for them. The second part details the results and the operational conclusions. This ranking will be used to prioritize the different functions. Besides, it will help in preparing the first creativity session, which aims to propose the first graphical interfaces for the TRENDS system.

### 1.4 TASK SCHEDULE

If we consider the whole task schedule of the first work package WP1 (see figure 4), we can see that deliverable D1.5 is the last one. It corresponds to the overall validation task for the complete work package. This validation includes both points of view: that of the end-users and that of the TRENDS system designers and developers.
These two visions are complementary and essential because the end-users are not always able to imagine new functionalities that could fulfil their needs, and the system developers cannot exhaustively imagine the problems the end-users encounter in their current activity or could encounter when using the future TRENDS system.

Fig. 2: D1.4 and D1.5 Subtasks Outputs Description
- **Needs Analysis** - Extraction of users' needs and functionalities - TRENDS-System Description - Raw List of Specifications - TRENDS-System Description - CRF - LCPI-SERAM - LEEDS - SB
- **Specifications Validation** - CRF - LCPI-SERAM - LEEDS - SB
- **Criteria List** - by TRENDS-system designers
- **Validation of Functions** - CRF - LCPI-SERAM - LEEDS - SB
- **Statistics: Functions / Sources** - LCPI-SERAM - LEEDS - SB
- **TRENDS-System Functional Description** - CRF - LCPI-SERAM - LEEDS - SB

2. PROTOCOL

2.1 FUNCTIONAL ANALYSIS

Previous deliverables D1.1, D1.2 and D1.3 presented the outputs from the interviews with TRENDS-system end-users, from the state of the art of research on tools close to the TRENDS-system, and from a market study of existing tools close to the TRENDS-system. Those deliverables provided us with a list of around 200 items that were extracted from the reports by the design researchers. A functional analysis protocol was carried out in order to structure the needs and to formalize the functional requirements for the TRENDS-system, based on end-users’ needs. We ended up with a list of main functions and additional functions to be found in the TRENDS-system. In the functional analysis task (cf. D1.4 deliverable), the TRENDS-system developers participated by giving their view in terms of technical capabilities.

2.2 QUESTIONNAIRE ELABORATION

Starting with the list of functions that was built through the functional analysis session, we came up with a reduced 124-item questionnaire divided into five parts; the functional proposals covered issues (cf. figure 2) that were raised in the interviews, such as:
- interaction
- search
- use
- store
- resources

Fig. 3: Design process as a guideline for needs-ranking questionnaire

In each of these five categories, questions were grouped into sub-categories; we therefore had three different levels in the questionnaire, as shown in table 1.

<table> <thead> <tr> <th>Levels Description</th> <th>Example</th> </tr> </thead> <tbody> <tr> <td>Level 1 – Design process categories</td> <td>“SEARCH”</td> </tr> <tr> <td>Level 2 – Functional subcategories</td> <td>“Searching not only with text as input data”</td> </tr> <tr> <td>Level 3 – Functionality</td> <td>“Possible input data: sketches”</td> </tr> </tbody> </table>

If they covered similar issues, some of the functions coming from the functional analysis were grouped into a single question, as shown in figure 3. This operation helped to reduce the questionnaire length, thereby ensuring a better reply rate among the end-users who received the questionnaire. The questions were translated into the end-users’ language. Indeed, some functions coming from the state of the art or from the benchmarking were proposed mainly by computer scientists and are, as such, not easy for the end-users to understand.
Fig. 4: Items processing from “functional specification list” (D1.4) to “ranking questionnaire” (D1.5)

<table> <thead> <tr> <th>Data from the functional specifications list (2 items)</th> </tr> </thead> <tbody> <tr> <td><strong>MA</strong><sup>1</sup></td> </tr> <tr> <td><strong>SOA</strong><sup>2</sup></td> </tr> </tbody> </table>

⇒ *Data in the ranking questionnaire (1 item)*

*Level 1: SEARCH*
*Level 2: Being able to search through subjective concepts (emotions)*
*Level 3: Be able to use subjective data as search inputs (aggressive, comfortable…)*

### 2.3 Functional Analysis Validation

As a final step of the functional analysis (cf. figure 3), end-users were involved again in the functional description of the TRENDS-system, since they had to rank the functional requirements according to their major expectations towards the TRENDS-system. For this purpose, a questionnaire (cf. Annex: “Needs Ranking – Interactive Questionnaire Grid to End-users”) was e-mailed to each end-user who participated in the interviews, i.e. 32 individuals. 20 of them sent back a completed questionnaire, with the following distribution:
- 15 designers
- 2 design project managers
- 3 R&D collaborators

End-users had to evaluate functional proposals for the TRENDS-system on a 5-level scale, from “1 – not important” to “5 – essential”. The raw results are shown in the following paragraph (cf. 3.1 End-users’ Feedback). The ranking of functional requirements by end-users serves as a validation of the functional specification and is described in the remainder of this report.

---
<sup>1</sup> Market analysis (D1.3 output)
<sup>2</sup> State-of-the-art (D1.2 output)

3. RESULTS

3.1 END-USERS’ FEEDBACK

We put together all replies to the questionnaire in one single table (cf. table 2). The respondent’s profile was indicated with the letters “d” (design department), “rd” (R&D department) and “pm” (project management). Every answer was coded with a number corresponding to the chosen level on the scale from “not important” (coded “1”) to “essential” (coded “5”). For each question-item, several average values were computed:
- the average value related to the designers’ answers
- the average value related to the design-project managers’ answers
- the average value related to the R&D people’s answers
- the overall average value, related to all twenty answering end-users

All participants answered the level-3 questions, while only designers answered the level-2 questions, probably due to communication problems with the questionnaire instructions.

---
<sup>3</sup> For explanations about questionnaire levels, please refer to table 1.
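For illustration only (this sketch is not part of the deliverable; the data and names are made up), the per-profile and overall averages described above amount to a grouped mean over the coded replies:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative only: average rating per respondent profile ("d" = design,
// "pm" = project management, "rd" = R&D) and overall, for one question-item,
// from answers coded 1 ("not important") to 5 ("essential").
int main()
{
    std::vector<std::pair<std::string, int>> answers = {
        {"d", 5}, {"d", 4}, {"pm", 3}, {"rd", 4}, {"d", 5}};   // made-up data

    std::map<std::string, std::pair<int, int>> perProfile;     // profile -> (sum, count)
    int totalSum = 0;
    for (const auto& [profile, score] : answers) {
        perProfile[profile].first  += score;
        perProfile[profile].second += 1;
        totalSum += score;
    }
    for (const auto& [profile, sc] : perProfile)
        std::cout << profile << ": " << double(sc.first) / sc.second << '\n';
    std::cout << "overall: " << double(totalSum) / answers.size() << '\n';
}
```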
3.2 DISCUSSION

The first statistical analysis shows that the average values representing the end-users’ answers are all above the midpoint between “not important” and “essential”, which means that, on average, the end-users found all the items proposed in the questionnaire useful. This is partly because the proposed items came from the previous end-users’ needs analysis (interviews).

Fig. 5: Designers’ Feedback

The statistical results allow the item rankings to be visualized at the various questionnaire levels:
- The level-1 ranking made by the designers shows the following order (by decreasing importance): “store”, “search”, “interaction”, “use”, “resources”.
- At level 2, in the “store” category for instance, we notice that the “accessibility of the collections” of data, pictures, etc. was ranked as the most important item, with an average of 4.6 (on the 5-level scale). → In the future development of the TRENDS software, this need should be taken into account by the developers by designing a clear interface and a clear usage protocol.
- As another example, in the “store” category, we notice that the “accessibility of the collections” is ranked higher than the capability of “having a private space for collections”. → A clear interface and a clear collections usage protocol should take priority over the development of a private area for collection storage.
- This ranking table also makes it possible to visualize the various points of view among the end-users, depending on their professional background.
- For instance, in the “Knowing the context of image” category, designers and project managers have different needs: designers rated the “Giving contextual information on sources (origin, specialty, authors...)” proposal 3.9, while project managers rated it 1.5. → If some functionality for “giving contextual information on sources” is developed in the TRENDS system, it will mainly address designers’ needs (at first).
- But sometimes needs are common to all skills: in the “Relation with tool” category, designers, project managers and R&D people all have a very similar point of view about “Increasing the speed of search”. This item was rated 4.5, 5 and 4.7 respectively. → The speed efficiency of the future TRENDS tool should be a priority for the developers, since it was rated very high by all professional skills.
- This ranking also allows the lowest-ranked questionnaire items to be withdrawn from the list, in order to keep only the most essential needs, for which solutions will be developed in the TRENDS system.
- Considering interaction in detail, the designers distinctly aspire to an attractive display of information where feeling words could be combined with objective information. Conviviality is a highly valued requirement for all types of end-users, and more specifically the speed of search has to be high. Active control of information parameters is highly valued as well, especially by decision makers.
- Regarding use, the designers favor functions such as controlling the quantity and quality of information with a large visualization potential, browsing various sectors and databases, and translating market language into design language. All other functions still seem very important to them, though to a lower extent: ratings below 3 are rare, occurring only for two items, linked to sharing points of view between departments and communicating about inspiration sources.
- Regarding search, it is important for the designers to be able to combine vague and focused search at the same time, and to have the possibility of a similarity-based search. The search should accept several kinds of data, such as images and sketches on top of words. The search has to be very quick and very picture-oriented, with functions like identifying, controlling and categorizing.
- The storing function is of great importance, especially for the designers; most of the time the rating is above 4 on the evaluation scale. For them, storing means having both private and shared spaces for a flexible collection, possibly accessible everywhere and at any time. They would like to be able to organize and categorize information in flexible categories, with different structures such as by project or by sector.
- The expected resources of the future TRENDS system mainly include images classified by categories in many sectors, together with meta-information. Rather than a huge quantity of data, designers look forward to recent, high-quality images coming from various sectors. It may be surprising that designers did not assign the highest value to the necessity of linking images with impression words, since this item was considered a major challenge of the TRENDS project. In fact, even if designers constantly use impression words and emotions to characterize images, this work is their own business and is often done very intuitively. For that reason, we lean towards building a learning capability into the future TRENDS system, able to learn rules expressed directly by the designers. It is also necessary to propose a large database of precedents, showing related information on specific brands. User data and contextual data are expected as well.

4. CONCLUSION

The ranking questionnaire made it possible to get the end-users’ evaluation of the functional proposals that were listed in the “Functional Specifications List” (D1.4 deliverable). Thus, functional specifications were ranked from “not important” to “essential”. We can then reduce the list of functionalities that will be developed in the TRENDS system by keeping only the needs of greatest importance. The ranking questionnaire allowed the needs of end-users from various professional backgrounds (designers, project managers and R&D people) to be evaluated, and clear differences in needs depending on the profile were observed.

Designers put emphasis on visualization and on the quality and freshness of information, mainly in the form of images from various sectors. The most important function they expect is storing: in their usual activity they are limited by their own memory, and the storing function could help them find and retrieve adequate information. The designers stated two specific needs involving conflicting constraints: they would like to store information everywhere and at any time, a function that could be fulfilled on a PDA or any other mobile storage device; but they also want to visualize high-quality, high-resolution images, which is more appropriate on large screens.

People from R&D focus more on the need for a certain variety of information, the possibility of controlling the search, the link between occupations, and the necessity of integrating consumer- and user-related information. In this way, they emphasize more the collaborative aspects of the design activity. The project managers are more interested in decisional aspects and less in operational functions.

The ranking presented in this report will be used for the preparation of the creativity sessions where the interface solutions will be proposed.

5. LIST OF FIGURES AND TABLES

List of Figures

Fig. 1: TRENDS Project “Workpackage 1” Activities ................................................................. 3
Fig. 2: D1.4 and D1.5 Subtasks Outputs Description ............................................................... 5
Fig. 3: Design process as a guideline for needs-ranking questionnaire .................................. 6
Fig. 4: Items processing from “functional specification list” (D1.4) to “ranking questionnaire” (D1.5) .................................................. 7
Fig. 5: Designers’ Feedback (panel: 10 individuals) ................................................................. 9

List of Tables
Tab. 1: Questionnaire level description ..................................................................................... 6

6. GLOSSARY

DESIGNERS
“Designers” are the designers of the TRENDS system to be developed, i.e. the European project partners.

END-USERS
“End-users” are the end-users of the TRENDS system to be developed, i.e. people with design-related skills, coming from departments such as design, marketing or innovation.

FUNCTION
In design science, a function corresponds to a need to be fulfilled through the product. It is directly linked to the service to be ensured and is stated in terms of purpose: “the new system has to enable ...”.

FUNCTIONAL CRITERIA
The functional criteria aim to characterize each identified function with a target quantitative interval within which the system has to be positioned. When it is not possible to quantify a criterion, a qualitative description is used instead.
{"Source-Url": "http://www.trendsproject.org/files/deliverables/TRENDS%20D1.5%20-%20Specification%20Validation%20Results%20PUBLIC.pdf", "len_cl100k_base": 4583, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 23791, "total-output-tokens": 4983, "length": "2e12", "weborganizer": {"__label__adult": 0.00110626220703125, "__label__art_design": 0.07891845703125, "__label__crime_law": 0.0011758804321289062, "__label__education_jobs": 0.06402587890625, "__label__entertainment": 0.0005130767822265625, "__label__fashion_beauty": 0.0009369850158691406, "__label__finance_business": 0.00897216796875, "__label__food_dining": 0.0011091232299804688, "__label__games": 0.002040863037109375, "__label__hardware": 0.00501251220703125, "__label__health": 0.0019969940185546875, "__label__history": 0.0025577545166015625, "__label__home_hobbies": 0.0006289482116699219, "__label__industrial": 0.004505157470703125, "__label__literature": 0.0018262863159179688, "__label__politics": 0.0007295608520507812, "__label__religion": 0.0014400482177734375, "__label__science_tech": 0.277587890625, "__label__social_life": 0.0003514289855957031, "__label__software": 0.03509521484375, "__label__software_dev": 0.50634765625, "__label__sports_fitness": 0.0004987716674804688, "__label__transportation": 0.0019130706787109375, "__label__travel": 0.0005469322204589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21488, 0.0187]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21488, 0.15424]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21488, 0.89181]], "google_gemma-3-12b-it_contains_pii": [[0, 776, false], [776, 2427, null], [2427, 4397, null], [4397, 6897, null], [6897, 7522, null], [7522, 9375, null], [9375, 11610, null], [11610, 12659, null], [12659, 13840, null], [13840, 17537, null], [17537, 18034, null], [18034, 19873, null], [19873, 20684, null], [20684, 21488, null]], "google_gemma-3-12b-it_is_public_document": [[0, 776, true], [776, 2427, null], [2427, 4397, null], [4397, 6897, null], [6897, 7522, null], [7522, 9375, null], [9375, 11610, null], [11610, 12659, null], [12659, 13840, null], [13840, 17537, null], [17537, 18034, null], [18034, 19873, null], [19873, 20684, null], [20684, 21488, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21488, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21488, null]], "pdf_page_numbers": [[0, 776, 1], [776, 2427, 2], [2427, 4397, 3], [4397, 6897, 4], [6897, 7522, 5], [7522, 9375, 6], [9375, 11610, 7], [11610, 12659, 8], [12659, 13840, 9], [13840, 17537, 10], [17537, 18034, 11], [18034, 19873, 12], [19873, 20684, 13], [20684, 
21488, 14]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21488, 0.1]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
387dd62668af03e5506b359451a44f0d84ac9f9a
MVC-3D: Adaptive Design Pattern for Virtual and Augmented Reality Systems

Samir Benbelkacem, Djamel Aouam, Nadia Zenati-Henda, Abdelkader Bellarbi, Ahmed Bouhena, Samir Otmane

To cite this version: HAL Id: hal-02052171 https://hal.science/hal-02052171 Submitted on 28 Feb 2019

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

MVC-3D: Adaptive Design Pattern for Virtual and Augmented Reality Systems

Samir Benbelkacem, Djamel Aouam, Nadia Zenati-Henda
Centre de Développement des Technologies Avancées, Cité 20 août 1956, Baba-Hassen, Algiers
sbenbelkacem@cdta.dz

Abdelkader Bellarbi, Ahmed Bouhena
Centre de Développement des Technologies Avancées, Cité 20 août 1956, Baba-Hassen, Algiers
abellarbi@cdta.dz

Samir Otmane
IBISC, Univ Evry, Université Paris-Saclay, 91025 Evry, France
samir.otmane@ibisc.univ-evry.fr

Abstract

In this paper, we present the MVC-3D design pattern for developing virtual and augmented (or mixed) reality interfaces that use new types of sensors and modalities and implement specific algorithms and simulation models. The proposed pattern extends the classic MVC pattern by enriching the View component (interactive View) and adding a specific component (Library). The results obtained in developing augmented reality interfaces showed that the complexity of the components is reduced; complexity increases only in the Library component (L). This helps programmers structure their models well even as interface complexity increases. The proposed design pattern is also used in a design process called “MVC-3D in the loop” that enables a seamless evolution from the initial prototype to the final system.

1 Introduction

The evolution of the interaction between a user and mechatronic systems requires a new design of task-adapted and usable user interfaces. This leads to a new generation of user interfaces (NGUIs) that provide a set of new human-computer interaction technologies based on newly developed modalities, sensors and algorithms. NGUIs diverge from the well-known WIMP (Window, Icon, Menu and Pointer) paradigm and use novel interaction paradigms such as Virtual Reality (VR) and Augmented Reality (AR), tangible, embodied and multi-modal interfaces. In recent years, research has focused on technical issues such as tracking, rendering and I/O devices in order to design such interfaces. The lack of design models and the interdisciplinary nature of virtual and augmented (or mixed) reality design call for design approaches based on iterative refinement. Moreover, the lack of formal verification techniques and the limited design expertise limit the applicability of techniques such as expert reviews for NGUI applications; experimental evaluation through tests with end-users remains a potential solution. However, a simple implement-and-test approach is not viable because the implementation of a working prototype is expensive and time-consuming, limiting the number of concepts and designs that can be explored.
To overcome these problems, we propose an adaptive design pattern with supplementary concepts specific to virtual and augmented reality systems. This paper briefly illustrates the principal idea of our design approach in section 3 and presents preliminary results in section 4, but we start with a short review of related work in section 2.

2 Related work

Design models have been proposed in the literature for the development of virtual and augmented reality interfaces [Emm17, Ishii18, Bur07]. Ishii [Ish08] extended the MVC model [Bur92] to develop tangible interfaces. Stöcklein [Sto09] proposed the MVCE model, which integrates an “Environment” component into the MVC model. In [Tar97], the AMF-C design pattern was proposed to design mixed reality collaborative systems. In our previous work [Ben14], we realized three implementations of a fragment of the same application, “car engine maintenance assistance using Augmented Reality”. The first implementation used a traditional MVC model [Bur92]. The second implementation used an MVC pattern implemented with a communication pattern proposed by Eckstein [Eck07]. The third implementation was based on an AMF-C architecture model [Tar97]. The programmers estimated that structuring the code into components can be complex using the MVC model, and that the complexity of the components increases when implementing the AMF-C model. We also observed that the MVCE model [Sto09] increases the complexity of the components when the interface contains more 3D information and integrates a lot of processing (e.g. simulation tools, tracking and gesture recognition algorithms). In addition, introducing complex simulation models and heavy calculations into the Model (M) and Environment (E) components makes development tedious and disrupts programmers’ practices.

In our approach, we add a component to the MVC model that handles all tools, SDKs and heavy processing separately. When a technology is modified (simulation toolkits, algorithms or devices), we change only the additional component’s content without affecting the Controller and the Model; at most, these two components are adapted to support the changes. With this approach, we can reduce the complexity of, and recurrent accesses to, the MVC components and preserve the programmers’ practices.

3 Design Approach

The goal of many augmented- and virtual-reality applications is the development of better user interfaces based on an iterative design process. However, most design processes assume that the underlying technology is well defined and stable, a condition that is often not met in the development of NGUIs, which rely on emerging technologies that are still in early experimental stages. Changes in the underlying technologies can be problematic if a design process does not anticipate such volatility and therefore provides no means to handle it in a systematic and structured way. Also, most virtual and augmented reality prototypes are designed without any structuring, using a trial-and-error prototyping approach. This leads to an overhead in programming, since parts of the application cannot be reused and must be re-implemented for each prototype. In our approach, we have developed an iterative design process that can be used for NGUIs, based on our improved MVC model.

3.1 MVC-3D Design Pattern

We present an adaptive design pattern to implement interaction techniques for virtual and augmented reality interfaces.
We propose a structure that extends the model-view-controller (MVC) design pattern with additional components and features that handle independently all processing specific to VR & AR (e.g. tracking algorithms) and integrate the environment (virtual and real) with the corresponding interaction devices. For tangible interfaces, we build on the concepts presented in [Ish 08]. The benefit of the classical MVC pattern is the separation of the interaction and visualization aspects of the user interface when designing the application. It also enables modular designs in which changes performed on one component do not affect the other components. The Model (M) encapsulates the application data and the functionalities of the application. The Controller (C) handles the user actions and plays an intermediate role between View and Model. The View (V) represents the visual items of the application.

One of the key elements of virtual and augmented (or mixed) reality user interfaces is maintaining a coherent relationship between the real world and the virtual models in real time while incorporating heterogeneous software and hardware technologies. To guarantee this coherence, the structure of the components should be as simple as possible, to facilitate information exchange and reduce processing time. Our vision is to integrate the heavy processing of the application into a specific component that is modular and reusable. Therefore, we introduced an additional component, the “Library”, into the MVC model (see Fig. 1); it encapsulates all processing based on specific algorithms (e.g. tracking techniques; gesture, face and speech recognition algorithms), complex simulation models (e.g. Matlab/Simulink) and SDKs/toolkits needed to process the data provided by the View via the Controller.

In addition, some interaction devices should be in the user’s field of view to perform 3D manipulation tasks (gesture interaction, speech control...). From a practical point of view, we observed that it is difficult to physically dissociate the environment and the interaction devices from the user’s view. For that purpose, we chose to conceptually merge these elements: we enhanced the View component by introducing a sub-component that captures the real environment model of the application and a sub-component that integrates the sensors module and manages the tangible objects (physical and digital). We then obtained a new component that we call the interactive View (iV) (see Fig. 1).

Figure 1: MVC-3D Model

Using the MVC-3D structure, components can be refined independently. One key benefit is the possibility to develop an interactive user interface along the mixed reality continuum [Mil 94] in which the dedicated processing is encapsulated in one component, the Library (L). This component involves heterogeneous and complex simulation models, SDKs and algorithms; its benefit is the capability to manage this heterogeneity and the relationships between the different models. If we plan to change an algorithm or a toolkit, we just modify the Library content without completely changing the Model’s and Controller’s content; a simple adaptation suffices.
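As a rough illustration of this structure (a sketch with hypothetical names, not code from the paper), the four components might be wired together as follows, with the Library holding the replaceable heavy processing and the interactive View bundling rendering with the environment and sensor inputs:

```cpp
struct Model {                 // application data: 3D objects, poses, gestures...
    /* ... */
};

struct Library {               // tracking / recognition / simulation back-end
    virtual ~Library() = default;
    virtual void process(const Model& inputs, Model& state) = 0;  // heavy processing
};

struct InteractiveView {       // classical View + environment model + sensors module
    Model captureInputs() { /* camera images, gestures... */ return {}; }
    void  render(const Model&) { /* draw the augmented scene */ }
};

class Controller {             // routes data between iV, Library and Model
public:
    Controller(Model& m, Library& lib, InteractiveView& iv)
        : m_model(m), m_library(lib), m_view(iv) {}

    void step()
    {
        Model inputs = m_view.captureInputs();  // data from sensors/environment
        m_library.process(inputs, m_model);     // delegate the heavy processing
        m_view.render(m_model);                 // update the augmented scene
    }

private:
    Model&           m_model;
    Library&         m_library;
    InteractiveView& m_view;
};
```

In this reading, swapping a tracking technique amounts to providing another Library implementation; the Controller and Model are, at most, adapted.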
3.2 Design Process

Our goal is to iteratively develop virtual and augmented reality prototypes that can be refined to match the users’ expectations. These prototypes can then be used to evaluate the system with end-users and to validate the technologies used in the system. Using the MVC-3D model, components can be refined independently. The benefit of this approach is the possibility to develop a user interface along the mixed reality continuum [Mil 94], starting from a purely virtual environment and moving towards the real one.

Figure 2 describes our design process. We define each MVC-3D component as an “entity”. An entity can be anything from a model or a visual representation to a controller. At the beginning of the development process, an initial set of entities is identified, and for each entity in this set the inputs and outputs are defined. If the development targets a complex mechatronic system, each relevant system component (either hardware or software) is initially represented by an entity; additional entities represent the elements of the user interface. This initial set of entities can later be extended. It should include all entities needed for a first prototype that provides a rough approximation of the planned system. The first prototype is then composed from these entities, connecting the information flow between the input and output ports as required by the application. For the technical implementation, we use our mixed reality environment described in [Ben 14].

Figure 2: MVC-3D in the loop

Typically, the first prototype consists of an environment in which all entities are implemented purely in software; the resulting system is therefore a purely virtual environment. The key benefit is that elements in a virtual environment are faster and cheaper to develop. Depending on the development goals and priorities, entities are then selected for refinement. Refinement means that either the behavior or the visual representation of an entity is updated (e.g. the 3D model), or an entity is replaced by another version of the same entity (e.g. game physics vs. Matlab/Simulink). If the entity is concerned with simulating real-world elements, typical refinements are the replacement of a simple simulation with a more realistic one, or the replacement of a simulation component with its real-world equivalent. This approach makes it possible to move from purely virtual environments to hardware-in-the-loop (or, more precisely, mixed-reality-in-the-loop) systems in a structured way. If an entity is concerned with the implementation of interaction or visualization techniques, the replacement can either be a complete exchange of the component (e.g. to compare alternative approaches to system control) or a stepwise refinement, in which a user interface is refined according to user feedback, as in established iterative user-centered design processes. As the development progresses from an initial basic prototype to more complex systems, it can also be necessary to adjust the number of entities, e.g. by splitting the functionality of an entity into two or more entities, or by adding or removing entities. In all cases, the data-flow connections between the entities must be checked and corrected accordingly.
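A minimal sketch of this entity notion (hypothetical types, not from the paper): each entity exposes ports, prototypes are composed by connecting them, and a refinement simply swaps one entity for another behind the same ports.

```cpp
#include <functional>

struct Pose { double x = 0, y = 0, z = 0; };    // data flowing between ports

struct Entity {
    virtual ~Entity() = default;
    virtual void update(double dt) = 0;         // advance the entity one step
    std::function<void(const Pose&)> output;    // output port, wired to the next entity's input
};

struct SimulatedTracker : Entity {              // early, purely virtual prototype
    void update(double) override { if (output) output(Pose{0, 0, 1}); }
};

struct CameraTracker : Entity {                 // later refinement: real sensor data
    void update(double) override { if (output) output(Pose{/* measured pose */}); }
};
```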
## 4 Preliminary Results

We have applied the MVC-3D model during the implementation phase of the extended 2TUP process [Ben 14] to a fragment of the application “maintenance of a car engine by AR”. We have developed three prototypes with different interaction techniques, using the MVC-3D pattern for each one. Preliminary results are given in Table 1.

Prototype 1 shows an AR interface in which four 3D pistons are displayed on a real cylinder head of a car engine (interactive View (iV)). The Model (M) encapsulates the 3D pistons’ data, the transformation matrix of the 3D pistons and the markers’ data. The Controller (C) manages the image data provided by the camera, sends the data to the Library (L) and receives the results. The Library (L) uses the ARToolKit tracking algorithm to process the data provided by the Controller (C).

Prototype 2 displays the same augmented scene, but without markers (interactive View (iV)). The Library (L) implements the MOBIL algorithm [Bel 14], [Bel 17] for a marker-less tracking system. The programmers add the reference-image data of the cylinder head to the Model (M) already established for prototype 1. The Controller (C) was adjusted to exchange data with the Library supporting the MOBIL algorithm. In this case, the content of the Library was modified by implementing the MOBIL algorithm; therefore, the complexity increases only in the Library (L) component, in which the processing to detect and track the scene is more substantial. Prototype 2 is the refinement of prototype 1 using the design process given in Fig. 2: we removed an entity (the ARToolKit SDK) and replaced it with a new entity (the MOBIL marker-less technique), so the MVC-3D is refined along the Library (L) axis (see row 2, column 4 of Table 1).

Table 1: Implementation of three prototypes using the MVC-3D model

<table> <thead> <tr> <th>Prototype</th> <th>Model (M)</th> <th>Controller (C)</th> <th>Interactive View (iV)</th> <th>Library (L)</th> <th>Complexity measure</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Transformation matrix of the 3D pistons; 3D pistons’ data; markers’ data</td> <td>Processes the image provided by the camera: converts the image (pre-processing) and inserts the 3D pistons into the image (post-processing)</td> <td>Four 3D pistons; environment: cylinder head and square markers; sensors module: camera</td> <td>Recognition of simple square markers using ARToolKit</td> <td>(diagram)</td> </tr> <tr> <td>2</td> <td>3D pistons’ data; reference-image data (cylinder head’s image)</td> <td>Processes the image provided by the camera: converts the image (pre-processing) and inserts the 3D pistons into the image (post-processing)</td> <td>Four 3D pistons; environment: cylinder head; sensors module: camera</td> <td>Detection and recognition of the reference image (cylinder head’s image) using the MOBIL algorithm; camera pose estimation</td> <td>(diagram)</td> </tr> <tr> <td>3</td> <td>3D pistons’ data; reference-image data; gestures’ data</td> <td>Manages both the image and gesture data exchanged with the Library</td> <td>Four 3D pistons; environment: cylinder head; sensors module: camera and Kinect; user’s hands</td> <td>MOBIL tracking plus hand position/orientation estimation and gesture recognition</td> <td>(diagram)</td> </tr> </tbody> </table>

Prototype 3 is an extension of prototype 2. It provides an interactive interface where the user manipulates the 3D pistons using his hands. The Library encapsulates, besides the MOBIL tracking system, algorithms computing the position and orientation of a hand and tracking it in the AR scene. The processing linked to gesture recognition [Bel 13] was integrated neither into the Controller (C) and the classical View (V) nor into the Model (M), but into the Library (L). The Model (M) is enriched by adding the gestures’ data. The Controller (C) manages both the image and gesture data exchanged with the Library (L). The interactive View (iV) encapsulates the Kinect module and the user’s hands data, in addition to the real cylinder head’s reference-image data.
Therefore, the interactive View (iV) component facilitates data access (e.g. the Kinect delivers gesture data directly to the classical View (V)), and the Controller (C) can easily handle data arising from the real environment. As in the previous case, prototype 3 is a refinement of prototype 2: we added two entities to prototype 2, (1) a Kinect with the 3Gear library that captures the gesture motion and (2) an AR head-mounted display (HMD) to view the 3D pistons manipulated by the user. The MVC-3D is thus refined along the Model (M), interactive View (iV), Controller (C) and Library (L) axes (see row 3 of Table 1).

Comparing prototypes 2 and 3 using our approach, the structure and content of the Model and Controller have not changed deeply; only a simple adaptation and/or enhancement has been made. Complex algorithms and models are integrated into the Library instead of the Model and Controller, in contrast to the approaches presented in the literature, where all the components increase in complexity. In addition, the structure of the interactive View is valuable because it promotes data exchange between the classical View, the environment model and the interaction devices.

5 Conclusion

To take the specificities of virtual and augmented reality systems into account in a design process, a dedicated design approach is required. In this paper, we presented the “virtual and augmented reality in the loop” process, based on structuring VR & AR applications into Model, interactive View, Controller and Library components that can be refined individually, and we showed how this model was used successfully to refine prototypes. We also detailed the MVC-3D design model. This model can help programmers better structure the programming process; it concentrates specific processing in the Library component, in which the implementation of methods and tools can be well organized. MVC-3D can also be adapted for applications involving heterogeneous algorithms and different interaction devices. Using our design approach, we can preserve the programmers’ practices and reduce programming complexity.

References
{"Source-Url": "https://hal.science/hal-02052171/document", "len_cl100k_base": 4123, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 22818, "total-output-tokens": 5248, "length": "2e12", "weborganizer": {"__label__adult": 0.0007257461547851562, "__label__art_design": 0.0020732879638671875, "__label__crime_law": 0.0006656646728515625, "__label__education_jobs": 0.001552581787109375, "__label__entertainment": 0.00015819072723388672, "__label__fashion_beauty": 0.0003228187561035156, "__label__finance_business": 0.0002682209014892578, "__label__food_dining": 0.0005831718444824219, "__label__games": 0.001544952392578125, "__label__hardware": 0.0034999847412109375, "__label__health": 0.001438140869140625, "__label__history": 0.0006208419799804688, "__label__home_hobbies": 0.00016641616821289062, "__label__industrial": 0.0009031295776367188, "__label__literature": 0.00041961669921875, "__label__politics": 0.0004184246063232422, "__label__religion": 0.000949859619140625, "__label__science_tech": 0.182861328125, "__label__social_life": 0.00013065338134765625, "__label__software": 0.00675201416015625, "__label__software_dev": 0.79150390625, "__label__sports_fitness": 0.0007953643798828125, "__label__transportation": 0.0014276504516601562, "__label__travel": 0.0004131793975830078}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22069, 0.04211]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22069, 0.61732]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22069, 0.85558]], "google_gemma-3-12b-it_contains_pii": [[0, 1138, false], [1138, 5580, null], [5580, 10828, null], [10828, 15589, null], [15589, 20421, null], [20421, 22069, null], [22069, 22069, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1138, true], [1138, 5580, null], [5580, 10828, null], [10828, 15589, null], [15589, 20421, null], [20421, 22069, null], [22069, 22069, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22069, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22069, null]], "pdf_page_numbers": [[0, 1138, 1], [1138, 5580, 2], [5580, 10828, 3], [10828, 15589, 4], [15589, 20421, 5], [20421, 22069, 6], [22069, 22069, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22069, 0.13415]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
1ecb1d06a1dbf9d7782496dc04554a4588162daa
[REMOVED]
{"Source-Url": "http://www.pa.icar.cnr.it/cossentino/AOSETF08/docs/Uniscon_keynote_proofs.pdf", "len_cl100k_base": 5251, "olmocr-version": "0.1.49", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 26645, "total-output-tokens": 6711, "length": "2e12", "weborganizer": {"__label__adult": 0.00024628639221191406, "__label__art_design": 0.00046706199645996094, "__label__crime_law": 0.0003192424774169922, "__label__education_jobs": 0.0014410018920898438, "__label__entertainment": 4.3451786041259766e-05, "__label__fashion_beauty": 0.0001226663589477539, "__label__finance_business": 0.0003139972686767578, "__label__food_dining": 0.0002486705780029297, "__label__games": 0.00029921531677246094, "__label__hardware": 0.00048470497131347656, "__label__health": 0.00025773048400878906, "__label__history": 0.00021457672119140625, "__label__home_hobbies": 6.502866744995117e-05, "__label__industrial": 0.0003466606140136719, "__label__literature": 0.00026488304138183594, "__label__politics": 0.00019788742065429688, "__label__religion": 0.00033092498779296875, "__label__science_tech": 0.01261138916015625, "__label__social_life": 7.015466690063477e-05, "__label__software": 0.00817108154296875, "__label__software_dev": 0.97265625, "__label__sports_fitness": 0.0002033710479736328, "__label__transportation": 0.0003142356872558594, "__label__travel": 0.00015151500701904297}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28450, 0.03551]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28450, 0.74631]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28450, 0.90877]], "google_gemma-3-12b-it_contains_pii": [[0, 2806, false], [2806, 4878, null], [4878, 7106, null], [7106, 8937, null], [8937, 12537, null], [12537, 15070, null], [15070, 17934, null], [17934, 20076, null], [20076, 22916, null], [22916, 24798, null], [24798, 25741, null], [25741, 28450, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2806, true], [2806, 4878, null], [4878, 7106, null], [7106, 8937, null], [8937, 12537, null], [12537, 15070, null], [15070, 17934, null], [17934, 20076, null], [20076, 22916, null], [22916, 24798, null], [24798, 25741, null], [25741, 28450, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28450, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28450, null]], "pdf_page_numbers": [[0, 2806, 1], [2806, 4878, 2], [4878, 7106, 3], [7106, 8937, 4], [8937, 12537, 5], [12537, 15070, 6], [15070, 17934, 7], [17934, 20076, 8], [20076, 22916, 9], [22916, 24798, 10], [24798, 25741, 11], [25741, 28450, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28450, 0.0]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
cc5d5f49eaf5d81b4a4ca252dc74a3ef2f9c584d
# Abstract

The noexcept function specifier and companion noexcept operator were invented to allow containers such as vector to provide the strong exception-safety guarantee when performing operations that require relocating existing elements. The noexcept specifier has since been utilized regularly to improve code generation and sometimes as a form of documentation. The problems with these off-label uses have been known for a long time; the Lakos Rule, which predated the release of C++11, was intended to prevent such use (or misuse) within the C++ Standard Library. This paper proposes an attribute, [[throws_nothing]], as an alternative to noexcept for annotating nonthrowing functions. Being invisible to the noexcept operator and having implementation-defined semantics, [[throws_nothing]] is a less powerful annotation than noexcept that avoids the issues the Lakos Rule was created to address. Thus, [[throws_nothing]] eliminates the temptation to abuse noexcept by providing a tool better suited for improving code generation and documenting programmer intent.

# 2 Brief Sketch of the Proposed Feature

This minimal description of the [[throws_nothing]] feature is sufficient for understanding the “Motivation” section below. A full description of this proposal is found in the “Proposed Feature” section.

A new function attribute, [[throws_nothing]], is proposed for annotating a function that does not throw when called within a correct program. The attribute is not detectable using the noexcept operator:

```cpp
[[throws_nothing]] void f(int);
static_assert(noexcept(f(0)) == false);
```

Whether the program will terminate if an exception attempts to escape from f, above, is implementation defined (and thus potentially user configurable).

# 3 Change log

Changes in R1 from R0 (after Kona 2023):

- Added an optional Boolean argument after a compelling rationale was presented in Kona.
- Added a non-normative note indicating the behavior if a function has both [[throws_nothing]] and noexcept annotations.
- Added (in the Alternatives-considered section) an exploration of annotating statements or expressions in addition to, or instead of, function declarations.

# 4 Motivation

## 4.1 Purpose of noexcept

The noexcept specifier was introduced at the end of the C++11 cycle for one purpose: to enable the safe use of move constructors in vector-like containers that offer the strong exception-safety guarantee for certain operations, such as inserting elements at the end. The problem was first described, with noexcept as the proposed solution, in N2855. This proposal was later refined, eventually resulting in the final wording of N3050.

Below is one possible implementation of a vector-reallocation function, which must leave the original vector unchanged if an exception is thrown while trying to move elements from the old buffer to the new one. This implementation uses if constexpr instead of the std::move_if_noexcept function to make the two different code paths easier to distinguish.

```cpp
template <class T, class A>
void vector<T, A>::reallocate(size_type new_capacity)
{
    using alloc_traits = allocator_traits<A>;

    // `m_alloc` denotes the container's stored allocator (assumed data member);
    // allocator_traits operations take it as their first argument.
    pointer   new_data = alloc_traits::allocate(m_alloc, new_capacity);
    size_type i        = 0;

    if constexpr (noexcept(T(std::move(m_data[i]))))
    {
        for (i = 0; i < size(); ++i)
            alloc_traits::construct(m_alloc, &new_data[i],
                                    std::move(m_data[i]));   // efficient
    }
    else try
    {
        for (i = 0; i < size(); ++i)
            alloc_traits::construct(m_alloc, &new_data[i],
                                    m_data[i]);               // copy (less efficient)
    }
    catch (...)
    {
        while (i)
            alloc_traits::destroy(m_alloc, &new_data[--i]);
        alloc_traits::deallocate(m_alloc, new_data, new_capacity);
        throw;
    }

    // Got here only if no exception was thrown.
    for (i = 0; i < size(); ++i)
        alloc_traits::destroy(m_alloc, &m_data[i]);
    alloc_traits::deallocate(m_alloc, m_data, m_capacity);

    m_data     = new_data;
    m_capacity = new_capacity;
}
```
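For comparison, the same relocation could be written with std::move_if_noexcept, which yields an rvalue only when T's move constructor is nonthrowing (or T is not copyable) and an lvalue otherwise, so overload resolution selects the copy constructor whenever moving would be unsafe. This fragment is a sketch that reuses the names from the code above; it is not part of the paper.

```cpp
// Single loop replacing both branches of the if constexpr above:
for (i = 0; i < size(); ++i)
    alloc_traits::construct(m_alloc, &new_data[i],
                            std::move_if_noexcept(m_data[i]));
```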
The use of T’s move constructor can often yield algorithmic performance advantages over using its copy constructor, sometimes reducing the cost from $O(N)$ or worse per copy to $O(1)$. Such a move constructor, however, typically modifies the original object; if the move constructor might throw, vector must degenerate to using the copy constructor, and thus give up the performance gain, to ensure that it can leave the original object in its initial state. Because vector<T, A>::reallocate is a generic function, using std::move and retaining the strong guarantee in the code above would be impossible if we did not have the noexcept operator.

## 4.2 The Lakos Rule

Since the `noexcept` annotation was added late in the C++11 cycle and was thus brand new and not fully understood, applying it appropriately in the Standard Library was a challenge. John Lakos and Alisdair Meredith proposed what has become known as the Lakos Rule (described in N3279 and extended in P0884). Summarized below, the Lakos Rule provided a conservative framework for deciding whether a specific function may safely be declared `noexcept`.

- If a function has no preconditions (a "wide contract") and is guaranteed not to throw (via an explicit "Throws: nothing" clause), it may be declared `noexcept`.
- If a function has preconditions (a "narrow contract") or if it might throw when called correctly ("in contract"), it must not be declared `noexcept`.

The example below shows a subset of the `std::vector` interface. Note that only `size()`, which promises not to throw and has no preconditions, is declared `noexcept`; the others each fail one or both of the Lakos Rule tests and are thus not `noexcept`.

```cpp
template <class T, class A>
class vector {
    // ...
    constexpr size_type size() const noexcept;   // wide contract, doesn't throw
    constexpr reference  at(size_type);          // wide contract, might throw
    constexpr reference  operator[](size_type);  // narrow contract, doesn't throw
    constexpr reference  front();                // narrow contract, doesn't throw
};
```

## 4.3 Resistance to the Lakos Rule

Although the Lakos Rule is effective and has strong theoretical and practical underpinnings (e.g., enabling certain backward-compatible extensions and conforming wider interfaces; see P2861), two reasons have emerged for violating it.

1. Under many (but by no means all) circumstances, calling a `noexcept` function generates less code. Thus, programmers — both within and outside WG21 — want to use `noexcept` to improve code generation, yet the author has seen no compelling evidence that `noexcept` produces measurably faster code on any modern platform.
2. Within WG21, concern has been voiced that the distinction between "Throws: nothing" and `noexcept` is unclear (see P1656).

As tempting as it might be, violating the Lakos Rule is ill-advised unless a compelling case can be made that querying the function with the `noexcept` operator is necessary for optimizing or ensuring correctness of a generic algorithm at compile time. As described in P2861, if `noexcept` is added to a function in one version of the Standard, it cannot be removed in a future version without potentially breaking code. Specifically, widening the contract of a function to add new functionality is safe, provided that every program written for the old version has the same observable behavior when compiled with the new version; but if the old version is annotated with `noexcept`, the new version cannot be widened to accept new values that would result in an exception being thrown.

Moreover, immediate forced termination is not an option in some environments. In such an environment, a defensive-programming library (or language contract facility) might want to throw an exception on a precondition violation — even within a function that would not otherwise throw — so that the program can shut down gracefully or even soldier on. This ability to continue after a logic error has been detected is especially useful when testing the precondition checks themselves. The `noexcept` specifier interferes with such a throwing defensive-programming facility (see P2831R0).
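To make the interference concrete, here is a small sketch (not from the paper; the names are hypothetical): a defensive check that throws inside a noexcept function can never deliver its exception, because reaching the noexcept boundary calls std::terminate with no unwinding.

```cpp
#include <stdexcept>

void process(int* p) noexcept            // hypothetical nonthrowing function
{
    if (p == nullptr)                    // defensive precondition check
        throw std::invalid_argument("p must not be null");
    // ... normal processing ...
}

// Calling process(nullptr) reaches the noexcept boundary and std::terminate()
// is invoked immediately: no stack unwinding, no RAII cleanup, and no chance
// for a test driver to catch the violation.
```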
Specifically, widening the contract of a function to add new functionality is safe, provided that every program written for the old version has the same observable behavior when compiled with the new version, but if the old version is annotated with `noexcept`, the new version cannot be widened to accept new values that would result in an exception being thrown.

Moreover, immediate forced termination is not an option in some environments. In such an environment, a defensive programming library (or language contract facility) might want to throw an exception on a precondition violation — even within a function that would not otherwise throw — so that the program can shut down gracefully or even soldier on. This ability to continue after a logic error has been detected is especially useful when testing the precondition checks themselves. The `noexcept` specifier interferes with such a throwing defensive-programming facility (see P2831R0).

4.4 Serving the C++ Multiverse

The goal of this proposal is to address the constituencies within the different C++ universes (the C++ multiverse) that have been ill served by noexcept alone, such as embedded-software developers who want smaller code and those who need to avoid immediate termination of their programs (e.g., upon detecting contract violations). What is needed is a way to provide the desired code-generation and documentation benefits of noexcept without violating either the spirit or the letter of the Lakos Rule.

4.4.1 The embedded-software development universe

The reduction in generated-code size that usually results from annotating a called function with noexcept or [[throws_nothing]] has the most impact in memory-constrained environments. WG21 members often assert that embedded-software developers turn off exceptions in their builds because the code-size cost of leaving them enabled is too large. Unfortunately, this assumption has led to a self-fulfilling prophecy: WG21 does nothing to make exceptions friendlier for embedded programmers, so embedded programmers, in turn, eschew exceptions. Since even an embedded microprocessor may have several megabytes of RAM available to it, completely turning off exceptions is not always necessary. Given an appropriate C++ implementation, judicious use of the [[throws_nothing]] attribute can help an executable stay within its memory budget while following the best design practices, including the Lakos Rule.

4.4.2 The graceful-termination universe

In high-data-integrity environments, it is often unacceptable to terminate suddenly when encountering an error. To avoid data corruption, resources must be released, transactions rolled back, and user data saved, before aborting. If any of these actions would normally occur in a destructor of an RAII object, then such a graceful shutdown could not be accomplished readily in a terminate handler. Imagine a defensive-programming library comprising an assert-like macro and a custom function to handle assertion failures:

```cpp
#include <iostream>

// assert_failure_exception is the library's exception type, defined elsewhere.
void assert_failure_handler(const char* file, unsigned line, const char* func,
                            const char* expression)
{
    std::cerr << file << ':' << line << ": in function " << func
              << ": assertion failure " << expression << std::endl;
    throw assert_failure_exception(file, line, func, expression);
}

#ifdef CHECKED_MODE
#define ASSERT(cond) \
    ((cond) ? (void) 0 : assert_failure_handler(__FILE__, __LINE__, __FUNCTION__, #cond))
#else
#define ASSERT(cond) (void) 0
#endif
```

Now, imagine an integer absolute-value function, intAbs, having the precondition that the input is not INT_MIN, because the absolute value of INT_MIN is not representable in an int. When called in contract, intAbs does not throw an exception, so it is declared with [[throws_nothing]]. Within the function body, intAbs checks its precondition using the above ASSERT:

```cpp
// Return absolute value of x. Precondition: x is not `INT_MIN`.
[[throws_nothing]] int intAbs(int x)
{
    ASSERT(x != INT_MIN);  // precondition check
    return x < 0 ? -x : x;
}
```

Code using this function might have a subtle bug — a call that violates the precondition — that is detected only during beta testing (when real user data is at stake). To trigger the precondition check without suddenly terminating the program, `[[throws_nothing]]` must allow the assert-failure exception to escape. The organization would thus choose a C++ implementation that ignores `[[throws_nothing]]`, thus allowing exceptions to propagate. By disabling the enforcement of `[[throws_nothing]]`, only the behavior of erroneous code is changed; the essential behavior is unaffected, although larger code size might be observed.

### 4.4.3 The must-not-terminate universe

Some programs must not terminate at all, ever. For example, a game engine might continue running its main event loop after an error is detected, even if continuing would result in a momentary glitch on the screen. The universe of such programs has similar requirements to the graceful-termination universe; unexpected exceptions thrown from presumably nonthrowing functions should not terminate the program but release resources in an orderly way, then continue.

Test drivers are an important subset of the must-not-terminate universe. A precondition check, like any other aspect of a function, should be tested — i.e., by providing inputs at the boundaries of the precondition — including deliberately violating it. When every precondition violation causes program termination, writing a portable and efficient test driver is not possible, as described in P2831R0. Given the defensive-programming library and `intAbs` function from the “The graceful-termination universe” section above, a white-box unit test for `intAbs` could test that the `ASSERT` correctly encodes the documented precondition. Using a throwing failure handler (as shown in `assert_failure_handler`), the test engineer would write a negative test that deliberately violates the precondition:

```cpp
bool testPreconditionViolation()
{
    try {
        intAbs(INT_MIN);
        return false;  // failed to catch the precondition violation
    }
    catch (const assert_failure_exception&) {
        return true;   // successfully caught the precondition violation
    }
}
```

The test engineer would, as in the previous section, choose a C++ implementation that ignores `[[throws_nothing]]`, thus allowing the precondition check to detect the deliberate error without terminating the test program. Because `[[throws_nothing]]`, unlike `noexcept`, cannot be used to change the program logic within the caller, test engineers can have reasonable confidence that they are fully testing the function, even if the final program is eventually deployed using an implementation that terminates on a `[[throws_nothing]]` violation.

### 4.4.4 The library-specification universe

A number of features in C++ that are intended to reduce errors or improve code generation have the side effect of making code more self-documenting.
For example, `const` indicates — to both the compiler and human reader — that a variable’s value will not change, and the `assert` macro documents an invariant of an algorithm in a way that is enforceable at run time. Similarly, `[[throws_nothing]]` indicates, at a glance, that a function will not throw when called in contract; both the implementation and the human reader benefit. Within the C++ Standard Library, functions having “Throws: nothing” as part of their description could be annotated with `[[throws_nothing]]`. Whether such a practice would add clarity is a matter for LWG to decide.

5 Proposed Feature

A Standard attribute, tentatively named `[[throws_nothing]]` and appertaining to function declarations, is proposed to indicate that a function is specified not to throw when all its preconditions are met (i.e., it is called in contract):

```cpp
[[throws_nothing]] void f(int);
```

The `[[throws_nothing]]` attribute takes an optional argument that is a constant expression contextually convertible to `bool`, where `[[throws_nothing(false)]]` is equivalent to omitting the attribute:

```cpp
// A call to `f2<T>` is declared never to throw if `T` is trivially copyable.
template <class T>
[[throws_nothing(std::is_trivially_copyable_v<T>)]] void f2(const T&);
```

The presence of the `[[throws_nothing]]` attribute cannot be queried by the program itself at compile time; the result of the `noexcept` operator and the function type are unchanged:

```cpp
[[throws_nothing]] void g1(int);
static_assert(noexcept(g1(0)) == false);

[[throws_nothing]] void g2(int) noexcept;
static_assert(noexcept(g2(0)) == true);

[[throws_nothing]] void g3(int) noexcept(false);
static_assert(noexcept(g3(0)) == false);

[[throws_nothing(false)]] void g4(int);
static_assert(std::is_same_v<decltype(g1), decltype(g4)>);
```

Intentionally making `[[throws_nothing]]` invisible to the `noexcept` operator prevents using `[[throws_nothing]]` to select an algorithm at compile time; the attribute does not change the essential behavior¹ of a correct program and can be removed from a subsequent version of a function, provided the behavior of the function does not change for any previously valid inputs.

¹ Essential behavior comprises the promised behavior of a function when called in contract. The return value, guaranteed side effects, and complexity guarantees are part of essential behavior. The layout of objects, number of instructions executed, and logging are rarely part of a function’s essential behavior. The effects of calling the function out of contract are never part of essential behavior.

If a `[[throws_nothing]]` function attempts to exit via an exception, then whether `std::terminate` is called or the annotation is ignored (and the exception propagates normally) is implementation defined. The recommended best practice is to make both semantics available to the user. If, however, the function is also annotated with `noexcept` or `noexcept(true)`, `std::terminate` is always called, regardless of the implementation’s semantic for `[[throws_nothing]]`. By making the behavior of an incorrect program — one that attempts to throw from a `[[throws_nothing]]` function — implementation defined, rather than always terminating, the behavior can vary to serve the multiple constituencies of the C++ multiverse. On an implementation that calls `std::terminate`, a call to a function annotated with `[[throws_nothing]]` is likely to result in smaller generated code compared to one with no annotation at all. Conversely, an implementation that ignores the attribute allows for graceful
shutdown, log-and-continue semantics, and effective testing of contract checks in functions that would not otherwise throw.

As with noexcept currently, implementations of the Standard Library would be permitted to use [[throws_nothing]] for any nonthrowing function, even though the Standard itself would never mandate its use. In fact, for discretionary use by implementations, [[throws_nothing]] is much better than the noexcept specifier because [[throws_nothing]] cannot inadvertently change the meaning of a correct program and is responsive to the settings used to build the program.

5.1 Feature comparison

For functions that promise not to throw, the table below compares [[throws_nothing]] to noexcept and to using no annotation at all (unmarked). “If terminate” means “yes” for implementations that terminate on unexpected exceptions and “no” otherwise; “if ignore” means “yes” for implementations that ignore the annotation and “no” otherwise. The purpose of the table is not to show that one annotation is better than the other, but that, despite some overlap, they serve different purposes and therefore support different use cases, none of which violate the Lakos Rule.

<table>
<thead>
<tr> <th></th> <th>unmarked</th> <th>noexcept</th> <th>[[throws_nothing]]</th> </tr>
</thead>
<tbody>
<tr> <td>Makes function self-documenting</td> <td>no</td> <td>yes</td> <td>yes</td> </tr>
<tr> <td>Provides codegen hint to compiler</td> <td>no</td> <td>yes</td> <td>yes</td> </tr>
<tr> <td>Terminates on unexpected exception</td> <td>no</td> <td>yes</td> <td>if terminate</td> </tr>
<tr> <td>Suitable for wide contracts</td> <td>yes</td> <td>yes</td> <td>yes</td> </tr>
<tr> <td>Suitable for narrow contracts</td> <td>yes</td> <td>no</td> <td>yes</td> </tr>
<tr> <td>Compatible with graceful shutdown</td> <td>yes</td> <td>no</td> <td>if ignore</td> </tr>
<tr> <td>Compatible with log-and-continue</td> <td>yes</td> <td>no</td> <td>if ignore</td> </tr>
<tr> <td>Compatible with throwing defensive checks</td> <td>yes</td> <td>no</td> <td>if ignore</td> </tr>
<tr> <td>Supports compile-time algorithm selection</td> <td>no</td> <td>yes</td> <td>no</td> </tr>
</tbody>
</table>

5.2 Syntax and spelling

The [[throws_nothing]] annotation fits well with the conventional notion of an attribute: Removing the attribute has no essential effect on a correct program (see P2552R3). Rendering this functionality as a keyword or contextual keyword seems unnecessary. Putting the [[throws_nothing]] attribute in the same location as noexcept would seem logical, but for an attribute to appertain to a function, the attribute must occur either before the function declaration or immediately after the function identifier:

```cpp
[[throws_nothing]] void f(int);    // OK
void g [[throws_nothing]] (int);   // OK
void h(int) [[throws_nothing]];    // ERROR: improper attribute placement
```

The original spelling for the attribute was [[does_not_throw]], which (for people who count keystrokes) happens to have the same number of characters as [[throws_nothing]]. The name was changed to [[throws_nothing]] to match the “Throws: nothing” phrasing that LWG uses when documenting functions that do not throw. The optional Boolean argument works like the optional argument to noexcept and enables a function template to make a nothrow promise only when certain conditions are true.
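As an illustration (a sketch, not one of the proposal's own examples), the conditional form composes with ordinary type traits in the same way conditional `noexcept` is typically written, while remaining invisible to the `noexcept` operator:

```cpp
#include <type_traits>
#include <utility>

// Sketch: promise not to throw exactly when swapping two Ts is known not to throw.
template <class T>
[[throws_nothing(std::is_nothrow_swappable_v<T>)]]
void swap_in_place(T& a, T& b)
{
    using std::swap;
    swap(a, b);   // may throw only if T's swap may throw
}
```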
6 Alternatives Considered

6.1 Switching noexcept on and off with a constant expression

One use of [[throws_nothing]] is to allow defensive checks to throw an exception through an otherwise-nonthrowing interface. One proposed way to achieve this behavior for nonthrowing functions is to use `noexcept` in such a way that it can be turned off when desired. This approach can be implemented with the help of the preprocessor. For example, using the framework described in “The graceful-termination universe” section, `noexcept` can be turned off when `CHECKED_MODE` is defined:

```cpp
#ifdef CHECKED_MODE
inline constexpr bool does_not_throw = false;
#else
inline constexpr bool does_not_throw = true;
#endif

void f(int i) noexcept(does_not_throw)  // BAD IDEA!
{
    ASSERT(i < 0);
    // ...
}
```

With this approach, the expression `noexcept(f(0))` will yield different results depending on the `CHECKED_MODE` macro, possibly resulting in different logic paths for debug and release builds, and will thus violate the principle that essential behavior must not be changed by build modes — a principle convincingly advocated for in P2831R0 and P2834R0 and named, by the latter, Build-Mode Independence.

6.2 Having an attribute on an expression rather than a function

Having a syntax to indicate that a statement or an expression does not throw at the point of invocation is intriguing. The same [[throws_nothing]] attribute could conceivably be applied to a statement:

```cpp
std::vector v{ 1, 2, 3 };
for (std::size_t i = 0; i < v.size(); ++i)
{
    int x1;
    [[throws_nothing]] x1 = v.at(i);  // Will not throw
    y = f(i);
    int x2 = v.at(y);                 // Might throw
}
```

This paper does not propose such an extension, as we have no experience with anything similar. Moreover, the abuse of `noexcept` is a problem we have today, and it gets worse with every additional instance. There is clear benefit to being able to annotate function declarations, and implementability is not in question; there is much less clarity about the benefits or implementability of statement- or expression-level nothrow annotations. Consideration of these other annotations is orthogonal to the function-declaration attribute, so we should not delay this proposal while they are considered. In the meantime, some of the potential benefits can be explored by using lambda expressions:

```cpp
x1 = [&] [[throws_nothing]] { return v.at(i); }();  // Will not throw
```

7 Effects on the Standard Library

No changes would be needed immediately in the C++23 Standard Library if [[throws_nothing]] were adopted. LWG can discuss whether to replace or augment “Throws: nothing” in the description with [[throws_nothing]] in the interface of functions having narrow contracts that promise not to throw when called in contract. An immediate change to the C++26 Working Paper might be necessary if any narrow-contract functions targeted for C++26 are currently annotated with noexcept; perhaps those annotations should be changed to [[throws_nothing]] or perhaps the Standard should omit the annotation and leave it up to the implementation to decide whether to use [[throws_nothing]]. Violations of the Lakos Rule already in C++23 could be handled on a case-by-case basis (via DRs). Minimizing such violations would result in greater stability across implementations and versions of the Standard.

8 Implementation Experience

At present, no compilers implement this feature.
If this paper receives a favorable response in EWGI, we will implement the proposed facility before presenting it to EWG. Implementation is expected to be a fairly simple delta on the existing implementation of noexcept.

9 Formal Wording

Changes are relative to the December 2023 Working Paper, N4971.

**Note:** This wording is known to be incomplete; open issues are called out when possible.

Insert the following note somewhere within paragraph 5 of [except.spec]:

```markdown
[Note: The [[throws_nothing]] attribute is not a non-throwing specification. — end note]
```

Insert the following new paragraph after paragraph 5 of [except.spec]:

```markdown
Whenever an exception is thrown and the search for a handler ([except.handle]) encounters the outermost block of a function previously declared with the throws_nothing attribute having no attribute-argument or an attribute-argument that evaluates to true, it is implementation-defined whether the function std::terminate is invoked ([except.terminate]). If the throws_nothing attribute has an attribute-argument that evaluates to false, it has no effect on exception handling. [Note: If the function has a non-throwing exception specification ([except.spec]), std::terminate is invoked regardless of the implementation-specified behavior of throws_nothing. — end note]
```

**Open issue:** In Kona, there was not much support for allowing [[throws_nothing]] on function pointers. Implicitly, then, an indirect call cannot take advantage of [[throws_nothing]] unless the compiler can prove that the function pointer points to a [[throws_nothing]] function. Does anything (normative or non-normative) need to be said about a function called indirectly via a function pointer?

Insert the following new subsection within the [dcl.attr] section:

**Throws-nothing attribute [dcl.attr.throwsnothing]**

The attribute-token throws_nothing specifies whether a function can exit via an exception. An attribute-argument clause may be present and, if present, it shall have the form

```markdown
( constant-expression )
```

The constant-expression shall be contextually convertible to bool. The absence of an attribute-argument is treated as equivalent to ( true ). The attribute may be applied to a function or a lambda call operator. The first declaration of a function shall specify the throws_nothing attribute if any declaration of that function specifies the throws_nothing attribute. If a function is declared with the throws_nothing attribute in one translation unit and the same function is declared with a different throws_nothing attribute or without the throws_nothing attribute in another translation unit, the program is ill-formed, no diagnostic required. The effects of the throws_nothing attribute are described in [except.spec].

[Note 1: Unlike the exception specification of a function ([except.spec]), whether a function is marked with throws_nothing has no effect on the function’s type and is not observable through the noexcept operator ([expr.unary.noexcept]). — end note]

Recommended practice: An implementation should provide to users the ability to translate a program such that all instances of throws_nothing result in std::terminate being invoked as described above. An implementation should further provide to users the ability to translate a program such that all instances of throws_nothing are ignored.
The value of a has-attribute-expression for the throws_nothing attribute should be 0 if, for a given implementation, the throws_nothing attribute never causes std::terminate to be invoked.

Rationale: The Recommended Practice wording is consistent with proposed wording for the Contracts facility; see P2877R0.

[Example 1:

```cpp
template <class Tp>
[[ throws_nothing(std::is_unsigned_v<Tp>) ]] void f(Tp x)
{
    if (x < 0)        throw "negative";  // Normal exception for signed Tp
    else if (x > 100) throw "too big";   // Normal for signed Tp, else implementation-defined
}

static_assert(noexcept(f(-1)) == false);  // OK, attribute is not queryable.
```

— end example]

10 Conclusion

The noexcept specifier is problematic because it can be queried via the noexcept operator, which means that it cannot be changed without changing the meaning of a client program. Moreover, the consequence of violating a noexcept specification is immediate program termination. By creating a similar feature, [[throws_nothing]], that differs only in that it (1) cannot be queried and (2) can be ignored without violating its semantics, we enable optimizing for multiple distinct universes; adopting this proposal achieves the wants and needs of the multiverse.

11 Acknowledgments

Thanks to John Lakos, Joshua Berne, Brian Bi, Mungo Gill, Timur Doumler, and Lori Hughes for reviewing this paper and offering useful improvements. Thanks to Timur Doumler for providing most of the formal wording and Nina Ranns and Joshua Berne for reviewing the wording.
A task scheduler for ROS

Cédric Pradalier
UMI 2958 GT-CNRS - GeorgiaTech Lorraine
Metz, France

October 14, 2016

1 Introduction

Developing a complete robotic system often requires combining multiple behaviours into a complex decision grid, with elements running in sequence or in parallel, possibly interrupting each other. To solve this “age-old” problem, ROS provides two main tools:

- Actionlib: a client-server architecture that provides a way to specify results to be achieved. While the server works on these results, it should report progress and ultimately report when the task is completed.
- Smach: a Python API to define complex state machines. It can interact with ROS services and actions to define a complex behaviour, including nesting, interruptions and concurrence.

Combining Smach and Actionlib, one could build arbitrarily complex systems. Hence, why would another task management system be necessary? The main argument in favour of our task scheduler is the simplicity of its use, particularly in comparison with Actionlib. As usual, simplicity is a trade-off against expressiveness. When more expressiveness is needed, simplicity can be traded away by linking our task scheduler with Actionlib and/or Smach to exploit the best of both worlds.

This task scheduling framework is the culmination of about 10 years of experience developing robotic applications in the academic context. Most applications we designed could be handled using the task scheduling framework we will present in this document. We provide the source code for this project at: https://github.com/cedricpradalier/ros_task_manager.

2 Concepts

2.1 What is a task?

In our framework, a task is seen as a behaviour that can be started, run for some amount of time (possibly infinite), and then terminated either on completion or on interruption. A task requires some context such as system variables, ROS topics, and services, but it can also store its own internal variables and state. Programmatically, a task is a C++ class that must implement an iterate function and may implement initialise and terminate functions. We distinguish two concepts:

- Task Definition: the main description of a task such as its name, help text and parameter description, as well as some status information. It acts as a factory to instantiate the task for specific parameters and is used to communicate with the scheduler clients. For a given task name, a single task definition is possible.
- Task Instance: a task instance is created from the task definition when the task needs to be run. This is the class that needs to implement the initialise, iterate, and terminate functions as well as store any relevant variables. Multiple task instances with the same name and possibly different parameters may be launched from a single task definition.

2.2 A basic example

For an initial example, we create a simple task for reaching a desired destination (let us ignore the template parameters for now).
```cpp class TaskFactoryGoTo : public TaskDefinition<TaskGoToConfig, TurtleSimEnv, TaskGoTo> { public: TaskFactoryGoTo(TaskEnvironmentPtr env) : Parent("GoTo","Reach a desired destination",true,env) {} virtual ~TaskFactoryGoTo() {} }; ``` Using this task definition, we next create a specific task instance: ```cpp class TaskGoTo : public TaskInstance<TaskGoToConfig, TurtleSimEnv> { public: TaskGoTo(TaskDefinitionPtr def, TaskEnvironmentPtr env) : Parent(def,env) {} virtual ~TaskGoTo() {} virtual TaskIndicator iterate(); virtual TaskIndicator terminate(); }; ``` The implementation of this particular task just need to focus on the specifics of the behaviour, taking advantage of some parameters available in the `cfg` variable. ```cpp TaskIndicator TaskGoTo::iterate() { const turtlesim::Pose & tpose = env->getPose(); // distance to target double r = hypot(cfg.goal_y-tpose.y,cfg.goal_x-tpose.x); // completion condition if (r < cfg.dist_threshold) { return TaskStatus::TASK_COMPLETED; } // angle to target double alpha = remainder(atan2((cfg.goal_y-tpose.y),cfg.goal_x-tpose.x)-tpose.theta,2*M_PI); // Saturated proportional control law. if (fabs(alpha) > M_PI/6) { double rot = ((alpha>0)?+1:-1)*M_PI/6; env->publishVelocity(0,rot); } else { double vel = cfg.k_v * r; double rot = cfg.k_alpha*alpha; if (vel > cfg.max_velocity) vel = cfg.max_velocity; env->publishVelocity(vel, rot); } return TaskStatus::TASK_RUNNING; } ``` Finally, we implement the terminate function to ensure the last requested velocity is always zero. ```cpp TaskIndicator TaskGoTo::terminate() { env->publishVelocity(0,0); return TaskStatus::TASK_TERMINATED; } ``` 2.3 Purpose of the task system The purpose of the task system is to facilitate the development and testing of complex applications. In such an application, the system (i.e. the task server) will have a set of pre-implemented behaviours that can be instantiated by a client application. In the simplest case, the combination of behaviours is done in sequence: go to point A, take picture, go to point B, deliver coffee, go back to charging station. We define such a sequence as a mission. The simplest missions are statically defined but branching on some conditions could also be required: deliver coffee, if water is empty, go refill, go back to charging station. As more and more conditions are required, a proper state machine management could be implemented using Smach. In an academic setting though, most demonstrations can be implemented with simple, human-readable, quasi-linear missions. Our missions are typically implemented in python and look like the following script: ```python #!/usr/bin/python import roslib; roslib.load_manifest('task_manager_turtlesim') import rospy from task_manager_lib.TaskClient import * rospy.init_node('task_client') tc = TaskClient("/task_server", 0.2) while True: tc.Wait(duration=1.) tc.GoTo(goal_x=1.0, goal_y=1.0) tc.Wait(duration=2.) tc.GoTo(goal_x=5.0, goal_y=5.0) ``` In this listing, it is important to notice that the tasks are called as member functions of the TaskClient, using the name defined in the TaskDefinition class. These functions are dynamically generated from the information received from the task server. Because the server also provides the parameters that the function can accept, parameters are order-independent and explicitly named in the mission. This also allows checking and enforcing type compatibility before trying to execute a task. 
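Because missions are plain Python scripts, the branching scenario mentioned in the introduction of this section (deliver coffee; if the water is empty, go refill) can be expressed with ordinary control flow. The following sketch is illustrative only: the `DeliverCoffee` and `Refill` tasks and the `~water_empty` parameter are hypothetical and not part of the example package.

```python
#!/usr/bin/python
import roslib; roslib.load_manifest('task_manager_turtlesim')
import rospy
from task_manager_lib.TaskClient import *

rospy.init_node('task_client')
tc = TaskClient("/task_server", 0.2)

tc.DeliverCoffee()                            # hypothetical task
if rospy.get_param("~water_empty", False):    # hypothetical flag set elsewhere
    tc.Refill()                               # hypothetical task
tc.GoTo(goal_x=0.0, goal_y=0.0)               # back to the charging station
```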
### 2.4 Task parameters and Dynamic Reconfigure

To allow introspection, the task parameters are defined using the dynamic_reconfigure framework. This allows specifying each parameter with a name, a help string, a type, a default value and a range where appropriate. The dynamic_reconfigure framework defines parameters in a config file similar to the following:

```python
#! /usr/bin/env python
PACKAGE='task_manager_turtlesim'
import roslib; roslib.load_manifest(PACKAGE)
from dynamic_reconfigure.parameter_generator import *
from task_manager_lib.parameter_generator import *

gen = TaskParameterGenerator()
#        Name              Type      Level  Description                                        Default
gen.add("goal_x",         double_t, 0,     "X coordinate of destination",                     0.0)
gen.add("goal_y",         double_t, 0,     "Y coordinate of destination",                     0.0)
gen.add("k_v",            double_t, 0,     "Gain for velocity control",                       1.0)
gen.add("k_alpha",        double_t, 0,     "Gain for angular control",                        1.0)
gen.add("max_velocity",   double_t, 0,     "Max allowed velocity",                            1.0)
gen.add("dist_threshold", double_t, 0,     "Distance at which target is considered reached",  0.1)

exit(gen.generate(PACKAGE, PACKAGE, "TaskGoTo"))
```

Note that the ParameterGenerator class has been overloaded with the TaskParameterGenerator to make sure default parameters common to all tasks are always present in the list. A secondary benefit of using the dynamic_reconfigure framework is that it is possible to use the reconfigure GUI to check the values of the parameters while a task is running and possibly change them. This is particularly useful to tune control parameters at run-time. In practice, dynamic_reconfigure generates a Python and a C++ class containing the parameter data as class members. This class is one of the template parameters of the TaskDefinition and TaskInstance classes. It is available in a variable named `cfg` in every instance, as can be observed in the `iterate` function above.

2.5 Task Environment

In most applications, a common set of functions, variables and topics of interest will be needed by most if not all classes. These could be robot dimensions, velocity commands, sensor measurements, etc. As mentioned above, it is possible for each task to subscribe to its own topics when started and unsubscribe when terminating. To simplify the development of functions shared between tasks, the task server shares a common variable called the task environment with every task. There are no constraints regarding what can be included in the environment because the parent class only provides a common mutex:

```cpp
class TaskEnvironment {
    public:
        boost::shared_mutex environment_mutex;
    public:
        TaskEnvironment() {}
        virtual ~TaskEnvironment() {}
};
```

2.6 Templated Parents

To further simplify the implementation of new classes, common functions are written into a templated class, which is later inherited for specific applications. TaskDefinition<Config,Environment,Instance> defines a new task factory, specialized on a specific task parameter of type Config, a specific environment, and specific instances. TaskInstance<Config,Environment> specializes a task instance for an application. The Environment class is used to create a member variable env pointing to the shared environment. The Config class is used to create a member variable cfg, which is filled with the task parameters when the task is instantiated and updated as appropriate by the dynamic reconfigure framework.

2.7 The Task Server

The task server has multiple responsibilities in this framework. First, it loads the description of all tasks (TaskDefinition objects).
It then provides a service to start or stop a given task, and keeps track of the status of all tasks currently running or recently terminated. It is also responsible for instantiating the task scheduler that manages the threads in which tasks actually run. Because none of this is application-specific, most of the task server can be made generic. As a result, the main function for a specific task server will often be as simple as the following listing:

```cpp
#include "task_manager_lib/TaskServerDefault.h"
#include "task_manager_turtlesim/TurtleSimEnv.h"

using namespace task_manager_lib;

class TaskServer : public TaskServerBase {
    public:
        TaskServer(TaskEnvironmentPtr _env) : TaskServerBase(_env,true) {
            start();
        }
};

int main(int argc, char *argv[])
{
    ros::init(argc, argv, "turtlesim_tasks");
    // The remainder of the listing is reconstructed: create the shared
    // environment, instantiate the server and hand control over to ROS.
    ros::NodeHandle nh("~");
    TaskEnvironmentPtr env(new TurtleSimEnv(nh));
    TaskServer ts(env);
    ros::spin();
    return 0;
}
```

3 Creating a new task framework

To instantiate our framework through an example, we will start with the turtlesim simulator from the ROS tutorial. We will then create a minimal set of tasks to implement part of the LOGO functionalities². The complete implementation is available in the example package task_manager_turtlesim.

² LOGO is an old programming language used to programmatically draw curves on screen in the early days of computers: [http://en.wikipedia.org/wiki/Logo_(programming_language)](http://en.wikipedia.org/wiki/Logo_(programming_language))

3.1 Required files

Although the implementation-specific details will be given in the next sections, it is already possible to list what is required to create a new task framework:

- Define a specialized task environment subscribing to commonly required topics and providing common tools for all classes.
- Define a new task server main by mostly copy-pasting the above example.
- For each task, create a .cfg file to declare its parameters and define the task description (inheriting from TaskDefinition) and the task instance (inheriting from TaskInstance).

Once a proper CMakeLists.txt has been created, all the tasks will be compiled as a shared library and the task server will be able to dynamically load as many of them as it may find.

3.2 The Environment

We set the environment up by adding functions relevant to the application. For instance, it is a fairly safe guess that any task working with the turtlesim environment will need to read the turtle pose (turtlesim/Pose) and potentially publish velocity commands (turtlesim/Velocity). These functions must then be part of the environment. We also add a helper function that uses the turtlesim service to set the pen colour. This could arguably be subscribed to only in the task that needs it. The public interface of the TurtleSimEnv class is declared in the listing (header file) below:

```cpp
public:
    TurtleSimEnv(ros::NodeHandle & nh);
    ~TurtleSimEnv() {};

    const turtlesim::Pose & getPose() const;
    void publishVelocity(double linear, double angular);
    void setPen(bool on, unsigned int r=0xFF, unsigned int g=0xFF,
                unsigned int b=0xFF, unsigned int width=1);
};
#endif // TURTLE_SIM_ENV_H
```

Additionally, the required functions are defined in the corresponding implementation file.

3.3 The task server

As mentioned earlier, the task server is mostly a copy-paste from the example task server shown in Section 2.7, ending with the usual `ros::spin()` call.

3.4 Task Idle

When no task is required to run, the task server must ensure a well-defined behaviour of the system. To this end, the task server is instantiated with an Idle task. The default one, which is instantiated in the above example, just does nothing forever.
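A minimal sketch of what such a do-nothing Idle task could look like (the `TaskIdleConfig` name and the header path are assumptions; the default implementation shipped with `task_manager_lib` may differ):

```cpp
#include "task_manager_lib/TaskDefinition.h"   // assumed header location

using namespace task_manager_lib;

// Sketch: a periodic task that never completes and has no effect on the system.
class TaskIdle : public TaskInstance<TaskIdleConfig, TaskEnvironment>
{
    public:
        TaskIdle(TaskDefinitionPtr def, TaskEnvironmentPtr env) : Parent(def, env) {}
        virtual TaskIndicator iterate()
        {
            return TaskStatus::TASK_RUNNING;   // keep running until interrupted
        }
};

class TaskFactoryIdle : public TaskDefinition<TaskIdleConfig, TaskEnvironment, TaskIdle>
{
    public:
        TaskFactoryIdle(TaskEnvironmentPtr env) :
            Parent("Idle", "Do nothing, forever", true, env) {}
};
```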
When writing the Idle class, one has to be careful that it may be run between two tasks: the server has no way to know that another task will be required when a specific task terminates. As an example, on an industrial vehicle, it could make sense for the Idle task to engage the parking brakes, or for an underwater vehicle to start coming back to the surface if no task is required. In both cases, it would probably be safer to wait for a small but relevant duration before acting. This would give time to the mission executive to request the execution of a new task. 3.5 Task GoTo The purpose of this task is mostly to have the simulated turtle reach a given destination. In addition to the goal parameters, it will have parameters for its control law (gains, saturation) and for its completion condition (distance from the goal). The config file, header and source for this task have been used for the examples above. We nonetheless copy them here for completeness. First the config file that will define the different task parameters. ```python #!/usr/bin/env python PACKAGE='task_manager_turtlesim' import roslib; roslib.load_manifest(PACKAGE) from dynamic_reconfigure.parameter_generator import * from task_manager_lib.parameter_generator import * gen = TaskParameterGenerator() # Name Type Description Default Min gen.add("goal_x", double_t, 0, "X coordinate of destination", 0.) gen.add("goal_y", double_t, 0, "Y coordinate of destination", 0.) gen.add("k_v", double_t, 0, "Gain for velocity control", 1.0) gen.add("k_alpha", double_t, 0, "Gain for angular control", 1.0) gen.add("max_velocity", double_t, 0, "Max allowed velocity", 1.0) gen.add("dist_threshold", double_t, 0, "Distance at which the target is considered reached", 0.1) exit(gen.generate(PACKAGE, "task_manager_turtlesim", "TaskGoTo")) ``` The following headers defines the task definition and instance. ```c++ #include "task_manager_turtlesim/TurtleSimEnv.h" #include "task_manager_turtlesim/TaskGoToConfig.h" using namespace task_manager_lib; namespace task_manager_turtlesim { class TaskGoTo : public TaskInstance<TaskGoToConfig, TurtleSimEnv> { public: TaskGoTo(TaskDefinitionPtr def, TaskEnvironmentPtr env) : Parent(def, env) {} virtual ~TaskGoTo() {}; virtual TaskIndicator initialise(); virtual TaskIndicator iterate(); virtual TaskIndicator terminate(); }; class TaskFactoryGoTo : public TaskDefinition<TaskGoToConfig, TurtleSimEnv, TaskGoTo> ``` An important point to notice is the boolean true in the TaskFactoryGoTo constructor. This declares the class as periodic and instructs the scheduler to take care of calling the iterate function at an approximately constant rate. This rate is one of the default task parameters and a default value is provided to the task server on initialization. The implementation of the task follows. Note that it can focus on the mathematics of the task and does not need to be aware of any infrastructure such as threading, mutexes, data storage and callbacks, etc... Another important remark is that the initialise function is unnecessary in this case and could have been omitted in this code. It is only included here for illustration. 3.6 Implementation Constraints 3.6.1 Constructors For a given task, the task instance and task definition constructor must respect the profile used in the above example and initialize their parent class properly by transferring them the environment pointer. 
It is not possible to add other arguments to these constructors since they will always be called by the task scheduler, which knows nothing about additional arguments. The class inheriting from TaskDefinition (task factory) is the one responsible for naming the task and specifying its help string. The name must be specified and unique for a given application. The task scheduler will output a warning when two tasks are registered with the same name. The help string can be ignored, but it comes in handy when running a task from the command line.

### 3.6.2 Initialization

Variable initialization should be implemented in the `initialise` function. In the context of a task class inheriting from the templated `TaskInstance`, the variable `cfg` has already been assigned the values read from the task parameters. The `cfg` variable is an instance of the `Config` class given as a template parameter. The `env` variable points towards the task environment and could be used, for instance, to retrieve a global ROS `NodeHandle`. For classes inheriting directly from `TaskInstanceBase` (not recommended), the `parseParameters` function should be overloaded to retrieve the task parameters and process them as required.

The `initialise` function is the right place to create ROS publishers or subscribers, and to assign initial variable values. In particular, any task intending to implement a relative motion, e.g., rotating over 360 degrees, should use this function to record the current system state. There are no constraints regarding the duration of the `initialise` function; it could include blocking calls or return immediately. However, the task scheduler does not publish the updated task status while a task is initializing.

The `initialise` function should return `TaskStatus::TASK_INITIALIZED` on success, and `TaskStatus::TASK_INITIALIZATION_FAILED` otherwise. Using the `setStatusString` function allows setting an error status that will then be published by the task scheduler. If a task does not report itself as initialised, the iterate and terminate functions are not executed. If a task has no need for initialisation, it should simply not overload the `initialise` function and use the default one that just returns `TaskStatus::TASK_INITIALIZED`.

### 3.6.3 Iterations

The `iterate` function is, by default, the place to implement the control-loop aspect of a task as well as its termination conditions. For classes inheriting from `TaskInstance`, the `cfg` variable contains the task parameters and the `env` variable points towards the task environment. There are two types of tasks, and they condition the way the `iterate` function should behave.

**Periodic tasks** are used when the `iterate` function should be called at a constant frequency (up to the OS scheduler precision). In this case, the `iterate` function should be relatively short. If the task is deemed completed, the function returns `TaskStatus::TASK_COMPLETED`; otherwise, it returns `TaskStatus::TASK_RUNNING`. In the former case, the `iterate` function is not called anymore and the task transitions to termination. Examples of this type of task include control loops and tasks waiting for specific events to happen.

**Non-periodic tasks** are called once and take an undefined time to complete. They update their status as they see fit and return `TaskStatus::TASK_COMPLETED` when done. Examples of this type of task include path planning functions, service calls, large pre-processing tasks, etc.; a sketch of such a task is shown below.
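For illustration only (a sketch: `TaskPlanPath`, its config parameters and the environment's `planPath` helper are hypothetical and not part of the example package), a non-periodic task typically does all of its work in a single call to `iterate`:

```cpp
// Sketch of a non-periodic task: the single call to iterate may block for a long time.
TaskIndicator TaskPlanPath::iterate()
{
    // Hypothetical blocking call on the shared environment.
    if (!env->planPath(cfg.goal_x, cfg.goal_y)) {
        setStatusString("no feasible path found");   // published by the task scheduler
        return TaskStatus::TASK_FAILED;
    }
    return TaskStatus::TASK_COMPLETED;
}
```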
The selection between one type of task and the other is made in the `TaskDefinition` constructor, with its third argument (`is_periodic`). A value of `true` marks the task as periodic. The `iterate` function **must** be overloaded for a task class to be valid (the parent function is pure virtual). ### 3.6.4 Termination The `terminate` function is called once when the task is completed, whether it completes successfully or not. Its role is to leave the system in a consistent and safe state. For instance, on some platforms, it might be a good habit to set the vehicle speed to zero (assuming it is not flying) when completing any task. There are no constraints on the duration of the `terminate` function. If a class does not require any specific code on termination, there is no need to overload the parent function. The `terminate` function usually returns `TaskStatus::TASK_TERMINATED`. If it needs to report a failure, it can return `TaskStatus::TASK_FAILED`, but this is unusual at this stage. The task instance object will be destroyed once a task has terminated, which will close any publisher or subscriber it owns. This destruction occurs a few seconds after the termination class, to ensure the task status is correctly updated and published. One should not rely on the instance destruction to implement system related clean-up. As a result, the task instance destructor is empty most of the time. ### 3.6.5 Dynamic Loading It is assumed that tasks will be loaded dynamically by the task server. To this end, they have to be compiled as dynamic libraries (or plugins), and they have to define a common handle that the task scheduler will use to instantiate the task definition. The `DYNAMIC_TASK` macro creates the required code and should be included in any task intended for dynamic loading (default behaviour). ### 3.6.6 Dynamic reconfiguration When a task instance is created, it launches a ROS `dynamic_reconfigure` server, even before calling the `initialise` function. Dynamic reconfigure allows the modification of the task parameters at run-time using a practical graphical user interface. Besides introspection, this function is one of the reasons why the Config files are used to define task parameters. This use of dynamic reconfigure has two caveats: 1. First, one should not assume that the value in the `cfg` variable will stay constant over the task life. For most parameters (control gains, set-points), this is not an issue. However, some parameters are used during initialisation to allocate memory or computational structures. In this case, it is recommended to store the initial value and ignore the `cfg` value later. Even better, issue a ROS_WARN if the value gets modified to prevent giving the impression that the reconfiguration server is not working. 2. Second, task parameters have to be single values: float, booleans, strings, integer. Dynamic reconfigure does not offer a way to encode arrays in parameters. ### 4 Using the task framework #### 4.1 Compilation and linking The task framework is dependent on the ROS application building framework. It uses `catkin` (even though traces of rosbuild can still be found) and it is governed by a standard `catkin CMakeLists.txt` file. The task framework requires the following elements of the `CMakeLists.txt` file: - Dependency on package `task_manager_lib` and `task_manager_msgs`. - Processing of the config files for each task with the `generate_dynamic_reconfigure_options`. See the `dynamic_reconfigure` package documentation for details. 
- Compile and link the task server and its environment (assumed to be in `src/Environment.cpp`):

```cmake
ADD_EXECUTABLE(task_server src/task_server.cpp src/Environment.cpp)
TARGET_LINK_LIBRARIES(task_server ${catkin_LIBRARIES} dl)
```

- Making sure that the task plugins are generated in a well-defined place that we can reference from the package launch files:

```cmake
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CATKIN_DEVEL_PREFIX}/${CATKIN_PACKAGE_SHARE_DESTINATION}/tasks)
```

- Adding the tasks as shared libraries with the following code:

```cmake
ADD_LIBRARY(TaskName SHARED tasks/TaskName.cpp)
TARGET_LINK_LIBRARIES(TaskName ${catkin_LIBRARIES} dl)
ADD_DEPENDENCIES(TaskName ${${PROJECT_NAME}_EXPORTED_TARGETS})
```

This assumes that a source file `TaskName.cpp` implements the task classes in the `tasks` directory. The `ADD_DEPENDENCIES` line is important to inform CMake that the task depends on the generation of the config classes.

4.2 Launch file

With the tasks and the task server compiled by the previously listed CMakeLists.txt, the task server can be launched with the following launch file. Note that the `lib_path` parameter is absolutely necessary to let the task server know where to look for the task plugins.

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<launch>
    <node name="task_server" pkg="my_task_package" type="task_server" output="screen">
        <param name="lib_path" value="$(find my_task_package)/tasks"/>
    </node>
</launch>
```

In case a task is removed from the list of tasks, one will need to be careful to delete the corresponding dynamic library by hand. Otherwise, the task server will keep trying to load it. This should not have any detrimental effect so long as the old class is not instantiated. If it is, it might trigger a segmentation fault if the environment or configuration definition changed since it was compiled.

4.3 Console

The console application is a small layer over the Python client to the task server, instantiated through `ipython`. It implements a command-line application with the following functionalities:

- List the existing tasks (`index()`).
- Display the current task status (`status()`).
- Get the help string for a task and its parameters (`help(task_name)`).
- Run a task. For instance, for the task GoTo described above, one can use the console to run: `GoTo(goal_x=5.0,goal_y=5.0)`.

To launch the console, assuming the package `task_manager_lib` is in the current ROS workspace, one can use:

```bash
rosrun task_manager_lib console -s /server_node
```

where `server_node` is the name of the ROS node instantiating the task server.

4.4 Simple missions

Simple missions also use the Python client to the task server and resemble the following script:

```python
#!/usr/bin/python
import roslib; roslib.load_manifest('task_manager_turtlesim')
import rospy
from task_manager_lib.TaskClient import *

rospy.init_node('task_client')
tc = TaskClient("/task_server", 0.2)

while True:
    tc.Wait(duration=1.)
    tc.GoTo(goal_x=1.0, goal_y=1.0)
    tc.Wait(duration=2.)
    tc.GoTo(goal_x=5.0, goal_y=5.0)
```

Up to the creation of the tc variable, missions are simple ROS nodes implemented in Python. The task client class is instantiated in the variable tc and requires the name of the node implementing the task server and the default control-loop period (0.2 s, i.e. 5 Hz, in this example). Based on this information the task client connects to the task server, gets the list of tasks, and the descriptions of their parameters.
It then creates virtual member functions named after the tasks, with parameters passed as keyword arguments. Because the task client gets the task list from the server, there is no need to specialize it when implementing a new task framework. There is also no need to change anything but the mission when a new task has been added to the server. An important advantage of using Python to define missions is the possibility to use the normal Python control flow in the mission definition (here a while statement). Furthermore, because a mission is a normal ROS node, one can add subscribers to the mission script and possibly take mission decisions based on variables received over ROS.

If a task fails while being executed (real failure, timeout, ...), a TaskException will be generated. It can be caught within the mission script with a standard try ... except statement and acted upon as appropriate.

```python
try:
    tc.Wait(duration=0.2)
    tc.GoTo(goal=Andromeda)
except TaskException, e:
    # This means a task failed. We need to react to it.
    rospy.loginfo("Task failed with error status %d: %s" % (e.status, str(e)))
```

4.5 Background tasks

In some situations, it may be useful to start a task in the background while another one is running. It may also be necessary to start two tasks simultaneously and wait for both of them to complete. Obviously, this only makes sense if the two tasks do not control the same actuators. By default, tasks are started in the foreground. Only a single task can be run in the foreground, so when a foreground task starts, it first terminates any existing foreground task (typically the Idle task). To start a task in the background, one just needs to add an argument `foreground=False` to its parameter list. In this case, the task will be run in its own thread and will not be killed by starting a foreground task concurrently. The function call then returns the task id:

```python
id = tc.WaitForROI(foreground=False, roi_x=1., roi_y=6., roi_radius=1.0)
```

To wait for a background task to complete, the task client class provides several helper functions:

- tc.waitTask(id) waits for the completion of a single task.
- tc.waitAnyTasks([id1,id2,...]) waits for the completion of at least one task within the provided list.
- tc.waitAllTasks([id1,id2,...]) waits for the completion of all the tasks within the provided list.

A background task can be terminated with tc.stopTask(id), and tc.stopAllTasks() terminates all tasks currently running.

4.6 Interruptions

There are often well-defined situations where a mission should be interrupted. This is particularly true in monitoring or service missions where a low battery or an unexpected event requires aborting the routine mission and starting a specific action. The task framework defines these events as Condition variables. Such a Condition class must have a member function named `isVerified()`. Currently, the most useful condition is `ConditionIsCompleted`, which is True when the task it monitors is completed (or terminated). To add such a condition, the following syntax is available:

```python
id = tc.WaitForROI(foreground=False, roi_x=1., roi_y=6., roi_radius=1.0)
tc.addCondition(ConditionIsCompleted("ROI detector", tc, id))
```

With such a condition, the mission can be written within a try ... except block. On the condition being verified, a TaskConditionException is raised.
```python
try:
    tc.Wait(duration=0.2)
    tc.GoTo(goal_x=0.0, goal_y=1.0)
    # Clear the conditions if we reach this point
    tc.clearConditions()
    tc.stopTask(id)
except TaskConditionException, e:
    # This means the conditions were triggered. We need to react to it.
    # Conditions are cleared on trigger.
    DoSomething()
```

Note that if the code reaches a point where a specific condition is not required anymore, it should use the `tc.clearConditions()` function to remove the current condition set. If the condition was waiting on the completion of a specific task, then that task must still be running and should probably be terminated before moving further.

4.7 Integration with Smach

Smach³ is a Python framework within ROS to create complex state machines. In comparison with the missions written with the task manager, a Smach state machine lives completely within the Python scripts it is instantiated from. Each state is a Python class and Smach provides a nice framework to define potential transitions based on the outcome of a state. Smach also provides a graphical visualisation tool that displays the full state machine, its transitions and the state currently active. All this allows combining behaviours in a less linear and more complex way.

³ http://wiki.ros.org/smach/Documentation

To take advantage of the strength of Smach while keeping our tasks as well-defined behaviours, a wrapper class MissionStateMachine is provided in the TaskSmach module. This class provides three main functions to create containers (check the Smach documentation for details on the types of containers):

- `createStateMachine()`: creates a state machine with parameters compatible with the task framework.
- `createSequence()`: creates a container for states to be executed as a sequence, which is the most common setup for a mission.
- `createConcurrence(fg_task)`: creates a container for at least two states (or nested state-machines) to be executed concurrently. The concurrence terminates when one of the states or nested state-machines terminates.

It also provides functions to create tasks as state-machine states:

- `task(name, **params)`: creates a generic state to be added to the state machine.
- `seq_task(name, **params)`: creates a state to be inserted in a sequence.
- `concurrent_task(name, **params)`: creates a state to be inserted in a concurrence.
- `epsilon_task(label, transitions)`: creates a state that does nothing but can be used as a common branching point for real states.

Finally, the MissionStateMachine provides a run function that also instantiates the introspection server and handles ROS shutdown in a clean way.
An example of a Smach-based mission can be seen below:

```python
#!/usr/bin/python
# ROS specific imports
import roslib; roslib.load_manifest('task_manager_turtlesim')
import rospy
import smach  # smach itself is used below (smach.Concurrence.add / smach.Sequence.add)
from math import *
from task_manager_lib.TaskSmach import *

rospy.init_node('task_client')

# Create a SMACH state machine
mi = MissionStateMachine()
sm = mi.createSequence()

# Add states to the container
with sm:
    init = mi.seq_task("Wait", duration=1.0)
    mi.seq_task("GoTo", goal_x=1.0, goal_y=1.0)

    # Create a concurrence state to handle background tasks
    sm_con = mi.createConcurrence('normal_seq')
    with sm_con:
        # First background task
        mi.concurrent_task("WaitForROI", foreground=False, roi_x=9., roi_y=6., roi_radius=1.0)
        # The second concurrent state is actually a sequence
        sm_sub = mi.createSequence()
        with sm_sub:
            # Execute the following tasks in sequence (assumes p is defined elsewhere)
            mi.seq_task("Wait", duration=0.2)
            mi.seq_task("GoTo", goal_x=p[0], goal_y=p[1])
        # Add the sequence to the concurrence
        smach.Concurrence.add('normal_seq', sm_sub)
    smach.Sequence.add('Concurrence', sm_con)

    # Add the final task, and force it to transition to the initial state.
    mi.seq_task("GoTo", goal_x=5.0, goal_y=5.0, transitions={'TASK_COMPLETED': init})

mi.run(sm)
```

The resulting state machine can be visualized with `smach_viewer` (figure not reproduced here).
AN EXERCISE ASSISTANT FOR PRACTICAL NETWORKING COURSES Jens Haag\textsuperscript{1,3}, Christian Witte\textsuperscript{1}, Stefan Karsch\textsuperscript{1}, Harald Vranken\textsuperscript{2} and Marko van Eekelen\textsuperscript{2,4} \textsuperscript{1}Cologne University of Applied Sciences, Steinmüllerallee 1, Gummersbach, Germany \textsuperscript{2}Open Universiteit, Heerlen, The Netherlands \textsuperscript{3}Work has been done as a PhD student of the Open Universiteit, Heerlen, The Netherlands \textsuperscript{4}Marko van Eekelen is also affiliated with Radboud University Nijmegen, The Netherlands \{jens.haag, christian.witte, stefan.karsch\}@fh-koeln.de, \{harald.vranken, marko.vaneekelen\}@ou.nl Keywords: Virtual Lab, E-Learning, Exercise Assistant, Networking Exercises, Description Logic Abstract: Supporting students with feedback and guidance while they work on networking exercises can be provided in on-campus universities by human course advisors. A shortcoming however is that these advisors are not continuously available for the students, especially when students are working on exercises independently from the university, e.g. at home using a virtual environment. In order to improve this learning situation we present our concept of an exercise assistant, which is able to provide feedback and guidance to the student while they are working on exercises. This exercise assistant is also able to verify solutions based on expert knowledge modelled using description logic. 1 INTRODUCTION Computer science curricula for students at universities nowadays include courses on networking and information technology security. Teaching theory on networking and IT security is usually done by means of textbooks and classes (either face-to-face classes or virtual classes, which are popular at universities for distance education). To anchor and deepen the acquired theoretical knowledge, a commonly used teaching method is to hand out practical exercises. The exercises can be worked out in a computer lab, which can be either a traditional on-campus lab or a virtual lab. Recent evaluation shows that students of a traditional on-campus networking course deem it crucial for their learning success to be able to get support from a course advisor (Haag & Witte & Karsch & Vranken & van Eekelen 2013). While an on-campus university will be able to provide course advisors which can support students in so-called guided learning hours, this support is no longer feasible if students work e.g. at home in the evening hours using a virtual lab. In this paper we introduce an exercise assistant for networking courses which is able to support students while they work on networking exercises. Equipped with a formal model of an exercise, the exercise assistant can be run on a student’s computer whenever and wherever support is needed. The effort to author such an exercise has to be done once while instances of the exercise assistant equipped with this exercise will then be able to support any number of students. The paper is organized as follows: First we introduce our current learning environment in chapter 2 and an example exercise in chapter 3. In chapter 4 we explain our formal model of an exercise. This formal model can be processed by our exercise assistant, whose software architecture we introduce in chapter 5. After giving a guiding example in chapter 6 we conclude our work in chapter 7. 
2 VIRTUAL LAB The virtual computer security lab (VCSL) is a stand-alone environment that each student can install on his or her local computer (Vranken & Koppelmann 2009). It is composed of two virtualization layers, as shown in Figure 1. The host machine is the student’s computer, which runs an arbitrary operating system, i.e. the host operating system. The first virtualization layer creates the virtual host machine. It consists of virtualization software such as VMware Player or Oracle VirtualBox, which runs on the host machine just like an ordinary application. Virtualization software in general introduces an additional software layer with corresponding interface, which creates a logical abstraction from the underlying system software and hardware (Smith & Nair 2005). Versions of this software are available for free for a large range of platforms and therefore run on nearly all student computers, regardless of the hardware and the host operating system. The virtual host machine runs the guest operating system. For the VCSL, Linux was selected, since it is open source and can also be distributed to students without licensing costs. ![Figure 1: Architecture of the VCSL](image) The second virtualization layer is a Linux application, called Netkit (Pizzonia & Rimondini 2008), which runs inside the virtual host machine. This layer allows to instantiate multiple virtual machines that all run Linux. Netkit applies virtualization based upon User Mode Linux (UML). A UML virtual machine is created by running a Linux kernel as a user process in the virtual host machine (Dike 2006). Multiple UML virtual machines can easily be run simultaneously, while using minimal resources. The file system is shared by all UML virtual machines using the copy-on-write (COW) mechanism. Hence, the file system is shared read-only by all UML virtual machines. Each UML virtual machine has a second, separate file system in which only the local changes to the shared file system are stored. This saves both disk space and memory, and simplifies management of multiple UML virtual machines. Restoring an initial clean system means to simply remove the second file system. The VCSL was further developed (Vranken & Haag & Horsmann & Karsch 2011), (Haag & Horsmann & Karsch & Vranken 2011) into a distributed VCSL (DVCSL). This DVCSL enables students to work together in a virtual lab by connecting their labs, even if they are physically distant from each other by using an interface to the Netkit environment. This interface consists of a Ghost Host and a Remote Bridge. While the Ghost Host was developed to extract and inject network packets when connected to an existing Netkit virtual network, the Remote Bridge is able to send and receive this packets using an intermediate connection network, e.g. the internet. Using this interface, local Netkit networks can be connected in a transparent and secure manner although they reside on different, distant students’ computers. This decentralized approach is suited to accommodate any number of students and offers students freedom to run the lab whenever and wherever they want, while preserving the properties of a conventional computer lab (e.g. the isolated network). Therefore, this approach is not limited to distance teaching but could also be useful for universities using a conventional computer lab. 
### 3 EXAMPLE EXERCISE An example assignment of a practical networking course to be solved using the VCSL environment is: “Setup and configure a scenario with at least three hosts (client, router, server). Client and server should be located within different subnets. The client should be able to intercommunicate with the server by using the intermediate router. The routing should be based on static routing tables.” The minimal requirement for this setup is shown in Figure 2, consisting of at least three hosts. The client and the server have one network interface card (NIC); the router is equipped with two NICs; one for the client network named n1 and one for the server network n2. Each NIC of each host has to be configured with a valid network configuration. 4.1 Activities Typically, exercises will start with an empty lab. Students have to perform activities that result in a working network environment, configured according to the requirements of the given exercise. While Table 1 shows the commands needed to solve the exercise in Netkit, the minimal conceptual activities needed for solving this exercise are listed in Table 2. Table 2: Activities needed to solve the example exercise. <table> <thead> <tr> <th>Activity</th> <th>ID</th> </tr> </thead> <tbody> <tr> <td>The client network has to be created.</td> <td>A1</td> </tr> <tr> <td>The server network has to be created.</td> <td>A2</td> </tr> <tr> <td>The client has to be connected to the client network and assigned an IP address.</td> <td>A3</td> </tr> <tr> <td>The server has to be connected to the server network and assigned an IP address.</td> <td>A4</td> </tr> <tr> <td>One NIC of the router has to be connected to the client network.</td> <td>A5</td> </tr> <tr> <td>One NIC of the router has to be connected to the server network and assigned an IP address from the client network.</td> <td>A6</td> </tr> <tr> <td>The client has to be configured to use the router’s NIC in the client network as default gateway.</td> <td>A7</td> </tr> <tr> <td>The server has to be configured to use the router’s NIC in the server network as default gateway.</td> <td>A8</td> </tr> <tr> <td>Routing has to be enabled on the router.</td> <td>A9</td> </tr> <tr> <td>Client and server must intercommunicate via the intermediate router using the IP protocol.</td> <td>A10</td> </tr> </tbody> </table> While A10 is the final activity, the order of the activities A1 through A9 shows only one possible sequence. The order can vary because some activities are independent from each other (e.g. A1 and A2), while some other activities have interdependencies (e.g. A1 is a precondition for A3). These activities and their interdependencies can be modelled as an acyclic, directed graph with exactly one sink (node N with outdegree(N) = 0) and at least one source (node N with indegree(N) = 0). Activities are represented by nodes. A precondition is modelled as a directed edge from the predecessor to the successor, seamlessly indicating the order of the activities. The final activity will be represented by a sink. Activities without a precondition will be represented by sources. A valid graph for our example exercise is shown in Figure 3. This graph is based on the activities stated in Table 2. The interdependencies and thus possible sequences of activities show a valid example that we created. These can of course vary, depending on the exercise and the author’s intent, too. 4.2 Conditions In order to process the graph, the activities have to be verifiable. 
That means that a condition is needed to detect or to decide, whether an activity is deemed passed, i.e. whether the student has successfully solved a part of the exercise. In (Haag & Karsch & Vranken & van Eekelen 2012) we showed, that network packets, obtained from the student’s Netkit lab, can be used to detect and verify network properties and behaviour of an Ethernet based network. By modelling network specific expert knowledge as predicates and verifying these predicates using the captured network packets, it is possible to detect e.g. the presence of certain hosts and also routing behaviour. While the prototype in (Haag & Karsch & Vranken & van Eekelen 2012) demonstrated the technical feasibility of that approach by using SQL queries to model predicates, we improved on it by using description logics (Baader & Calvanese & McGuinness & Nardi & Patel-Schneider 2003). For the terminological box (TBox) we created a network ontology for Ethernet based networks, representing the network layers 2 and above (Tanenbaum 1985), including but not limited to the header and payload fields of the most common used protocols, e.g. Ethernet (RFC1042), ARP (RFC826), IP (RFC791), TCP (RFC793) and UDP (RFC768). In addition, we added a unique identifier for each packet and the network origin. An excerpt of our ontology for Ethernet networks is shown in Figure 4. Using this ontology it is possible to model expert knowledge as predicates using a logic programming language, e.g. Prolog (Colmerauer & Roussel 1993). For example, the expert knowledge to describe the network behaviour “routing” according to (Haag & Karsch & Vranken & van Eekelen 2012) is: “Routing occurs if an OSI layer 3 IP transmission of a network packet between two hosts is based on more than one OSI layer 2 transmissions”. The technical background is shown in Figure 5. The client wants to communicate with the server using the IP protocol, but the server is located in a different network segment. Direct intercommunication between client and server is not possible because the underlying Ethernet protocol does not support communication over network borders. The client has to use a known router located in the same network as itself, and thus reachable by Ethernet. The client now sends an IP packet addressed to the IP address of the server, but the underlying Ethernet packet will be addressed to the router. When the router does receive such a packet, it will forward it to the server. While the two packets that the client and the router send do not differ on the IP layer (both are sent from the client, and addressed to the server), both differ on the Ethernet layer, with different source and destination MAC addresses. Based on the Ethernet network ontology, this behaviour can be expressed as the following Prolog predicate: ``` routing :- ip_packet(X,A,B), ip_packet(Y,A,B), ethernet_packet(X,M1,M2), ethernet_packet(Y,M3,M4), M1 \= M3, M2 \= M4. ``` This predicate can be read as “routing occurs, when there are two IP layer packets X and Y, both sent from IP address A to IP address B, for which the source and destination addresses differ on the Ethernet layer.” Predicates can be used as conditions to detect activities. E.g. the predicate ‘routing’ can be used to verify the activity A10. We extended the graph, so that every activity can be associated with a condition to verify that activity. Routing is only one example. We successfully created predicates describing e.g. 
the presence of hosts and networks, the network behaviour NAT or routing, and also higher-level usage. E.g. an ARP spoofing behaviour can be detected if two hosts within the same subnet, having different MAC addresses, pretend to own the same IP address using the ARP protocol. However, this behaviour can also be caused by a misconfiguration of the hosts. For that reason, this condition requires preconditions to verify a valid and error-free setup. We also found a trade-off between the shape of an assignment and the ability to design predicates. If the assignment is more tightly controlled (e.g. predefined network names and IP addresses), more precise predicates can be designed to detect activities. If the assignment is phrased more broadly, the predicates also have to be designed in a more general manner.

4.3 Feedback

There are various types of feedback strategies which can be used to support students working on the exercise, e.g. suggestions, complete guiding, or an exam mode. The specific form will either be customized to match the author's aims, or to the learning style of the learner, or a combination of both. Usually, the recent progress a student has made in the exercise graph should trigger interaction with the student according to the feedback strategy. Therefore, we extended the graph with feedback attributes. The graph as a whole can be associated with an attribute containing the exercise description; all activities can be associated with different attributes for feedback control, i.e. text messages that give hints about what the next activity might involve (pre messages), or text messages that give feedback about detected activities (post messages). An example for activity A1 from our example exercise looks like this:

```
pre_message = "You will need at least one host connected to network 'n1'."
post_message = "Network 'n1' detected."
```

While our message mechanism provides the technical means for the implementation of various feedback strategies, the evaluation and choice of an appropriate strategy resides with the exercise author.

4.4 Probing

While the verification of activities based on passively observed network packets works for many activities, there still are limitations. One such limitation occurs when an activity that does not have immediate results in the form of network packets needs to be verified. An example for that would be A9 from our example exercise: the routing functionality has to be activated on the router. Students can do that by setting the appropriate kernel flag on the router if this flag is not enabled by default. This, however, will not result in observable network packets until packets are sent to the router to be routed. A possible solution would be to ask the student to send appropriate network packets himself. We followed a different approach. For detecting certain activities, we inject special predefined network packets into the Netkit environment to provoke a certain predictable behaviour. This behaviour can also be expressed as a predicate. In the routing example, we inject into the client network an Ethernet packet addressed to the router which, on the IP level, is addressed to a host in the server network (which does not have to exist). If routing is enabled in the router, the router will try to reach that host in the server network using ARP requests. These packets can be used to verify that routing is indeed enabled on the router.
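As a rough illustration only, such a probe could also be composed programmatically. The following is a hypothetical sketch using the scapy library, which is not part of the exercise assistant; the IP addresses are assumed values, and the MAC addresses follow the concrete packet layout described next.

```python
# Hypothetical sketch (not from the paper): composing a routing probe with scapy.
from scapy.all import Ether, IP, ICMP

probe = (
    Ether(dst="0a:ab:64:91:09:80",   # router's NIC in the client network n1
          src="ee:ba:7b:99:bc:a5")   # virtual (made-up) source MAC
    / IP(src="10.0.0.254",           # assumed virtual source IP inside network n1
         dst="10.0.1.254")           # assumed virtual destination IP inside network n2
    / ICMP()                         # ICMP echo request, just to form a complete, valid packet
)
raw_bytes = bytes(probe)             # byte string that a Ghost Host could inject into n1
```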
Such a "probing" packet can be assembled by strictly following the network stack, starting with an Ethernet frame. The destination MAC address must be that of the router's interface connected to network n1. In Netkit, the MAC address of a network interface is bound to the name of the client, resulting in a predictable MAC for the router's first interface eth0 (0a:ab:64:91:09:80). The source MAC can be virtual, e.g. ee:ba:7b:99:bc:a5, followed by an IPv4 ethertype identifier (0x0800). The encapsulated IP packet starts with the version identifier (0x4), followed by mandatory header fields, e.g. length and checksum. The IP source address can be virtual but should be located within the IP range of network n1. The destination IP can also be virtual but must be part of the subnet n2. The IP packet encapsulates an ICMP echo request just to get a complete and valid network packet. This customized packet layout can be represented by a hexadecimal character array, e.g. 0aab649109b00ebe7ba99ebca508004500001c12344000ff0f01549c0a000000000000000800f7fd00010001. We extended the graph so that every activity can be associated with a custom network "probing" packet to be sent once before verifying its condition. While that actively alters the environment, it enables the verification of additional activities.

5 EXERCISE ASSISTANT

In order to support a student while working on an exercise, we developed an exercise assistant, which can be used in the VCSL. As shown in Figure 6, the exercise assistant is composed of three components: a reasoning engine, a feedback engine, and an interface to the student's working environment called the Netkit interface.

![Figure 6: Architecture of the Exercise Assistant.](image)

The reasoning engine itself is composed of a reasoner and a knowledge base, which contains a TBox ("terminological box") and an ABox ("assertion box"). The TBox contains knowledge about the domain, i.e. our ontology, in the form of predefined predicates that can be extended by the author with exercise-specific extensions, while the ABox contains the concrete instantiations. The data in the ABox is obtained through an interface to the "real world", in our case the Netkit interface. The Netkit interface consists of one or more Ghost Hosts (Vranken & Haag & Horsmann & Karsch 2011) that record network packets from their respective Netkit network, extract the information in them, and store that information in the ABox. The Ghost Hosts can also be used to inject special network packets into the environment. The feedback engine is the part where the activity graph is processed. Our exercise assistant is able to read an exercise graph stored in the GraphML (Brandes & Eiglsperger & Herman & Himsolt & Marshall 2002) format. Once read, the activities are continuously processed according to their interdependencies, starting at the source nodes which represent activities without preconditions. Processing the activities in this case means verifying their conditions and giving the student feedback according to the feedback attributes of that activity. Once an activity is completed, it is removed from the graph and thus no longer acts as a precondition for its successors. The feedback engine can also use the Netkit interface, i.e. the Ghost Hosts, to insert custom network packets into the environment in order to provoke certain network behaviour and then verify an activity's condition using the reasoning engine.
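To make this processing loop concrete, here is a minimal, hypothetical Python sketch of how a feedback engine could walk such an activity graph (the actual assistant is implemented in C with SWI-Prolog, as noted below); all names are illustrative, and condition evaluation and message output are stubbed out.

```python
# Hypothetical sketch (names are illustrative): processing the activity graph.
import time

class Activity:
    def __init__(self, act_id, condition, pre_message, post_message, preconditions=()):
        self.id = act_id
        self.condition = condition               # callable; True once the activity is detected
        self.pre_message = pre_message
        self.post_message = post_message
        self.preconditions = set(preconditions)  # ids of predecessor activities

def process_graph(activities, show):
    """Continuously verify activities whose preconditions have been completed."""
    pending = {a.id: a for a in activities}
    prompted = set()
    while pending:
        # "Source" activities: all their predecessors have already been verified
        ready = [a for a in pending.values() if not a.preconditions]
        for a in ready:
            if a.id not in prompted:
                show(a.pre_message)              # hint at what the next activity might involve
                prompted.add(a.id)
        for a in ready:
            if a.condition():                    # e.g. ask the reasoner to prove a predicate
                show(a.post_message)             # confirm the detected activity
                del pending[a.id]
                for other in pending.values():
                    other.preconditions.discard(a.id)  # it is no longer a precondition
        time.sleep(1.0)                          # poll until all activities are verified
```

In the real system, the role of `condition` corresponds to proving a Prolog predicate against the ABox, and `show` to printing into the exercise assistant shell.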
The Exercise Assistant is a software program written in the programming language C, using SWI-Prolog (Wielemaker 2009) as the reasoning engine.

6 EXAMPLE

Using the VCSL, the window layout of the desktop presented to the students looks like Figure 7. The exercise assistant shell is a window where the student can keep track of the feedback generated by the feedback engine. The Linux shell is a window where the student is able to administer and use Netkit in order to e.g. create hosts and networks. Once a host is started, it will open a respective shell enabling the student to administer the host itself. Further hosts, e.g. the router and the server, will open respective shells, too. The following figures are screenshots taken from the exercise assistant shell guiding the example exercise. We authored the activities of Table 2 according to the exercise graph of Figure 3 and added verbose feedback. The introduced routing predicate is used to verify the final activity (A10). The intermediate activities, too, have been modelled using our ontology, partially by utilizing probing packets. Once started, the exercise assistant introduces the exercise by displaying the exercise description. Starting with the activities without preconditions (A1 and A2), the exercise assistant will prompt the student using the respective pre_messages. The student can start solving the exercise according to Table 1. After the first command `vstart client --eth0=n1` is entered using the Linux shell, the exercise assistant is able to confirm this valid activity. While A1 is marked as verified, using the respective post_message of A1, the remaining independent activities without preconditions will be displayed again, superseding the preceding messages. According to the exercise graph, the student is now able to choose A2, A3 or A5 as the next activity. Starting the router connected to networks n1 and n2 results in a verified presence of n2. While the presence of the two networks is verified now, the exercise assistant is not able to detect whether the student has started the server, unless its network interface card gets assigned an IP address. Therefore, the pre_messages are authored to prompt the student properly. Choosing to assign the client's IP address as the next activity, using the command `ifconfig eth0 10.0.0.1 up` in the client shell, will result in a verified activity A3. With the IP addresses of the router's and server's NICs still missing, the student can proceed to configure the router's NICs. Having verified that the two NICs of the router are present, the exercise assistant is able to verify A9 using a probe packet. Because routing is enabled by default for hosts in the Netkit environment, the condition of A9 can be verified immediately. After assigning an IP address to the remaining NIC of the server, the student has to alter the routing table on the client and on the server. The exercise assistant is also able to verify these activities by using probing packets. Finally, the student is asked to demonstrate the routing functionality by sending packets between the client and the server using the intermediate router. One valid solution is to use the command ping. Once the final activity is verified, the exercise assistant congratulates the student and then quits.

7 CONCLUSION

We presented an exercise assistant which improves the learning situation of students solving practical exercises in a networking course.
Even when human course advisors are not available, our exercise assistant can recognize learning progress and provide appropriate feedback and support. This significantly improves the learning situation for students working remotely in a virtual environment, which is common at universities for distance education. Besides this automatic support, the exercise assistant can verify intermediate and complete solutions of an exercise. We also presented an approach to formally model exercises in a manner processable by the exercise assistant. For that purpose, the exercise author can define possible activities and sequences using a graph structure. Description logic is used to define conditions for the verification of these activities. The exercise author is also able to define a feedback strategy by adding feedback attributes to the graph. Especially for courses with many participants, our experience shows that teaching staff can benefit from utilizing the exercise assistant. While the teaching method of tutors personally and individually supporting students is certainly one of the most effective for knowledge transfer, it is not feasible for sufficiently large courses. In such scenarios, the exercise assistant can e.g. be used to offer all students basic guided tutoring support not only wherever and whenever they want, but also at the speed that best suits their own learning style and their own abilities.

REFERENCES

Dike, J 2006, User Mode Linux, Prentice Hall, Upper Saddle River, NJ, USA.
SealPK: Sealable Protection Keys for RISC-V Leila Delshadtehrani, Sadullah Canakci, Manuel Egele, and Ajay Joshi Department of Electrical and Computer Engineering, Boston University {delshad, scanakci, megele, joshi}@bu.edu Abstract—With the continuous increase in the number of software-based attacks, there has been a growing effort towards isolating sensitive data and trusted software components from untrusted third-party components. Recently, Intel introduced a new hardware feature for intra-process memory isolation, called Memory Protection Keys (MPK). The limited number of unique domains (16) provided by Intel MPK prohibits its use in cases where a large number of domains are required. Moreover, Intel MPK suffers from the protection key use-after-free vulnerability. To address these shortcomings, in this paper, we propose an efficient intra-process isolation technique for the RISC-V open ISA, called SealPK, which supports up to 1024 unique domains. Additionally, we devise three novel sealing features to protect the allocated domains, their associated pages, and their permissions from modifications or tampering by an attacker. We demonstrate the efficiency of SealPK by leveraging it to implement an isolated secure shadow stack on an FPGA prototype. Index Terms—Intra-Process Memory Isolation, Memory Protection Keys, RISC-V, Isolated Shadow Stack I. INTRODUCTION With the ever-increasing complexity of software applications, today’s software code consists of both trusted components designed in-house and untrusted components such as third-party libraries and application plugins. The coexistence of trusted components with potentially malicious or vulnerable untrusted components in the same address space could compromise the security of the system. While the user-space inter-process isolation protects processes from one another, the intra-process isolation of various software components has been a challenge. Recently, Intel proposed a hardware feature, called Memory Protection Keys (MPK) [10], to efficiently support intra-process memory isolation. Intel MPK allows the user to create a protection domain by assigning a protection key (pkey) to a group of memory pages, and it provides a user-space instruction (WRPKRU) to update the associated permission of a domain. However, Intel MPK suffers from security and scalability issues. In terms of security, Intel MPK suffers from pkey use-after-free vulnerability [9]. Once a pkey gets freed, the kernel does not update the pkey bits of its associated pages. The same freed pkey can later on be allocated to a new domain; as a result, the old pages and the new ones will unintentionally share the same pkey. Additionally, if an attacker tampers with a protection domain, its associated pages, or its corresponding permission, the protection keys serve no purpose. In particular, since Intel MPK allows a user-space instruction to modify the pkey permissions, a malicious component might containWRPKRU instructions or inject those instructions at run-time to update the permission bits of a domain and attain access to a protected domain. In terms of scalability, Intel MPK provides only 16 pkeys. However, some real-world use cases such as OpenSSL [9] require more than 1000 pkeys. In this paper, we propose an efficient intra-process memory isolation capability, called SealPK, leveraging the Open RISC-V Instruction Set Architecture (ISA) [13]. SealPK provides a per-page protection key and supports up to 1024 domains (64 × more than Intel MPK). 
We eliminate the pkey use-after-free problem at Operating System (OS) level by keeping track of the number of pages belonging to the same domain and a lazy de-allocation approach. We propose three novel sealing features to prevent an attacker from modifying sealed domains, their corresponding sealed pages, and their permissions. In particular, our hardware-assisted permission sealing feature enables the software developer to restrict the access to WRPKRU within a specific contiguous range of memory addresses, e.g., a trusted component. Any attempt to execute a WRPKRU instruction from outside of the specified range would lead to a hardware exception. To summarize, our contributions are as follows: - We present an efficient intra-process isolation capability, called SealPK, which supports up to 1024 unique isolated domains. We propose an OS-level solution to avoid the pkey use-after-free issue. We devise three novel sealing features to protect the domains, their associated pages, and their permissions from unauthorized modifications. - We implement SealPK on a RISC-V Rocket processor [1] and extend the Linux kernel to support the protection keys for the RISC-V ISA. We evaluate a prototype of our hardware design on an FPGA with a full Linux software stack. - We demonstrate the efficiency of our design by implementing an isolated shadow stack leveraging SealPK. We open-source our design at https://github.com/bu-icsg/SealPK. In the rest of this paper, we discuss SealPK’s design, sealing features, and evaluation in Section II, III, and IV, respectively. Section V discusses the related work and Section VI concludes the paper. We provide a more detailed description of the SealPK design, implementation, and evaluation in [5]. II. SEALPK: DESIGN A. Hardware Design Scalability is one of the limitations of Intel MPK, as it cannot support more than 16 pkeys. In RISC-V, we leverage the 10 unused bits of the Sv39 PTE to store the pkey. Figure 1 demonstrates our hardware modifications to support SealPK. We add a new entry to each line of the Data Translation Lookaside Buffer (DTLB) to store the corresponding 10-bit pkey of each virtual page. Hence, SealPK supports up to 1024 domains. We store the permission bits of the pkeys separately. In our design, we use 2 bits, i.e., (Read Disable (RD), Write Disable (WD)), to specify the access permission of each protection key. Following the principle of the least privilege, unlike Intel MPK and previous works, our design enables a write-only page, which can in turn reduce the attack surface. We support 1024 pkeys in our design; hence, unlike Intel MPK, we cannot simply use a single register to store all the pkey permission bits. To provide fast access to these bits, we use a 2Kb on-chip SRAM memory to store the permission bits. This memory, called PKR (Figure 1), consists of 32 rows, where each row stores the permission bits of 32 pkeys. We utilize the custom instruction extension of RISC-V ISA [13] to define two new instructions, RDPKR and WRPKR, to read from and write to PKR. We provide a control logic to determine the effective permission bits of each data memory access. Consider the example shown in Figure 1, where there is an incoming write request to the virtual page #87. In addition to reading the page’s read/write permission bits stored in DTLB (11), the control logic reads the corresponding 2-bit permission bits of the pkey (11110000001) stored in PKR. 
The control logic uses the upper 5 bits of the pkey to index into a specific 64-bit row of PKR and the lower 5 bits to select the 2 permission bits (01). The effective permission is the intersection of the DTLB’s and pkey’s permission bits. In this example, the effective permission is 10; hence, the write access is not allowed. This leads to a load/store page fault; the processor triggers an exception, and the OS handles the page fault. B. Kernel Support At the OS level, we add the support to store each page’s pkey in the 10 unused bits of the PTE. Our RISC-V kernel support is built upon the existing Linux kernel support for MPK. 1) Lazy de-allocation: To keep track of the allocated pkeys, we implement a 1024-bit allocation bitmap. To efficiently address the pkey use-after-free problem of Intel MPK, we leverage a lazy de-allocation approach. We implement a 1024-bit dirty map to indicate whether each pkey has been lazily deallocated. We also keep track of the number of pages currently associated with each pkey using a counter map. If a pkey’s corresponding counter is not zero, pkey_free updates the permission bits of the pkey in PKR to (0, 0); hence, the pageable permissions determine the effective permission of the corresponding pages. Rather than clearing the corresponding bit of the pkey in the allocation map, pkey_free sets the dirty bit and pkey_alloc would not allocate a dirty pkey. Whenever a memory page with a dirty pkey gets freed, we update the number of pages associated with the dirty pkey in the counter map, accordingly. Once the counter becomes zero, we erase the dirty bit of the corresponding pkey; hence, it can safely be allocated afterwards. If pkey_alloc cannot find a free non-dirty pkey, it returns an allocation error to indicate no free pkey is available. 2) Per thread OS support: We modify the task_struct in the Linux kernel to maintain the contents of PKR for each thread during the context switches (with negligible performance overhead). Furthermore, we modify the RISC-V page fault handler in the Linux kernel to identify a page fault caused by a pkey permission violation. III. SEALPK: SEALING FEATURES As mentioned before, Intel MPK does not protect the allocated domains, their associated pages, and their permission bits from tampering by an attacker. In this section, we describe three novel sealing features to protect against such tampering. To clarify the defensive capabilities of these features, consider the example shown in Figure 2. In this example, a software developer writes a program that handles sensitive financial records. The Main function (written in-house) initially allocates the memory pages for the financial record (log) as readable-writable and assigns a protection key to these pages. Following the principle of the least privilege, the initial value of the pkey restricts the permission to read-only pages. In this example, Func-A updates the contents of the log. We assume that this function is developed in-house and has access to the pkey. Prior to writing the sensitive financial information into the log, Func-A modifies the domain permission of the log to write-only. For performance reasons, the software developer leverages third-party untrusted libraries in the implementation of Func-B, Func-C, and Func-D. Func-B reads the log and returns a sorted copy of the log. Func-C does not have access to the log, instead it receives a list of prices and converts them to a different currency. 
Func-D reads the log and prints all the transactions of a specific account. Hence, Func-B and Func-D can only access the log as read-only memory. In the rest of this section, we explain how each sealing feature protects the log against potential attacks originating from the untrusted components.

Sealing the domain: In this scenario, Func-B is a malicious third-party component, which receives the log as a read-only input. Func-B is supposed to read the log and return a sorted copy of it. However, this untrusted component allocates a new readable-writable pkey, invokes the mprotect system call, and assigns the new pkey to the log. In this way, Func-B can falsify the financial records stored in the log. Intel MPK is not capable of preventing this malicious modification to the log within the same thread. To prevent such unauthorized modifications, we provide a domain sealing option by adding a sealed_domain map to the kernel. We modify the pkey_mprotect system call to check the sealed_domain map prior to modifying a domain's pkey. Once a domain is sealed, pkey_mprotect prevents any further modifications to PTE permissions as well as the pkey value, effectively thwarting such attacks.

Sealing pages: We assume that after the initialization step in the Main function, no more pages will be added to the log's protection domain. The Main function allocates the log and configures the memory as follows (from Figure 2):

```c
int *log = mmap(NULL, N*getpagesize(), PROT_READ|PROT_WRITE,
                MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
```

Sealing the permissions: We also provide the software support for sealing the permissions. We provide two new custom instructions, i.e., `seal_start` and `seal_end`, to specify the contiguous permissible range of each pkey. Although these instructions can be added to the source code (Figure 2), the more efficient way of using them is by a compiler pass or through run-time mechanisms such as ld-preload. After specifying the start and end addresses of a permissible range for WRPKR, the developer has to invoke a newly added system call (`pkey_perm_seal`) to seal the permissions. This system call leverages a custom instruction, which is only accessible to the supervisor mode, to seal the permission bits by updating the `SealReg` and PK-CAM. We modify the Linux kernel to maintain the `SealReg` information as well as the permissible range of each pkey during context switches for each process. Note that `SealReg` and the permissible range of a pkey are implemented similarly to a one-time fuse, i.e., they can only be written once for each process. Hence, after configuration, the permission sealing feature cannot be modified. By leveraging SealPK's sealing features, the software developer can implement a tamper-proof log of financial records in the face of buggy and malicious third-party components.

IV. EVALUATION

A. Experimental Setup

We use the Chisel HDL [2] to implement SealPK on a RISC-V Rocket core [1]. We add the OS support for SealPK to the Linux kernel v4.15. As a case study, we implement an isolated shadow stack using LLVM front-end (Clang v.7) and back-end (Clang v.8) passes. We prototype our hardware design with the full software stack on a Xilinx Zedboard FPGA. For performance evaluation, we use RISC-V LLVM to cross-compile 6 applications (out of 12) from the SPECint2000 [7] and 4 applications (out of 12) from the SPECint2006 [8] benchmark suites. Due to compilation issues and memory limitations of our FPGA, we were not able to successfully cross-compile and run all the applications from these benchmark suites.
B. Case Study: An Isolated Shadow Stack

As a case study, we use SealPK to protect an isolated shadow stack that prevents Return-Oriented Programming attacks. A shadow stack protects the return addresses by storing them in a separate memory. It is imperative to guarantee the integrity of the shadow stack [3], i.e., the shadow stack area should be an isolated area within the process' address space to prevent attackers from modifying it. We isolate the shadow stack memory in a protection domain. Once the shadow stack memory is allocated and assigned to a domain, no more pages will be added and the protection domain stays the same during the process execution. We leverage the domain and page sealing features to protect the allocated domain and pages of the shadow stack from further modifications (similar to the scenarios described in Section III) after the initial configuration.

For the shadow stack implementation, we first implement a baseline LLVM front-end pass plugin. This front-end pass allocates a memory area for the shadow stack and instruments the prologue and epilogue of each function to push the original return address into the shadow stack memory and pop the shadow return address from that memory, respectively. To isolate the shadow stack, we modify the front-end pass to allocate a pkey and to assign it to the shadow stack memory pages. To protect the shadow stack from modifications, we initialize the pkey as read-only. We implement a RISC-V back-end pass to temporarily update the pkey permission to readable-writable in the prologue, where we push the return address into the shadow stack. Right after pushing the return address, the back-end pass disables the pkey write permission. Our back-end pass inserts the required RDPKR and WRPKR instructions to update the pkey's permission bits. We can leverage our permission sealing feature to restrict the WRPKR occurrences to the memory range of the back-end pass.

To evaluate the performance overhead of SealPK, in our experiments, we used the total execution time of an application as our performance metric. We ran each application three times and report the geometric mean of the execution times.

Fig. 3. Performance overhead of LLVM-based shadow stack implementations (test inputs). The Func implementation uses a function call in the front-end pass rather than inline code (Inline). SealPK-WR is implemented as a back-end pass built upon Func, where it writes the new value of the pkey permission bits without maintaining the rest of the permission bits. SealPK-RD+RW adds the support to read the corresponding row of the pkey before updating it.

Figure 3 shows the performance overhead of various shadow stack implementations compared to the baseline. Inline and Func are front-end LLVM passes that cannot guarantee the integrity of the shadow stack; hence, the shadow stack memory remains unprotected. SealPK-WR and SealPK-RD+RW are isolated shadow stack implementations leveraging SealPK in a back-end pass. mprotect is our comparison point, an isolated shadow stack implemented by leveraging the mprotect system call. As expected, using mprotect incurs considerable performance overhead, i.e., 2875.62% and 1982.70%, on average, for SPEC2000 and SPEC2006, respectively, which makes it an infeasible option. SealPK-RD+RW has an average of 21.00% and 14.81% performance overhead for SPEC2000 and SPEC2006 applications, respectively. In terms of area overhead, enhancing the Rocket core with SealPK increases the LUT and FF utilization of our FPGA by 5.62% and 2.72%, respectively.
V. RELATED WORK

To address Intel MPK's limitations, Hodor [6] and ERIM [12] combine Intel MPK with binary inspection to prevent reuse of the WRPKRU instruction by an attacker. The permission sealing feature of SealPK provides a similar capability by restricting valid WRPKR instructions to a contiguous range of memory addresses for each pkey. Although our sealing feature is limited to one valid memory range for each pkey, its simplicity and efficiency distinguish our work from Hodor and ERIM. To allow the occurrence of WRPKR instructions in more than one trusted component, we can rely on a CFI technique for the RISC-V Rocket core [4] to protect PKR from manipulation by an attacker. libmpk [9] and Xu et al. [14] provide a software-based and a hardware-based virtualization technique, respectively, to address the limited number of pkeys. We can leverage such virtualization techniques to support more than 1024 domains for SealPK. Donky [11] provides a secure user-space software framework to protect the domain permissions against manipulations without relying on binary inspection or CFI. Donky proposes a pkey extension for the RISC-V ISA implemented on the Ariane core. Similar to SealPK, Donky uses the 10 unused bits of Sv39 PTEs to store the pkeys, but it relies on a 64-bit CSR (managed by a software library) to store the permission bits of only 4 pkeys at a time. If the pkey of the accessed memory address is not loaded into that CSR, Donky requires extra cycles for the software library to load the missing pkey and its permission into the register. The permission sealing feature of SealPK allows us to protect a domain against CFI attacks in cases where the valid WRPKR instructions occur in a contiguous range of memory addresses. In addition to this feature, SealPK provides two other novel sealing features to protect a domain and its associated pages from tampering.

VI. CONCLUSION

In this paper, we proposed an efficient intra-process memory isolation technique (SealPK) for a RISC-V processor, which supports up to 1024 domains. In our design, we provided three novel sealing features to protect a domain, its associated pages, and its permission bits from unauthorized modifications. To address the pkey use-after-free problem, we used an OS-level lazy de-allocation approach. We prototyped RISC-V Rocket + SealPK on an FPGA with a full software stack, and demonstrated the efficiency of SealPK by securing a shadow stack.

ACKNOWLEDGMENTS

This material is based upon work supported by the National Science Foundation under Grant No. CNS-1916393.

REFERENCES
Course "Empirical Evaluation in Informatics"
**Controlled Experiments**

Prof. Dr. Lutz Prechelt
Freie Universität Berlin, Institut für Informatik
http://www.inf.fu-berlin.de/inst/ag-se/

- Example 1: flow charts
- Control and constancy
- Threats to constancy
- Techniques for achieving constancy
- Example 2: design pattern documentation

Example 1: Flowcharts vs. Pseudocode
• Question: Is an algorithm easier to comprehend if presented as a flow chart or if presented as pseudocode?
• Study format: Controlled experiment

Flowchart, Pseudocode
• (These examples are not equivalent!)

```
PROC
  IF GREEN
    THEN BAKE
    ELSE BOIL
      IF CRISPY
        THEN STEAM
        ELSE FRY
      END IF
  END IF
END PROC
```

Experiment rationale
• Earlier experiments by Shneiderman et al. on the same question had not found any differences
• Scanlan criticizes these experiments:
  • they measured only correctness, not work time
  • some questions could not be answered from the flowchart alone
  • the program was too simple
• Scanlan attempts to create experiments without these flaws

Experiment setup
- Subjects: 82 MIS majors (junior to graduate)
- Independent variables (inputs):
  - program complexity (length): simple, medium, complex
  - presentation type: flowchart, pseudocode
  - therefore, there are $3 \times 2 = 6$ experiment groups
- Subjects study an algorithm and answer a fixed set of comprehension questions
  - $6 \times 2$, $9 \times 4$, $10 \times 6$ questions for the simple, medium, and complex algorithm
- Example questions:
  - "What are the values (true/false/unknown) at all decisions in the algorithm when the vegetable is boiled?"
  - "What are the values at all decisions in the algorithm when the vegetable is both boiled and steamed?"
  - (all questions are of this type)
- The experiment is run fully automatically
  - by a computer with speech output

Experiment setup (2)
- Flowcharts and pseudocode are each printed on a single sheet of paper
- A mechanical machine switches between the algorithm sheet and the question/answer sheet
  - only one is visible at any time
  - the subject can switch as s/he pleases
- Dependent variables (outputs):
  - algorithm view time
  - question answering time
  - number of algorithm views
  - percentage of correct answers
  - subjective confidence in the answers

Experiment setup (3)
- Each subject is part of all six groups
  - leads to $6 \times 82 = 492$ data points overall
- This is possible because the algorithms use randomized combinations of verbs and adjectives
  - (What would be the problem otherwise?)

Complex algorithm
[Figure: the "complex" algorithm, a deeply nested IF/ELSE pseudocode over conditions such as HARD, TALL, LEAFY, CRISPY, RED, and JUICY, with actions such as FRY, GRILL, CHOP, PEEL, BOIL, ROAST, and BAKE; the listing itself was garbled during extraction.]

Results
- The subjects in the flowchart groups
  - require less algorithm view time
  - require far fewer algorithm views
  - provide more correct answers
  - have higher confidence in their answers
- The differences tend to become more pronounced with increasing algorithm complexity

Table A. Percentage of correct answers to all question parts.
<table> <thead> <tr> <th>Complexity level</th> <th>IV</th> <th>Total parts</th> <th>% correct (means)</th> <th>s</th> <th>t</th> <th>df</th> <th>p</th> </tr> </thead> <tbody> <tr> <td>Simple</td> <td>FC</td> <td>12</td> <td>97.97</td> <td>8.50</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>12</td> <td>93.80</td> <td>10.90</td> <td>2.77</td> <td>81</td> <td>.0035</td> </tr> <tr> <td>Medium</td> <td>FC</td> <td>36</td> <td>98.81</td> <td>3.40</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>36</td> <td>94.92</td> <td>10.30</td> <td>4.05</td> <td>81</td> <td>.0000</td> </tr> <tr> <td>Complex</td> <td>FC</td> <td>60</td> <td>98.68</td> <td>3.50</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>60</td> <td>91.71</td> <td>14.40</td> <td>4.82</td> <td>81</td> <td>.0000</td> </tr> </tbody> </table> FC = flowchart; PC = pseudocode; IV = independent variable; s = standard deviation (in seconds); t = correlated t-test result; df = degrees of freedom; p = probability. Discussion: Internal validity / credibility - The internal validity of this experiment is very high - We can be confident to find similar results if we repeated the experiment - Problems avoided by this experiment setup: - accidental group differences - by using large groups and an intra-subject design - measurement errors - by fully automatic measurement mechanism - accidental experimenter influence on subject motivation - by fully automatic experiment guidance (speech output etc.) - and more - e.g. by using a shielded room, by having practice sessions - The only remaining question: - Are the subjects equally well trained in both notations? Discussion: External validity / credibil.+relevance - The external validity of this experiment is very problematic: - Issues with the structure of the algorithms - Issues with the meaning of the algorithms - Issues with the size of the algorithms - Issues with the number of questions (in relation to algorithm size) - Issues with the type/content of questions External validity: Task too simple <table> <thead> <tr> <th>Complexity level</th> <th>IV</th> <th>Total parts to questions</th> <th>Means (sec./part)</th> <th>s</th> <th>t</th> <th>df</th> <th>p</th> </tr> </thead> <tbody> <tr> <td>Simple</td> <td>FC</td> <td>12</td> <td>7.83</td> <td>5.09</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>12</td> <td>13.44</td> <td>7.75</td> <td>6.47</td> <td>81</td> <td>.0000</td> </tr> <tr> <td>Medium</td> <td>FC</td> <td>36</td> <td>6.19</td> <td>3.02</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>36</td> <td>11.71</td> <td>6.50</td> <td>9.43</td> <td>81</td> <td>.0000</td> </tr> <tr> <td>Complex</td> <td>FC</td> <td>60</td> <td>6.33</td> <td>2.37</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>60</td> <td>15.80</td> <td>10.98</td> <td>8.45</td> <td>81</td> <td>.0000</td> </tr> </tbody> </table> FC = flowchart; PC = pseudocode; IV = independent variable; s = standard deviation (in seconds); t = correlated t-test result; df = degrees of freedom; p = probability. External validity: Too many questions (2) Table D. Mean number of times the algorithm was viewed when answering each part of all questions. 
<table> <thead> <tr> <th>Complexity level</th> <th>IV</th> <th>Total parts</th> <th>Times/part</th> <th>$s$</th> <th>$t$</th> <th>df</th> <th>$p$</th> </tr> </thead> <tbody> <tr> <td>Simple</td> <td>FC</td> <td>12</td> <td>1.30</td> <td>.275</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>12</td> <td>1.41</td> <td>.344</td> <td>3.25</td> <td>81</td> <td>.0008</td> </tr> <tr> <td>Medium</td> <td>FC</td> <td>36</td> <td>.86</td> <td>.239</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>36</td> <td>.92</td> <td>.289</td> <td>2.84</td> <td>81</td> <td>.0030</td> </tr> <tr> <td>Complex</td> <td>FC</td> <td>60</td> <td>.72</td> <td>.229</td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>PC</td> <td>60</td> <td>.82</td> <td>.296</td> <td>4.55</td> <td>81</td> <td>.0000</td> </tr> </tbody> </table> Methodology of controlled experiments - "Experiment": Latin 'experimentum' (attempt, trial, experience) - means to try something out, to manipulate the situation - Control refers to the construction of a repeatable situation - rather than one that has many arbitrary or even unknown attributes - Assume the situation can be fully characterized by N attributes - Then we want to experiment with k of them (often k=1) - And manipulate it - To understand its effects, the other N-k attributes have to be kept constant - The purpose of control is achieving constancy Constancy in the natural sciences - In basic physics or chemistry it is often relatively easy to achieve constancy - Although it may be difficult to set the experimental attributes to the values one wishes to explore - e.g. temperature and pressure for nuclear fusion - The most difficult problem historically is finding out what attributes are relevant - e.g. understanding the nature of infectious diseases Constancy with human beings • In contrast, whenever human beings are part of the experiment, constancy becomes extremely difficult: • No two human beings are the same • No one human being is the same over time (memory!) • The only known approach to obtain constancy for the human-related attributes of an experiment is averaging: • Pick a large number of humans ("subjects") at random • Assign each to an experiment condition at random • Perform the experiment with each one • Use the average results per group: differences balance out • It works, except for one problem: • Subject motivation may depend on the value of the experimental variable • e.g. design method A is considered more 'sexy' than B Threats to constancy - Individual differences - The largest and most important effect in most human-related informatics experiments - e.g. capability, endurance, motivation - History - Long-running experiments are influenced by outside events - Maturation - Subjects learn and change during an experiment - Instrumentation - Human observers change during an experiment - Technical measurement infrastructure may also change - Mortality - Not all subjects stay until the end of the experiment Threats to constancy (2) • Experimenter influence • Experimenter handles subjects of different groups (or the data collected about them) in a biased way • Sequence effects • The influence if the same subject solves more than one task • The order can influence the results • E.g. learning, tiring, boredom • Sophistication • If subjects understand what the experiment is trying to find out, that can influence the result • e.g. 
unrealistic focus on one aspect of a task Constancy in medicine: double blind testing - The averaging method for achieving constancy can be applied to perfection in drug testing - We want to compare two medicines A and B - Or even A to doing nothing: use a placebo - A subject does not know which one s/he receives ("blinding") - The doctor does not know which one s/he applies ("blinding") - This is called a "double blind" experiment - But mortality can still be a big problem - Unfortunately this approach is almost never applicable in informatics - You cannot apply a technique without knowing - So we almost always need to consider motivation differences as a threat to constancy and hence to internal validity Techniques for achieving constancy - Randomization - balances individual differences - Matching - reduces individual differences - Counterbalancing - compensates sequence effects Randomization - Subjects must not assign themselves to the experiment conditions based on personal preferences - May produce bias - e.g. the more capable subjects may be more interested in the design method that appears more 'modern' - Experimenters also must not assign subjects based on whatever kinds of preferences - May produce bias - e.g. may assign the more capable subjects to his/her favorite method – even unconsciously - Random assignment is the only method for avoiding bias - But may be very difficult, e.g. because not all subjects have the required knowledge for all experiment conditions - Without random assignment, the study becomes a quasi-experiment Matching - Random assignment needs not make each single assignment from the whole pool of remaining subjects - Instead, we may pre-group 'similar' subjects into tuples of j (for j experiment conditions) and randomize over one tuple at a time - This is called matching - Matching may increase group similarity and may effectively reduce individual variation across the groups - Example: - Order the subjects by expected design capability - Take the next best 2 at each time - Assign one to method A and one to B randomly - Matched samples can improve the sensitivity of statistical analysis Counterbalancing - Often subjects need to perform more than one task - because suitable subjects are rare, because instructing them is expensive, etc. - This will produce sequence effects - learning, tiring, etc. - To compensate these effects: - Have the same number of subjects perform the tasks in each of all possible task orders - for each of the experiment conditions or orders of experiment conditions - usually realistic only for 2 tasks Counterbalancing: example A typical experiment plan in informatics is as follows: - We want to compare design methods A and B - We use two different tasks 1 and 2 - Each subject solves both tasks - Solving one task twice (once with each method) makes no sense - due to learning (sequence effect) - Experiment groups: - (group: first task, second task) - G1: A1, B2 - G2: A2, B1 - G3: B1, A2 - G4: B2, A1 Example 2: Design pattern documentation - Prechelt, Unger, Philippsen, Tichy: "Two Controlled Experiments Assessing the Usefulness of Design Pattern Documentation in Program Maintenance", IEEE Transactions on Software Engineering, June 2002 - Situation: You have programs that use/contain design patterns. The programs (source code) are well commented, but no separate design documentation exists. Now the programs must be modified. 
- Question: Does understanding and modifying the programs become easier if the design pattern usage is documented explicitly? Experiment variable • The independent variable of this is whether or not PCLs were added to an already well-documented program • PCL: Pattern Comment Line A comment section that explicitly describes how a particular program element participates in a pattern • Example: lines 484 and 485 are PCLs ```java 477 /** 478 NTTupleDisp2 displays NTTuple, where 479 1. Tuples with an empty telephone number are left out and 480 2. Tuples are sorted by (last)name 481 Using Tuple objects of other Tuple types results in 482 ClassCastException. 483 *** DESIGN PATTERN: *** 484 NTTupleDisp2 completes the **Template Method** newTuple() 485 of TupleDispA 486 */ 487 final class NTTupleDisp2 extends TupleDispA { ``` Experiment tasks - The subjects worked on two different programs - Phonebook: A trivial phonebook management application with two different views of the data - Uses the 'Observer' and 'Template Method' design patterns - And/Or tree: A library (plus simple application) for handling AND/OR trees of Strings - Uses the 'Composite' and 'Visitor' design patterns - For each program they solved a set of 4 small comprehension and modification tasks - for which the patterns were relevant Dependent variables - The observed variables were: - time: The total time for solving one task - quality: A grading (in points) of the submitted solution according to well-defined criteria Experiment design • Nomenclature: • A: And/Or tree, P: Phonebook, • +: with PCL added, -: without • Counterbalanced design: • 4 groups: A+ P- A- P+ P+ A- P- A+ • Randomized assignment of subjects to groups • No matching Subjects The experiment was performed twice: - **UKA**: 74 diploma students of University of Karlsruhe; programs in Java - prepared solutions on paper - incorrect answers produce no feedback → harder to detect - **WUSTL**: 22 undergraduate students of Washington University, St. Louis; programs in C++ - implemented solutions on Unix workstations - All had taken a laboratory course on Java/C++ including design patterns Results And/Or tree (difficult task) - UKA: '+' is slower but much more often correct - Reason: wrong answers produce no feedback (work is on paper!) - WUSTL: '+' is much faster <table> <thead> <tr> <th>Variable</th> <th>mean with PCL</th> <th>mean w/o PCL</th> <th>means difference (90% confid.)</th> <th>significance p</th> </tr> </thead> <tbody> <tr> <td><strong>UKA, program And/Or-tree:</strong></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>1 relevant points</td> <td>8.5</td> <td>7.8</td> <td>−7.7%... + 23%</td> <td>0.20</td> </tr> <tr> <td>2 #corr. solutions</td> <td>15 of 38</td> <td>7 of 36</td> <td>−3.0%... + 24%</td> <td>0.094</td> </tr> <tr> <td>3 time (minutes)</td> <td>58.0</td> <td>52.2</td> <td>−11%... + 41%</td> <td>0.17</td> </tr> <tr> <td>4 — corr. only</td> <td>52.3</td> <td>45.4</td> <td></td> <td></td> </tr> <tr> <td><strong>WUSTL, program And/Or-tree:</strong></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>5 relevant points</td> <td>6.7</td> <td>6.5</td> <td>−12%... + 19%</td> <td>0.28</td> </tr> <tr> <td>6 #corr. solutions</td> <td>4 of 8</td> <td>3 of 8</td> <td></td> <td>1</td> </tr> <tr> <td>7 time (minutes)</td> <td>52.1</td> <td>67.5</td> <td>−43%... 
− 0.5%</td> <td>0.046</td> </tr> </tbody> </table> Results phonebook (simple task) - **UKA**: '+' is faster - **WUSTL**: results were discarded - subjects lacked Observer knowledge - C++ version had no GUI, hence was unintuitive <table> <thead> <tr> <th>Variable</th> <th>mean with PCL $D^+$</th> <th>mean w/o PCL $D^-$</th> <th>means difference (90% confid.)</th> <th>significance $p$</th> </tr> </thead> <tbody> <tr> <td><strong>UKA, program Phonebook:</strong></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>8 relevant points</td> <td>16.1</td> <td>16.3</td> <td>$-8.0% \ldots + 4.0%$</td> <td>0.35</td> </tr> <tr> <td>9 #corr. solutions</td> <td>17 of 36</td> <td>15 of 38</td> <td>$-22% \ldots + 0.3%$</td> <td>0.64</td> </tr> <tr> <td>10 time (minutes)</td> <td>51.5</td> <td>57.9</td> <td></td> <td>0.055</td> </tr> </tbody> </table> Discussion of internal validity • Extraneous variables are controlled well by the counterbalanced design • even if groups were unequal, differences contribute equally to the experiment condition and the control condition Problem: • Quite some mortality in the WUSTL experiment • Very last event of the semester • "I have to catch my plane home" • Fortunately, mortality in experiment and control groups is about equal • Has therefore probably not distorted the results Threats to external validity Differences to professional SW engineering contexts: • Subject experience/capabilities: • Professionals may - have less need for PCL (would decrease effect) or - may make better use of PCL information (would increase effect) • Team work: • May increase effect because patterns provide a common terminology; PCL allows for exploiting it • Program size: • Larger programs may show a larger effect, as PCL provides program slicing information • Program and task representativeness: • is unclear Is 'no PCL' a good control group? - It is surprisingly unclear what would be a valid experiment design for finding out whether "having design pattern information is useful" for maintenance: - Giving somebody program structure information (which somebody else does not have) will often help - but may have nothing to do with design patterns - Can the given comparison be considered fair? Analysis of documentation content - Analyzed which pieces of information are present how often in the documentation - here: for And/Or tree - Identified 18 pieces (A-R), 4 of them crucial for solving the given tasks - PCL is redundant: 17 pieces are present in non-PCL comments - incl. the 4 crucial ones A, B, L, M - Therefore, the comparison is fair: - redundant information could also have hurt! 
Description of some information pieces <table> <thead> <tr> <th>id</th> <th>Design Information Unit (UKA And/Or-tree)</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>There is an element/container structure</td> </tr> <tr> <td>B</td> <td>Element is the superclass of the element/container structure</td> </tr> <tr> <td>C</td> <td>Element is abstract</td> </tr> <tr> <td>D</td> <td>AndElement is a part of the element/container structure</td> </tr> <tr> <td>E</td> <td>OrElement is a part of the element/container structure</td> </tr> <tr> <td>F</td> <td>StringElement is a part of the element/container structure</td> </tr> <tr> <td>G</td> <td>There are multiple container classes</td> </tr> <tr> <td>H</td> <td>AndElement is a container class</td> </tr> <tr> <td>I</td> <td>OrElement is a container class</td> </tr> <tr> <td>J</td> <td>There is only one element class</td> </tr> <tr> <td>K</td> <td>StringElement is an element class*</td> </tr> <tr> <td>L</td> <td>There is an iterator structure*</td> </tr> </tbody> </table> Summary • Controlled experiments apply the scientific method in its purest form: • Test whether an effect predicted by some theory is observed • Control is for achieving constancy in the attributes that are not investigated (extraneous variables) • Constancy is difficult to obtain with human subjects • They just differ so much! • The only way is repetition and averaging • Other threats to constancy are history, maturation, instrumentation or experimenter effects, mortality, sequence effects, and sophistication • Methods for improving constancy are randomization, matching, and counterbalancing Thank you!
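As a small, concrete companion to the summary above, the Java sketch below illustrates two ideas from this lecture: random assignment of subjects to equally sized (counterbalanced) groups, and the correlated (paired) t statistic of the kind reported in Tables A-D. All names and numbers are made up for illustration; this is not the code or data of the studies discussed.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ExperimentSketch {

    // Randomly assign subjects to numGroups equally sized groups
    // (e.g., the four counterbalanced orders A1/B2, A2/B1, B1/A2, B2/A1).
    static List<List<Integer>> assignToGroups(int numSubjects, int numGroups) {
        List<Integer> subjects = new ArrayList<>();
        for (int s = 0; s < numSubjects; s++) subjects.add(s);
        Collections.shuffle(subjects);                      // randomization, not self-selection
        List<List<Integer>> groups = new ArrayList<>();
        for (int g = 0; g < numGroups; g++) groups.add(new ArrayList<>());
        for (int i = 0; i < subjects.size(); i++)
            groups.get(i % numGroups).add(subjects.get(i)); // round-robin after shuffling
        return groups;
    }

    // Correlated (paired) t statistic over per-subject differences,
    // e.g., answering time with flowchart vs. with pseudocode; df = n - 1.
    static double pairedT(double[] a, double[] b) {
        int n = a.length;
        double meanDiff = 0;
        for (int i = 0; i < n; i++) meanDiff += (a[i] - b[i]) / n;
        double var = 0;
        for (int i = 0; i < n; i++) {
            double d = (a[i] - b[i]) - meanDiff;
            var += d * d / (n - 1);
        }
        return meanDiff / Math.sqrt(var / n);
    }

    public static void main(String[] args) {
        System.out.println(assignToGroups(8, 4));
        double[] flowchart  = {6.1, 7.0, 5.8, 6.4};    // placeholder values, not study data
        double[] pseudocode = {11.2, 12.5, 10.9, 13.0}; // placeholder values, not study data
        System.out.printf("paired t = %.2f (df = %d)%n",
                pairedT(flowchart, pseudocode), flowchart.length - 1);
    }
}
```

Because each subject contributes a pair of measurements, individual differences largely cancel in the difference scores, which is exactly why the intra-subject designs above can use the correlated t-test.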
Automatic Configuration of Opaque Network Functions in CMS

Serena Spinoso*, Marco Leogrande†, Fulvio Risso*, Sushil Singh† and Riccardo Sisto*
*Department of Control and Computer Engineering, Politecnico di Torino, Italy
†PLUMgrid, Sunnyvale, CA, USA
Published by IEEE, DOI: 10.1109/UCC.2014.122

Abstract—Cloud Management Systems (CMS) such as OpenStack are commonly used to manage IT resources such as computing and storage in large datacenters. Recently, CMS have started to offer customers the possibility to customize their network infrastructure as well, allowing each tenant to build a virtual network made of elementary blocks such as traffic monitors, switches, routers, firewalls, and more. However, tenants have to choose those network services from the list of services made available by the CMS and have no possibility to customize the applications they want. This paper examines some of the modifications required in a CMS to support a tenant-centric network service model, in which each tenant can install and configure their preferred network functions, without being limited to the list provided by the CMS. A prototype implementation validates the proposed approach and demonstrates the extent of the modifications in terms of languages and software components.

I. INTRODUCTION

The concepts of Software Defined Networking (SDN) [1] and Network Function Virtualization (NFV) [2] allow Network Service Providers (NSPs) and companies to give more freedom to their customers. Unfortunately, today any change to the location and settings of customers' Virtual Machines (VMs) in data center networks has to be managed by the operator. In addition, tenants can use only the functions provided by their NSP. For these reasons, network operators are looking at new scenarios where tenants are offered the possibility to create Virtual Networks [3] that are managed and configured by the tenants themselves, without requiring operator action. In this way, a tenant could define how his traffic should be processed using a set of network functions chosen by himself. This could also allow a tenant to decide how to connect his resources (VMs) without directly controlling the physical network. In particular, whether these functions are distributed in the operator network or all located in a data center, the service does not change from the tenant's point of view.

Today, NSPs that offer cloud-based solutions leverage a Cloud Management System (CMS) to manage computing and storage in their data centers, and another component, called a Network Operating System (NOS), for network management. The NOS and the CMS interact to guarantee a multi-tenant environment: the NOS receives from the CMS a virtual network definition for each tenant and configurations for each function of that network. This interaction is limiting, because the tenant is allowed to build his virtual network only by choosing from network functions provided by his NSP.
Hence, if a tenant would like to insert a different function, the operator has to modify his system, taking care of the integration of the new function in terms of configuration and communication with the other components (such as other network functions or the NOS). In this paper, we propose a possible solution that enables the configuration of network functions that are opaque from the operator's point of view. In particular, a network function is a module that processes traffic in a specific manner and can be implemented in software or deployed on a physical network element (e.g., firewall, DPI, NAT, router, etc.). In our vision, NSPs could allow tenants to insert new functions, written by any programmer, into their virtual networks, but the operator should not need to know how those functions work or what type of functions they are. Operators would thus handle these opaque network functions as black boxes, while still ensuring their full integration into the operator's network. This means that tenants have to be able to configure any function in one of the ways supported by the function itself: a tenant that uses a firewall, for example, has to be able to load a set of protection policies onto the firewall, and similarly to load a traffic pattern into a DPI to check for possible attacks.

The remainder of this paper is organized as follows: in Section II, we describe the related work that forms our background; Section III presents an overview of our architecture; in Section IV, a prototype of our solution is described in detail; in Section V, we demonstrate the validity of our implementation through two use cases; finally, Section VI concludes and presents possible future work.

II. RELATED WORK

The research community has presented several works related to ours. Among them, we can find possible architectures for managing Network Service Chains (NSCs). One of these architectures is described in the work by Beliveau [4], while the NSC Architecture (NSCA) is presented in [5]. However, such architectures do not have any mechanism to extend the set of allowed functions, and hence neither to introduce new functions nor to configure them. In addition, the concept of a chain is more static than a virtual network: traffic can follow just one path chosen based on tenant policies, rather than being able to follow any arbitrary path in the network. Another proposal related to virtual service chains is being developed within the European project UNIFY. The approach taken by this project is close to ours because, in UNIFY, NSPs can distribute network functions across the whole network, locating management aspects in an automated orchestration engine [6] [7]. The UNIFY project has also expressed the need for a service abstraction model for defining and programming service chains; however, to the best of our knowledge, the configuration of the single network functions that compose a service chain is overlooked, leaving the configuration issue an open topic.

A service description is needed by the CMS to understand the basic requirements of an opaque network function. H. Song has noted in [8] the need for a standardization of the information model, in order to represent the user's functional and resource requirements, and to map and apply these requirements to the underlying infrastructure. The literature offers different solutions, which address description at the service level and at the resource level, from both the hardware (physical and virtual) and software points of view.
One of these proposals is VXDL [9], which is defined as a language for describing a virtual network topology, including storage, computing and links, and a virtual timeline that specifies when a certain resource is needed. Unfortunately, this temporal constraint is difficult to synchronize with the orchestration engine. In this context, another example is the network-centric cloud architecture proposed in [10], where a centralized control layer manages the resources available for all network services. Finally, there have been several approaches in the literature for configuring network functions, such as the NETCONF [11] and SNMP [12] protocols. However, from an operator's point of view, the use of such protocols is quite limiting because tenants can use only those network functions that support such configuration protocols, while we envision an architecture that is as flexible as possible.

III. THE PROPOSED ARCHITECTURE

In our architecture, the main actors are: the operator (or Network Service Provider, NSP), the tenant, and the programmer. The main objective of our architecture is to give flexibility to tenants by allowing the set of functions available to a tenant to be extended according to the tenant's needs. Reaching this goal by progressively increasing the overall number of network functions offered by the NSP is not trivial, because any requirement coming from a tenant might imply a huge integration cost; also, different tenants might request support for different network functions. This is why our proposal focuses on giving a tenant the possibility to introduce any new network functions implemented by third parties (we refer to them as programmers) into his virtual network, and to configure them through a unified API provided by his network operator. We would also like to relieve the programmer from the burden of integrating his own network functions, implemented as Virtual Network Functions (VNFs)\(^1\), into every specific NSP architecture. The VNFs should be readily usable in any present and future architecture, without the need for specific integration efforts. Finally, the network operator should be able to load any third-party VNF into his own network without additional complications. Furthermore, we would like to avoid the insertion of any VNF-specific configuration plug-ins inside the network operator's CMS: this avoids the problem of supporting arbitrary front-ends inside the unified view offered by the CMS.

A. Challenges

There are challenges to be solved both when inserting such VNFs into a virtual network and when configuring them. With respect to the insertion problem, there should be a way to load a VNF into a virtual network and link it to the other ones; furthermore, the spectrum of VNF configuration methods is very wide and, even if we can categorize them into common types, every function has its own quirks. The insertion problem can already be solved by many CMS. If a programmer can provide a disk image of his VNF, a CMS can treat it like a regular Virtual Machine; also, since many of their network plug-ins already support stitching VMs into a virtual network, a basic level of insertion can be achieved today. Many of the outstanding issues are related to the configuration phase instead; hence we focused our attention on them. We also believe that, by having a rich configuration service, less complexity is needed in the insertion phase.
As an example, let us consider the case of a third-party router deployed into a virtual network: in a traditional scenario, a tenant is required to deploy the router into a virtual network and then access its configuration interface through a virtual console (or a similar mechanism) to configure the network interfaces of the router in terms of IP address, routing protocols, etc. In our vision, there should be no need to access this VNF-specific interface, and the tenant should be able to configure the router through the same API that he used to deploy the router in the network. In addition, given a suitable configuration service, automatic configuration could be enabled, for example, in the case of tenant configuration errors. Considering the same router and a third-party web cache connected to the same subnet, if the tenant changes the subnet prefix and reconfigures just the IP address of the router interface, the NOS could recognize such a misconfiguration and hence should have the means to fix this error by properly configuring the web cache.

Inserting opaque functions might pose high risks for NSPs: due to the lack of a relationship between the programmer and the NSP that is installing an unknown function into his network element, the NSP could take precautions by verifying that this function respects certain parameters. This problem, however, is out of our scope, and it was also taken into consideration by other works, like [13], that addressed the possibility to run software modules in network elements.

\(^1\) We use the terms "network function" and "VNF" interchangeably.

B. Architecture overview

Figure 1 shows a high-level view of the whole system architecture. Each tenant can control his virtual network through a global interface, that is, an operator-defined API. For each of the VNFs in the tenant's network, the NOS receives configuration messages from the API and interacts properly with the actual VNF. Since each VNF could ultimately be configured through different configuration methods (e.g., file, CLI, REST, etc.), each with specific details (formats, commands, etc.), it is important to make the NOS able to configure a VNF regardless of these intrinsic details. Having a unified description format helps all the actors involved: the programmer can define the VNF configuration format and supported methods in a way that is recognized by any network operator, who, in turn, is able to insert and use any VNF that adheres to the unified description format. This format also simplifies the projection of the configuration of the VNF through the tenant-visible API, since it is independent from the actual configuration methods used. The unified description format allows the transparent use of a configuration method among a list of standard ones. To be inserted opaquely, however, a VNF should support one or more of those methods, and the operator should support any of them on his system (according to any specific policies that might arise).

C. Configuration translators

Each configuration method could require specific parameters: for example, for a configuration through the CLI, it is necessary to know which command enables administrative authorization. For this reason the architecture includes, for each configuration method, a specific configuration translator that is aware of all the particular techniques and parameters needed for that method. As shown in Figure 1, each translator configures the VNF directly.
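To make the translator concept above more tangible, here is a minimal sketch of what such a translator interface could look like. It is written in Java purely for illustration (the authors' actual prototype is a C++ library, described in Section IV), and every type and method name below is an assumption made for this sketch, not part of the paper's API.

```java
// Illustrative sketch only: all names are assumptions, not the paper's API.
import java.util.Map;

// What the NOS hands to a translator for one VNF.
record ObjectModelInstance(Map<String, String> values) {}     // tenant's configuration values
record ConfigRules(Map<String, String> formatRules) {}         // rules from the YANG description file
record AccessParameters(String address, int port, String credential) {}

// One implementation per configuration method (file, CLI, REST, ...).
interface ConfigTranslator {
    void configure(ObjectModelInstance config,
                   ConfigRules rules,
                   AccessParameters access) throws Exception;
}

// A file-based translator renders the object model according to the rules
// and then loads the resulting file into the VNF using the access parameters
// (e.g., the VNF's management address and credentials).
class FileConfigTranslator implements ConfigTranslator {
    @Override
    public void configure(ObjectModelInstance config,
                          ConfigRules rules,
                          AccessParameters access) {
        String rendered = render(config, rules);   // rendering driven by the rules
        upload(rendered, access);                  // push the generated file to the VNF
    }

    private String render(ObjectModelInstance config, ConfigRules rules) { return ""; }
    private void upload(String fileContents, AccessParameters access) { /* e.g., SCP/SFTP */ }
}
```

The point of such an interface is that the NOS only has to pick the right translator for whatever configuration method a VNF declares; everything method-specific stays inside the translator.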
Having separate translators also makes the system more extensible and manageable, as it allows an easier insertion, replacement and removal of configuration methods: when the operator wants to support a new configuration method, the operator has just to make available a new translator. Configuration translators receive multiple inputs (Figure 1) : (i) the tenants configuration received from the operator-defined API and saved into an object model to know the actual values that should be set inside the VNF; (ii) the VNF configuration rules, to know the format required to deploy those configuration values into the VNF; (iii) a set of VNF access parameters required to connect to the VNF (e.g., IP address of the VNF, root password, etc...) and to load the configuration into it. The structure of the object model and the VNF configuration rules are VNF-specific; they are both provided by the programmer through a description file, written in the unified description format. This allows the programmer to write the description file only once, and use the same file even across different NSPs. The VNF access parameters are, instead, translator-specific and VNF-independent: the number and type of these parameters is standardized for each translator, but their actual run-time values are set by either the network operator or the programmer, depending on the specific case. D. Configuration translators inputs An instance of the object model, specific for a VNF, collects the configuration parameters of that VNF, provided by the tenant. The object model instance is self-descriptive: in other words, one can discover its structure from the instance itself. This is important because when the configuration translator receives the object model instance, it can derive the structure of the model that was used by the programmer in the description file; this is crucial to generate the VNF configuration in the right format. Using an object model also makes easier to change in a transparent way the global API provided by the operator and avoids data-structure formats specific for translator to collect the VNF configuration chosen by the tenant. The VNF configuration rules are a set of directives used to drive the translator in generating the VNF configuration in the right format (Figure 1). They express the way to translate the structure and content of the object model instance into the specific structure required by the VNF configuration method. If a specific VNF supports multiple configuration methods, the programmer can include VNF configuration rules for all of them in the same description file. The VNF access parameters are used to instruct the configuration translator about how to connect to the VNF and load the configuration provided by tenant. As explained before, the programmer does not set all of these parameters, because some of them might be tied to some management aspect internal to the NOS, like VNF location. All of these inputs will be used to generate the final VNF configuration, following the workflow shown in Figure 1. Taking the example of a firewall, a user would like to define the network policy rules. In this case, the object model instance contains the set of policy rules themselves; the VNF configuration rules specify the format of policy rules in the particular VNF architecture; VNF access parameters describe how to program the policy rules inside the firewall (e.g., the IP address, port and protocol required to connect to the firewall to deploy the configuration). IV. 
ARCHITECTURE IMPLEMENTATION

This section describes a prototype implementation of the architecture presented above. We have also validated its workflow using two use cases described in the next section. We first present some details that were left out of the architecture description to keep it more generic, namely the choice of the languages used for the description file and for the VNF access parameters. We then describe our prototype and its validation.

Listing 1: YANG language example.

```yang
module router {
  import ietf-inet-types { prefix inet; }
  import ietf-yang-types { prefix yang; }

  list interfaces {
    //api:file:header "//Beginning of the Config File";
    //api:file:list_format "%NAME \n";
    //api:file:separators ";\n";
    //api:file:footer "\n//End of the Config File";
    key name;
    leaf name { type string; }

    list ethernet {
      //api:file:list_format "%NAME %VALUE \n";
      //api:file:separators ";\n";
      //api:file:footer "\n";
      key name;
      leaf name { type string; }
      leaf address {
        //api:file:leaf_format "%NAME %VALUE\n";
        type inet:ipv4-address;
      }
      leaf hwid {
        //api:file:leaf_format "hw-id %VALUE\n";
        type yang:mac-address;
      }
    }
  }
}
```

A. Language Choices

The YANG language [14] has been chosen for the description file. YANG is a data modeling language developed by the IETF to model configuration and state data manipulated by the NETCONF protocol. YANG was chosen for several reasons: it is orthogonal to network protocols, implementation-independent, and human-readable; it is also a language developed with network configuration in mind, and it is extensible, as it allows the creation of user-defined statements. In our case, the configuration data for a VNF is modeled in YANG by creating an object model specific to that VNF. An example of a possible YANG description file for a router is shown in Listing 1, where we define a structure to save the state of Ethernet interfaces. The idea is to have a data structure that enumerates all interfaces of a given router and, for each of them, stores all of the network and physical addresses associated with that interface\(^2\). Accordingly, a top-level interfaces list is defined to include the names of all the interfaces to be configured; a nested ethernet list contains all addresses specific to an Ethernet interface. YANG provides by default a number of directives to validate some properties of its statements. Examples of directives provided by YANG are: type checking; a default value for a leaf statement; definition of mandatory or optional statements (like leaf, list, leaf-list and others). Other simple validations are possible through the definition of new YANG types. A more complex validation system would require an extension of the YANG language\(^3\). Since, in the proposed solution, the description file includes both the structure of the object model and the VNF configuration rules, those rules have to be specified in the YANG language as well.

B. VNF configuration rules syntax

VNF configuration rules take the form of special comments in the description file (Listing 1). These rules are defined in a particular statement with the following structure:

```
<Translator_N>:<Rule_N> <Rule_V>
```

where <Translator_N> specifies which configuration translator the rule belongs to, and <Rule_N> and <Rule_V> represent the rule name and value.
This allows us to group all the rules for a specific translator under a specific prefix: we can consider them similar to a programming-language namespace, which allows us to reuse a rule name across translators if we need to. <Translator_N> can assume values like "file", "cli", "rest", etc., that denote the translators created in our system. As an example, let us consider a translator to configure VNFs using files: each rule for this translator is preceded by the prefix "//api:file:". We can see some of them in Listing 1: separators, list_format, leaf_format, header and footer. All rule values are interpreted as strings. When generating the configuration file, header and footer are printed respectively before and after the current element (e.g., list or leaf), while separators is used to separate the child nodes of the current element (of course it is not applicable to a leaf statement, which does not have child nodes by definition). Furthermore, list_format and leaf_format work like a printf in the C language, in which %NAME and %VALUE are expanded with values depending on the context. In particular, %NAME and %VALUE represent, respectively, the name of their YANG node (e.g., "ethernet" for the list ethernet and "address" for the leaf address) and its actual value (in the case of a list, it will be the value of its key). None of the keywords is mandatory.

\(^2\) Usually a network interface is assigned only one network address and one physical address, but this is not true in the general case.
\(^3\) In fact, existence constraints could be needed: this is the case when a parameter may exist only if another one was set, or only if that other parameter has a particular value.

C. VNF access parameters syntax

The default features of the YANG language are enough to define the VNF access parameters: what is needed is a configuration-oriented language. To keep the system definition uniform, YANG has been used for the VNF access parameters as well. In addition, it is interesting to note that many of the VNF access parameters represent networking parameters (e.g., IPv4 or IPv6 addresses, MAC addresses and others). Hence, in addition to the built-in types, it is possible to leverage the YANG derived type statements defined by the IETF in [15].

D. Prototype Implementation

In our prototype, a C++ library, called Config_API, has been designed to implement different configuration methods, one per translator. In particular, to validate the architecture, we implemented a translator, called Config_File_API, to configure a VNF using files, regardless of their format (e.g., XML, plain text, or others). This translator receives as inputs: (1) the YANG object model instance of a VNF (which contains the tenant's configuration); (2) the VNF configuration rules (specified in the YANG description file); (3) the VNF access parameters (defined in a different file, as shown in Figure 1). With respect to the VNF access parameters, the translator expects the NOS to convert the configuration received from the tenant through REST into another object model, which is instead specific to the Config_File_API (dashed line in Figure 2). For the sake of simplicity, in our implementation we have set all the VNF access parameters as configurable by the tenant. In a real-world scenario, however, some of these parameters (e.g., the IP address where the VNF is located) should be managed only by the operator. Finally, we note that our solution supports functions that require multiple configuration files.
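Before the multi-file case is detailed below, the following sketch shows how rules such as header, footer, separators and leaf_format could drive the rendering of an object-model instance into a configuration file. It is a simplified illustration written for this text in Java, not the authors' Config_File_API code; it handles only a single flat list of leaves, and the sample values merely echo the Bind9 configuration shown later in Listing 4.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RuleRenderingSketch {

    // Render one list-like node: header first, then each child leaf via
    // leaf_format (with %NAME/%VALUE expanded), children joined by the
    // separator, and finally the footer.
    static String renderList(String header, String footer, String separator,
                             String leafFormat, Map<String, String> leaves) {
        StringBuilder out = new StringBuilder(header);
        boolean first = true;
        for (Map.Entry<String, String> leaf : leaves.entrySet()) {
            if (!first) out.append(separator);
            out.append(leafFormat
                    .replace("%NAME", leaf.getKey())
                    .replace("%VALUE", leaf.getValue()));
            first = false;
        }
        return out.append(footer).toString();
    }

    public static void main(String[] args) {
        // Illustrative leaves for one "zone" entry of a DNS server.
        Map<String, String> zone = new LinkedHashMap<>();
        zone.put("type", "slave");
        zone.put("file", "\"db.example.com\"");
        System.out.print(renderList("zone \"example.com\" {\n", "}\n", "",
                "  %NAME %VALUE;\n", zone));
    }
}
```

The real translator would additionally consult the VNF access parameters to load the generated file into the VNF.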
The Config_File_API library can be instructed to write different portions of the same YANG file into different configuration files, so that VNFs that require it can dump different parts of their data into different locations. This can be done because of the object model abstraction: for the purposes of the Config_File_API library, a YANG list at the topmost level of the YANG file is no different from another list nested under it.

Listing 2: YANG description file for Bind9 (excerpt).

```yang
module bind9 {
  list zone {
    //api:file:list_format "%NAME \"%VALUE\"\n";
    //api:file:separators ";\n\n";
    //api:file:footer ";\n\n";
    key name;
    leaf name { type string; }
    leaf type {
      //api:file:leaf_format "%NAME %VALUE\n";
      type string;
    }
    leaf file {
      //api:file:leaf_format "%NAME \"%VALUE\"\n";
      type string;
    }
    leaf master {
      //api:file:leaf_format "%NAME \"%VALUE\"\n";
      type string;
    }
  }
}
```

V. TESTING

Our prototype was validated using two network functions: Bind9 and Vyatta Core. Bind9 is an implementation of a DNS server, and we have defined a YANG description file for this VNF that collects all the information needed to guarantee its correct behavior regardless of the role it is configured to act in; an excerpt of this description file is shown in Listing 2. For our test, we manually started an instance of Bind9 in our prototype and configured it to act as a Secondary Master (which gets the zone data from another name server that is the Primary Master for that zone) by editing its object model through the REST interface. To better understand the test, we show an excerpt of the final configuration file, automatically generated by our system, where we have defined a zone in Bind9 syntax (Listing 4). In particular, our test first uses a bash script to send HTTP messages to the NOS through the REST interface. After that, the Bind9 instance is interrogated directly to validate that the expected configuration was created and was loaded correctly. The workflow of our test is shown in Figure 2, as well as the structure of our prototype: first of all, we sent two messages to set the VNF access parameters and the configuration parameters for Bind9; the Config_File_API read the three inputs already explained, to generate and load the configuration file into the VNF; then we interrogated the Bind9 VNF directly to verify that the whole process worked fine.

We have done a similar test for the second use case, Vyatta Core, which is a software router. Listing 1 shows an excerpt of the YANG description file for this router. For our test, we configured an Ethernet interface, defining its IP address and the other main parameters, as shown in Listing 3. As in the previous case, we have validated our configuration with another bash script. This test, as in the previous case, created an instance of an ethernet list in the YANG object model and set its parameters. Then we validated the Layer 3 configuration of the Vyatta Core instance by testing its reachability through an ICMP request.

Listing 3: Vyatta configuration file.

```
interfaces {
  ethernet eth0 {
    address dhcp
    duplex auto
    hw-id 00:0c:29:64:66:1c
    mtu 1500
    smp_affinity auto
    speed auto
  }
}
```

Listing 4: BIND9 configuration file.

```
zone "example.com" {
  type slave;
  file "db.example.com";
  masters { 192.168.1.10; }
}
```

VI. CONCLUSION AND FUTURE WORK

This paper focuses on opaque network function configuration inside NSPs' networks.
After illustrating the type of services that NSPs provide to their customers, we motivated the need for the tenant-centric model and illustrated how to extend the typical CMS architecture to integrate third-party VNFs. To do this, we leverage a VNF description file that allows the NOS to know the main aspects of an external VNF. Finally, we presented a prototype of our solution. This prototype was validated by implementing VNF configuration through configuration files, using a solution that is independent of the specific format used by the VNF for its configuration files (e.g., XML, text, or proprietary). Our tests produced a successful validation; they consisted of a specific translator that creates configuration files and that interacted with two different network functions: Bind9 (a DNS server) and Vyatta Core (a software router). Possible future extensions include the addition of more intensive validation mechanisms, since currently we leverage only the validation instruments provided by YANG. In particular, this work could address both the validation of the configuration output (e.g., more complex constraint checking) and the validation of the correct integration in the system (e.g., guaranteeing that all requirements defined by the final user are respected, or guaranteeing the expected behavior of the virtual network). Our solution could also be tested with other types of VNFs to validate different configuration file formats.

ACKNOWLEDGMENT

The authors would like to thank PLUMgrid, Inc., a startup based in California, USA, which has supported this work.

REFERENCES
Chapter 3: Program Statements

Now we will examine some other program statements.

Chapter 3 focuses on:
- program development stages
- the flow of control through a method
- decision-making statements
- expressions for making complex decisions
- repetition statements
- drawing with conditionals and loops

Program Development
- The creation of software involves four basic activities:
  - establishing the requirements
  - creating a design
  - implementing the code
  - testing the implementation
- The development process is much more involved than this, but these are the four basic development activities.

Requirements
- Software requirements specify the tasks a program must accomplish (what to do, not how to do it).
- They often include a description of the user interface.
- An initial set of requirements is often provided, but it usually must be critiqued, modified, and expanded.
- It is often difficult to establish detailed, unambiguous, complete requirements.
- Careful attention to the requirements can save significant time and expense in the overall project.

Design
- A *software design* specifies how a program will accomplish its requirements.
- A design includes one or more *algorithms* to accomplish its goal.
- An *algorithm* is a step-by-step process for solving a problem.
- An algorithm may be expressed in *pseudocode*, which is code-like but does not necessarily follow any specific syntax.
- In object-oriented development, the design establishes the classes, objects, methods, and data that are required.

Implementation
- *Implementation* is the process of translating a design into source code.
- Most novice programmers think that writing code is the heart of software development, but actually it should be the least creative step.
- Almost all important decisions are made during the requirements and design stages.
- Implementation should focus on coding details, including style guidelines and documentation.

Testing
- A program should be executed multiple times with various input in an attempt to find errors.
- *Debugging* is the process of discovering the causes of problems and fixing them.
- Programmers often think erroneously that there is "only one more bug" to fix.
- Tests should consider design details as well as overall requirements.

Flow of Control
- Unless specified otherwise, the order of statement execution through a method is linear: one statement after the other, in sequence.
- Some programming statements modify that order, allowing us to:
  - decide whether or not to execute a particular statement, or
  - perform a statement over and over, repetitively
- These decisions are based on a *boolean expression* (also called a *condition*) that evaluates to true or false.
- The order of statement execution is called the *flow of control*.

Conditional Statements
- A conditional statement lets us choose which statement will be executed next.
- Therefore they are sometimes called selection statements.
- Conditional statements give us the power to make basic decisions.
- Some conditional statements in Java are:
  - the if statement
  - the if-else statement

The if Statement
- An example of an if statement:

```java
if (sum > MAX)
    delta = sum - MAX;
System.out.println("The sum is " + sum);
```

- First, the condition is evaluated. The value of sum is either greater than the value of MAX, or it is not.
- If the condition is true, the assignment statement is executed.
- If it is not, the assignment statement is skipped.
- Either way, the call to println is executed next.
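A minimal, self-contained program showing this flow of control (this is an illustrative sketch, not the textbook's Age.java; the class name and the values of MAX and sum are invented):

```java
public class SumCheck {
    public static void main(String[] args) {
        final int MAX = 100;   // upper bound for the sum
        int sum = 120;         // value to test
        int delta = 0;

        if (sum > MAX)
            delta = sum - MAX; // executed only when the condition is true

        // executed in either case
        System.out.println("The sum is " + sum + " (delta = " + delta + ")");
    }
}
```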
- See Age.java (page 130)

Logic of an if statement
- The if statement has the following syntax:

```java
if (condition)
{
    statement;
}
```

- The condition must be a boolean expression.
- It must evaluate to either true or false.
- If the condition is true, the statement is executed.
- If it is false, the statement is skipped.

Boolean Expressions
- A condition often uses one of Java's *equality operators* or *relational operators*, which all return *boolean results*:
  - `==` equal to
  - `!=` not equal to
  - `<` less than
  - `>` greater than
  - `<=` less than or equal to
  - `>=` greater than or equal to
- Note the difference between the equality operator (`==`) and the assignment operator (`=`).

The if-else Statement
- An *else clause* can be added to an *if* statement to make an *if-else statement*:

```java
if (condition)
    statement1;
else
    statement2;
```

- If the condition is true, `statement1` is executed; if the condition is false, `statement2` is executed.
- One or the other will be executed, but not both.
- See `Wages.java` (page 134)

Block Statements
- Several statements can be grouped together into a *block statement*:

```java
{
    ...
}
```

- A block is delimited by braces (`{ ... }`).
- A block statement can be used wherever a statement is called for by the Java syntax.
- For example, in an *if-else* statement, the *if* portion, or the *else* portion, or both, could be block statements.
- See `Guessing.java` (page 136)

Nested if Statements
- The statement executed as a result of an if statement or else clause could be another if statement.
- These are called nested if statements.
- See MinOfThree.java (page 138).
- An else clause is matched to the last unmatched if (no matter what the indentation implies).
- Braces can be used to specify the if statement to which an else clause belongs.

Logical Operators
- Boolean expressions can use the following logical operators:
  - `!`  logical NOT
  - `&&` logical AND
  - `||` logical OR
- They all take boolean operands and produce boolean results.
- Logical NOT is a unary operator (it operates on one operand).
- Logical AND and logical OR are binary operators (each operates on two operands).

Logical NOT
- The logical NOT operation is also called logical negation or logical complement.
- If some boolean condition a is true, then !a is false; if a is false, then !a is true.
- Logical expressions can be shown using truth tables.

<table>
 <thead>
  <tr><th>a</th><th>!a</th></tr>
 </thead>
 <tbody>
  <tr><td>true</td><td>false</td></tr>
  <tr><td>false</td><td>true</td></tr>
 </tbody>
</table>

Logical AND and Logical OR
- The logical AND expression `a && b` is true if both a and b are true, and false otherwise.
- The logical OR expression `a || b` is true if a or b or both are true, and false otherwise.

Truth Tables
- A truth table shows the possible true/false combinations of the terms.
- Since && and || each have two operands, there are four possible combinations of conditions a and b.

<table>
 <thead>
  <tr><th>a</th><th>b</th><th>a &amp;&amp; b</th><th>a || b</th></tr>
 </thead>
 <tbody>
  <tr><td>true</td><td>true</td><td>true</td><td>true</td></tr>
  <tr><td>true</td><td>false</td><td>false</td><td>true</td></tr>
  <tr><td>false</td><td>true</td><td>false</td><td>true</td></tr>
  <tr><td>false</td><td>false</td><td>false</td><td>false</td></tr>
 </tbody>
</table>

Logical Operators
- Conditions can use logical operators to form complex expressions.

```java
if (total < MAX && total/count > MAX)
    System.out.println("Testing.");
```

- Logical operators have precedence relationships among themselves and with other operators.
- all logical operators have lower precedence than the relational or arithmetic operators.
- logical NOT has higher precedence than logical AND and logical OR.

Short-Circuited Operators
- The processing of logical AND and logical OR is "short-circuited".
- If the left operand is sufficient to determine the result, the right operand is not evaluated.

```java
if (count != 0 && total/count > MAX)
    System.out.println("Testing.");
```

- This type of processing must be used carefully.

Truth Tables
- Specific expressions can be evaluated using truth tables.

<table>
 <thead>
  <tr><th>total &lt; MAX</th><th>Found</th><th>!Found</th><th>total &lt; MAX &amp;&amp; !Found</th></tr>
 </thead>
 <tbody>
  <tr><td>false</td><td>false</td><td>true</td><td>false</td></tr>
  <tr><td>false</td><td>true</td><td>false</td><td>false</td></tr>
  <tr><td>true</td><td>false</td><td>true</td><td>true</td></tr>
  <tr><td>true</td><td>true</td><td>false</td><td>false</td></tr>
 </tbody>
</table>

Comparing Characters
- We can use the relational operators on character data.
- The results are based on the Unicode character set.
- The following condition is true because the character + comes before the character J in the Unicode character set:

```java
if ('+' < 'J')
    System.out.println("+ is less than J");
```

- The uppercase alphabet (A-Z) followed by the lowercase alphabet (a-z) appears in alphabetical order in the Unicode character set.

Lexicographic Ordering
- Because comparing characters and strings is based on a character set, it is called a lexicographic ordering.
- This is not strictly alphabetical when uppercase and lowercase characters are mixed.
- For example, the string "Great" comes before the string "fantastic" because all of the uppercase letters come before all of the lowercase letters in Unicode.
- Also, short strings come before longer strings with the same prefix (lexicographically).
- Therefore, "book" comes before "bookcase".

Comparing Strings
- Remember that a character string in Java is an object.
- We cannot use the relational operators to compare strings.
- The equals method can be called with strings to determine if two strings contain exactly the same characters in the same order.
- The String class also contains a method called compareTo to determine if one string comes before another (based on the Unicode character set).

Comparing Float Values
- We also have to be careful when comparing two floating point values (float or double) for equality.
- You should rarely use the equality operator (==) when comparing two floats.
- In many situations, you might consider two floating point numbers to be "close enough" even if they aren't exactly equal.
- Therefore, to determine the equality of two floats, you may want to use the following technique:

```java
if (Math.abs(f1 - f2) < 0.00001)
    System.out.println("Essentially equal.");
```

More Operators
➢ To round out our knowledge of Java operators, let's examine a few more
➢ In particular, we will examine
  • the increment and decrement operators
  • the assignment operators

Increment and Decrement
➢ The increment and decrement operators are arithmetic and operate on one operand
➢ The increment operator (++) adds one to its operand
➢ The decrement operator (--) subtracts one from its operand
➢ The statement `count++;` is functionally equivalent to `count = count + 1;`

Assignment Operators
➢ Often we perform an operation on a variable, and then store the result back into that variable
➢ Java provides assignment operators to simplify that process
➢ For example, the statement `num += count;` is equivalent to `num = num + count;`

Assignment Operators
➢ There are many
assignment operators, including the following:

<table>
 <thead>
  <tr><th>Operator</th><th>Example</th><th>Equivalent To</th></tr>
 </thead>
 <tbody>
  <tr><td>+=</td><td>x += y</td><td>x = x + y</td></tr>
  <tr><td>-=</td><td>x -= y</td><td>x = x - y</td></tr>
  <tr><td>*=</td><td>x *= y</td><td>x = x * y</td></tr>
  <tr><td>/=</td><td>x /= y</td><td>x = x / y</td></tr>
  <tr><td>%=</td><td>x %= y</td><td>x = x % y</td></tr>
 </tbody>
</table>

Assignment Operators
- The right-hand side of an assignment operator can be a complex expression.
- The entire right-hand expression is evaluated first, then the result is combined with the original variable.
- Therefore `result /= (total-MIN) % num;` is equivalent to `result = result / ((total-MIN) % num);`

Repetition Statements
- Repetition statements allow us to execute a statement multiple times.
- Often they are referred to as loops.
- Like conditional statements, they are controlled by boolean expressions.
- The text covers two kinds of repetition statements:
  - the while loop
  - the for loop
- The programmer should choose the right kind of loop for the situation.

The while Statement
- The while statement has the following syntax:

```java
while (condition)
    statement;
```

- If the condition is true, the statement is executed. Then the condition is evaluated again.
- The statement is executed repeatedly until the condition becomes false.

Logic of a while Loop

The while Statement
- Note that if the condition of a while statement is false initially, the statement is never executed.
- Therefore, the body of a while loop will execute zero or more times.
- See Counter.java (page 147)
- See Average.java (page 148)
  - A sentinel value indicates the end of the input.
  - The variable `sum` maintains a running sum.
- See WinPercentage.java (page 151)
  - A loop is used to validate the input, making the program more robust.

Infinite Loops
- The body of a while loop eventually must make the condition false.
- If not, it is an infinite loop, which will execute until the user interrupts the program.
- This is a common logical error.
- You should always double check to ensure that your loops will terminate normally.
- See Forever.java (page 152)

Nested Loops
- Similar to nested if statements, loops can be nested as well.
- That is, the body of a loop can contain another loop.
- Each time through the outer loop, the inner loop goes through its full set of iterations.
- See PalindromeTester.java (page 155)

Iterators
- An iterator is an object that has methods that allow you to process a collection of items one at a time.
- The hasNext and next methods are used to loop through the collection:

```java
while (myCollection.hasNext())
{
    System.out.println(myCollection.next());
}
```

- Several classes in the Java class library define iterator objects, including Scanner.
- See URLDissector.java (page 158)

The for Statement
- A for loop is functionally equivalent to the following while loop structure:

```java
initialization;
while (condition)
{
    statement;
    increment;
}
```

Logic of a for loop
- The for statement has the following syntax:

```java
for (initialization; condition; increment)
{
    statement;
}
```

- The initialization is executed once before the loop begins.
- The statement is executed until the condition becomes false.
- The increment portion is executed at the end of each iteration.
- The condition-statement-increment cycle is executed repeatedly.
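As a concrete illustration of this equivalence (an added example, not one of the textbook's listings), the two loops below both print the integers 1 through 5:

```java
public class LoopEquivalence {
    public static void main(String[] args) {
        // for version: initialization, condition, and increment in the header
        for (int i = 1; i <= 5; i++)
            System.out.println(i);

        // equivalent while version: the same three parts written out explicitly
        int count = 1;             // initialization
        while (count <= 5) {       // condition
            System.out.println(count);
            count++;               // increment
        }
    }
}
```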
The for Statement
- Like a while loop, the condition of a for statement is tested prior to executing the loop body.
- Therefore, the body of a for loop will execute zero or more times.
- It is well suited for executing a loop a specific number of times that can be determined in advance.
- See Counter2.java (page 161)
- See Multiples.java (page 163)
- See Stars.java (page 165)

Iterators and for Loops
- A variation of the for loop, called the for-each loop, allows us to process collections just like iterators, but without the complicated syntax.
- If bookList is an iterable collection that manages Book objects, we can do the following:

```java
for (Book myBook : bookList)
{
    System.out.println(myBook);
}
```

- See IceCreamShop.java (page 167)

Choosing a Loop Structure
- When you can't determine how many times you want to execute the loop body, use a while statement.
- If you can determine how many times you want to execute the loop body, use a for statement.

The for Statement
- Each expression in the header of a for loop is optional:
  - If the initialization is left out, no initialization is performed.
  - If the condition is left out, it is always considered to be true, and therefore creates an infinite loop.
  - If the increment is left out, no increment operation is performed.
- Both semi-colons are always required in the for loop header.

Program Development
- We now have several additional statements and operators at our disposal.
- Following proper development steps is important.
- Suppose you were given some initial requirements:
  - accept a series of test scores
  - compute the average test score
  - determine the highest and lowest test scores
  - display the average, highest, and lowest test scores

Program Development
- Requirements Analysis – clarify and flesh out the specific requirements:
  - How much data will there be?
  - How should data be accepted?
  - Is there a specific output format required?
- After conferring with the client, we determine that:
  - the program must process an arbitrary number of test scores
  - the program should accept input interactively
  - the average should be presented to two decimal places
- The process of requirements analysis may take a long time.

Program Development
- Design – determine a possible general solution
  - Input strategy? (Sentinel value?)
  - Calculations needed?
- An initial algorithm might be expressed in pseudocode.
- Multiple versions of the solution might be needed to refine it.
- Alternatives to the solution should be carefully considered.

Program Development
- Implementation – translate the design into source code.
- Make sure to follow coding and style guidelines.
- Implementation should be integrated with compiling and testing your solution.
- This process mirrors the more complex development models we will eventually need for larger software.
- The result is a final implementation.
- See ExamGrades.java (page 170); a simplified sketch of such a program appears at the end of this chapter.

Program Development
- Testing – attempt to find errors that may exist in your programmed solution.
- Compare your code to the design and resolve any discrepancies.
- Determine test cases that will stress the limits and boundaries of your solution.
- Carefully retest after finding and fixing an error.

Summary
- Chapter 3 has focused on:
  - program development stages
  - the flow of control through a method
  - decision-making statements
  - expressions for making complex decisions
  - repetition statements
  - drawing with conditionals and loops

More Drawing Techniques
- Conditionals and loops can greatly enhance our ability to control graphics.
- See Bullseye.java (page 173)
- See Boxes.java (page 175)
- See BarHeights.java (page 177)
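To tie the development example together, here is a minimal sketch of the test-score program described above. It is an illustration only, not the textbook's ExamGrades.java; the class name is invented and it assumes a negative score is used as the sentinel value:

```java
import java.util.Scanner;

public class ScoreSummary {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int count = 0, sum = 0;
        int highest = Integer.MIN_VALUE, lowest = Integer.MAX_VALUE;

        System.out.print("Enter a score (negative to quit): ");
        int score = scan.nextInt();
        while (score >= 0) {                        // sentinel-controlled loop
            count++;
            sum += score;
            if (score > highest) highest = score;  // track the extremes
            if (score < lowest)  lowest = score;
            System.out.print("Enter a score (negative to quit): ");
            score = scan.nextInt();
        }

        if (count > 0)
            System.out.printf("Average: %.2f  Highest: %d  Lowest: %d%n",
                              (double) sum / count, highest, lowest);
        else
            System.out.println("No scores were entered.");
    }
}
```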
{"Source-Url": "http://www.mrleecomputing.org/wp-content/uploads/2014/11/Ch03_pptSlides.pdf", "len_cl100k_base": 4191, "olmocr-version": "0.1.53", "pdf-total-pages": 14, "total-fallback-pages": 0, "total-input-tokens": 30938, "total-output-tokens": 4813, "length": "2e12", "weborganizer": {"__label__adult": 0.00043487548828125, "__label__art_design": 0.00026106834411621094, "__label__crime_law": 0.00033211708068847656, "__label__education_jobs": 0.0011529922485351562, "__label__entertainment": 4.89354133605957e-05, "__label__fashion_beauty": 0.00016427040100097656, "__label__finance_business": 0.000156402587890625, "__label__food_dining": 0.000400543212890625, "__label__games": 0.0005068778991699219, "__label__hardware": 0.0006356239318847656, "__label__health": 0.0003147125244140625, "__label__history": 0.0001704692840576172, "__label__home_hobbies": 9.000301361083984e-05, "__label__industrial": 0.00030231475830078125, "__label__literature": 0.00020325183868408203, "__label__politics": 0.00023555755615234375, "__label__religion": 0.0005040168762207031, "__label__science_tech": 0.00122833251953125, "__label__social_life": 8.249282836914062e-05, "__label__software": 0.002407073974609375, "__label__software_dev": 0.9892578125, "__label__sports_fitness": 0.000469207763671875, "__label__transportation": 0.00045680999755859375, "__label__travel": 0.00025272369384765625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17806, 0.00624]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17806, 0.81079]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17806, 0.87694]], "google_gemma-3-12b-it_contains_pii": [[0, 1081, false], [1081, 2823, null], [2823, 3905, null], [3905, 5068, null], [5068, 6339, null], [6339, 7919, null], [7919, 9827, null], [9827, 10965, null], [10965, 12004, null], [12004, 13084, null], [13084, 14112, null], [14112, 15492, null], [15492, 17061, null], [17061, 17806, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1081, true], [1081, 2823, null], [2823, 3905, null], [3905, 5068, null], [5068, 6339, null], [6339, 7919, null], [7919, 9827, null], [9827, 10965, null], [10965, 12004, null], [12004, 13084, null], [13084, 14112, null], [14112, 15492, null], [15492, 17061, null], [17061, 17806, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17806, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, true], [5000, 17806, null]], "pdf_page_numbers": [[0, 1081, 1], [1081, 2823, 2], [2823, 3905, 3], [3905, 5068, 4], [5068, 6339, 5], [6339, 7919, 6], [7919, 9827, 7], [9827, 10965, 8], [10965, 12004, 9], [12004, 13084, 10], [13084, 14112, 11], [14112, 15492, 12], [15492, 17061, 13], [17061, 17806, 14]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17806, 0.06069]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
cc1a986b4ceb677bdc0810ae344fcfc02c3efc45
A Comprehensive Study of Software Risk Management

Akshay Sharma¹, Deepak Basora², Nikita Chhillar³ & Deepika Yadav⁴
Department of Computer Science, Dronacharya College of Engineering, Gurgaon, India
sharma.akshay781@gmail.com¹, deepak1990basora@gmail.com², nikitachhillar@yahoo.com³, dipssaggi16@gmail.com⁴

Abstract: The challenges and realities in applying effective software risk management processes are difficult, in particular integrating risk management processes into software development organizations. However, the benefits of implementing effective risk management tools and techniques in a software development project are equally great. Risk management provides a disciplined environment for proactive decision-making: to assess continuously what can go wrong, determine which risks are important to deal with, and implement actions to deal with those risks. Current perceptions and emerging trends of various software risk management practices are reviewed, and risks specific to software development projects are identified. Risk management planning addresses the strategy for risk management, the risk management process, and the techniques, methods, and tools to be used to support that process. If a risk management process is in place for each software development effort, future problems can be minimized or avoided altogether. This paper addresses lessons learned from implementing project risk management practices in a software development environment, recognizes the increasing role of risk management in present software projects, and aims at providing more support in this area.

Keywords: Risk management tools, Risk management planning, Risk management process, Software development

I. INTRODUCTION

The software industry is one of the largest manufacturing industries in the world, with $350 billion in off-the-shelf software sold each year and over $100 billion in customized code on top of that. Project failures are the result of the multiplicity of risks inherent in the software project environment. Software development projects are collections of larger programs with many interactions and dependencies. They involve the creation of something that has never been done before, although the development processes are similar to those of other projects [1].

Risk management is an investment; that is, there are costs associated with identifying risks, analyzing those risks, and establishing plans to mitigate them. Software risk management is a software engineering practice with processes, methods, and tools for managing risks in a project. The main objective of risk management is to identify potential problems before they occur, so that risk-handling activities can be planned and invoked as needed across the life of the product or project to mitigate adverse impacts on achieving objectives. It should begin at the earliest stages of project planning and continue throughout the total life cycle of the project [2]. Different types of risks are found that will affect budget, user satisfaction, and system performance. Studies indicate that 15 to 35% of all software projects are cancelled outright, and the remaining projects suffer from schedule slippage, cost overrun, or failure to meet their project goals [3, 4]. Software project risk management is a discipline in which the project team continually assesses what may negatively impact the project, determines the probability of such events occurring, and determines the impact of such events.
It provides a disciplined environment for proactive decision-making: to assess continuously what can go wrong, determine which risks are important to deal with, and implement actions to deal with those risks. Risk management planning addresses the strategy for risk management, the risk management process, and the techniques, methods, and tools to be used to support that process [2]. However, project success is difficult to predict because project scope is changed by continuously evolving market requirements, and resources are constantly being reallocated to accommodate the latest market conditions. Projects for specific customers also have a large degree of requirements uncertainty due to customized technical attributes. As a result, software development engineers have high turnover rates among software development firms. For example, software managers in India perceived personnel turnover as their biggest source of risk [5]. Many software projects and programs involve many entities, such as companies and divisions, that may have competing interests. There is often a feeling of disconnection between software developers and their management, each believing that the others are out of touch with reality, resulting in misunderstanding and a lack of trust. Research shows that 45% of all the causes of delayed software deliverables are related to managerial issues [6].

II. RISK

A risk is a potential future harm that may arise from some present action, such as a schedule slip or a cost overrun. The loss is often considered in terms of direct financial loss, but it can also be a loss of credibility, future business, property, or life. Risk in itself is not bad; risk is essential to progress, and failure is often a key part of learning. But we must learn to balance the possible negative consequences of risk against the potential benefits of its associated opportunity (Van Scoy, 1992) [7]. Risk is a function of the likelihood of a given threat-source's exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization. In IT systems, risk can be introduced from the Internet, servers, networks, malicious insiders, and even lapses in physical security.

Risk is the possibility of loss. It is a function of both the probability of an adverse event occurring and its impact; the impact manifests itself in a combination of financial loss, time delay, and loss of performance. A risk is the precursor to a problem: the probability that, at any given point in the software life cycle, the predicted goals cannot be achieved within the available resources. Risk cannot be eliminated from a software project, but it can be managed. Risk management in software engineering addresses the various future harms that could affect the software as a result of minor or unnoticed mistakes in the development project or process. Risk management is critical to the success of any software effort and is a strategic aspect of all software projects. This issue is generally managed by software project management. During the life cycle of software projects, various risks are associated with them; these risks are identified and managed by software risk management. Some of the important aspects of risk management in software engineering are software risk management, risk classification, and strategies for risk management [9].
A. Factors creating risks:

Current perceptions about risk management in the majority of software project organizations contribute to the lack of project stability, in addition to the inherent challenges posed by the nature of software projects. Forces that contribute to loss or damage constitute elements of risk. Some influences are external to the enterprise and others are internal. These forces cannot be completely eliminated, and, hence, the enterprise has to take a calculated risk on its IT investment.

Risk can be classified into systematic and unsystematic risks [10]. Systematic risk refers to that portion of risk caused by external factors; it is common and may affect all firms. Viruses, hacking, fire, natural disasters, and power loss are sources of systematic risk. Their effect is felt by many of the companies placed in the same position. For example, a loophole in an Internet browser that is vulnerable to hacking affects all of the firms that use that browser. Unsystematic risk is the portion of total risk that is unique to the firm. Factors such as misuse of data, loss of data, application errors, human interaction, insider attacks, and equipment malfunction can be cited as unsystematic risk. Unsystematic factors are largely independent of factors affecting the IT industry in general. The proportion of systematic and unsystematic risk denotes the degree of vulnerability of the firm to external or internal factors. Systematic risk is also known as generic risk, and unsystematic risk is also known as specific risk. Even though systematic risk is common to all firms of a similar nature, its effect is not the same across all firms; this may be due to differences in the level of exposure and the countermeasures taken by firms.

Further, there are three key software risk factors that concern both executives and software managers:

a) Estimation errors: Some tasks are harder to estimate than others, because of a lack of experience with similar tasks or because of the nature of the task. Producing a set of user manuals is reasonably straightforward and, given that we have carried out similar tasks previously, we should be able to estimate with some degree of accuracy how long it will take and how much it will cost. Estimation can be improved by analysing historic data for similar activities and similar systems.

b) Planning assumptions: At every stage during planning, assumptions are made which, if not valid, may put the plan at risk. In the planning process it is important to list explicitly all the assumptions that have been made and identify what effect they might have on the plan.

c) Eventualities: Some eventualities might never be foreseen, and we can only resign ourselves to the fact that unimaginable things do happen; they are, however, very rare. The majority of unexpected events can be identified: the requirements specification might be altered after some of the modules have been coded, or the required hardware might not be delivered on time. Such events do happen from time to time.

B. Risk classification:

The key purpose of classifying risk is to get a collective viewpoint on a group of factors. These are the types of factors that will help project managers to identify the group that contributes the maximum risk. Risk classification is considered an economical way of analyzing risks and their causes by grouping similar risks together into classes.
Some of the most important risks in software engineering projects, described in Table I, are categorized as software requirement risks, software cost risks, software scheduling risks, software quality risks, and software business risks [4]. Software risks can be classified as internal or external. Risks that come from factors within the organization are called internal risks, whereas external risks come from outside the organization and are difficult to control. Internal risks include project risks, process risks, and product risks. External risks generally involve the business relationship with the vendor, technical risks, customer satisfaction, political stability, and so on. In general, there are so many risks in software engineering that it is very difficult or impossible to identify all of them.

Table I Types of Risks

<table>
 <thead>
  <tr><th>RISKS</th><th>DESCRIPTION</th></tr>
 </thead>
 <tbody>
  <tr><td>Software requirement risks</td><td>Lack of analysis for change of requirements</td></tr>
  <tr><td></td><td>Change extension of requirements</td></tr>
  <tr><td></td><td>Lack of report for requirements</td></tr>
  <tr><td></td><td>Poor definition of requirements</td></tr>
  <tr><td></td><td>Ambiguity of requirements</td></tr>
  <tr><td></td><td>Change of requirements</td></tr>
  <tr><td></td><td>Invalid requirements</td></tr>
  <tr><td>Software cost risks</td><td>Lack of good estimation in projects</td></tr>
  <tr><td></td><td>Unrealistic schedule</td></tr>
  <tr><td></td><td>The hardware does not work well</td></tr>
  <tr><td></td><td>Lack of testing</td></tr>
  <tr><td></td><td>Lack of monitoring</td></tr>
  <tr><td></td><td>Complexity of architecture</td></tr>
  <tr><td></td><td>Management change, technology change, and environment change</td></tr>
  <tr><td></td><td>Lack of reassessment of management cycle</td></tr>
  <tr><td>Software scheduling risks</td><td>Inadequate budget</td></tr>
  <tr><td></td><td>Change of requirements and extension of requirements</td></tr>
  <tr><td></td><td>Human errors</td></tr>
  <tr><td></td><td>Inadequate knowledge about tools and techniques</td></tr>
  <tr><td></td><td>Long-term training for personnel</td></tr>
  <tr><td></td><td>Lack of employment of manager experience</td></tr>
  <tr><td></td><td>Lack of enough skill</td></tr>
  <tr><td></td><td>Lack of good estimation in projects</td></tr>
  <tr><td>Software quality risks</td><td>Inadequate documentation</td></tr>
  <tr><td></td><td>Lack of project standard</td></tr>
  <tr><td></td><td>Inadequate budget</td></tr>
  <tr><td></td><td>Human errors</td></tr>
  <tr><td></td><td>Unrealistic schedule</td></tr>
  <tr><td></td><td>Poor definition of requirements</td></tr>
  <tr><td></td><td>Lack of enough skill</td></tr>
  <tr><td></td><td>Lack of testing and good estimation in projects</td></tr>
 </tbody>
</table>

III. RISK ENGINEERING

The objective of risk management is to avoid or minimize the adverse effects of unforeseen events by avoiding the risks or drawing up contingency plans for dealing with them. There are a number of models for risk management; they typically identify two main components: risk identification and risk management. An often-used model, shown in Fig. 3.1, presents a task breakdown structure of risk engineering.

![Risk engineering task breakdown structure](image)

A. Software development risk management processes:

As shown in Fig. 3.2, software development risk management is an eight-step process during the initial phases of the project.
When any new risks are identified throughout the project, a five-step inner process is used to improve earlier estimates and judgments continuously. Despite the inherent risks associated with software development projects, there are strong indicators that these risks can be managed successfully. Research on failed software projects showed that "their problems could have been avoided or strongly reduced if there had been an explicit early concern with identifying and resolving their high-risk elements". Effective risk management is the most important management tool a project manager can employ to increase the likelihood of project success. Since risk management is not widely used and understood, it could be a significant competitive advantage to those that implement risk management processes in their projects. A large number of processes have been generated in recent years to address the need for more effective risk management.

B. Risk Identification:

Risk identification consists of listing all of the risks that can adversely affect the successful execution of the project. In the risk identification step, the team systematically enumerates as many project risks as possible to make them explicit before they become problems. The first stage in any risk assessment exercise is to identify the hazards that might affect the duration or resource costs of the project. A hazard is an event that might occur and will, if it does occur, create a problem for the successful completion of the project. For example, the illness of a team member is a hazard that might result in the problem of late delivery of a component. The late delivery of that component is likely to have an effect on other activities and might, particularly if it is on the critical path, put the project completion date at risk [11].

A common way of identifying hazards is to use a checklist listing all the possible hazards and the factors that influence them. Some hazards are generic risks – that is, they are relevant to all software projects – and standard checklists, augmented from an analysis of past projects, can be used to identify them. Some risks are identified in Fig. 3.3.

**Figure 3.2 Soft Risk Model**

**Figure 3.3 General categories of risk**

Generic risks are potential threats to every software project. Some examples of generic risks are changing requirements, losing key personnel, or bankruptcy of the software company or of the customer. It is advisable for a development organization to keep a checklist of these types of risks. Teams can then assess the extent to which these risks are a factor for their project, based upon the known set of programmers, managers, customers, and policies. Product-specific risks can be distinguished from generic risks because they can only be identified by those with a clear understanding of the technology, the people, and the environment of the specific product. An example of a product-specific risk is the availability of a complex network necessary for testing.

Generic and product-specific risks can be further divided into project, product, and business risks. Project risks are those that affect the project schedule or the resources (personnel or budget) dedicated to the project. Product risks are those that affect the quality or performance of the software being developed. Finally, business risks are those that threaten the viability of the software, such as building an excellent product no one wants or building a product that no longer fits into the overall business strategy of the company.
The categories of factors that will need to be considered include the following:

a) Application factors: The nature of the application – whether it is a simple data-processing application, a safety-critical system, or a large distributed system with real-time elements – is likely to be a critical factor. The expected size of the application is also important: the larger the system, the greater the likelihood of errors and of communication and management problems.

b) Staff factors: The experience and skills of the staff involved are clearly major factors; an experienced programmer is less likely to make errors than one with little experience.

c) Project factors: It is important that the project and its objectives are well defined and that they are absolutely clear to all members of the project team and all key stakeholders. Any possibility that this is not the case will pose a risk to the success of the project.

d) Hardware/software factors: A project that requires new hardware for development is likely to pose a higher risk than one where the software can be developed on existing hardware. Where a system is developed on one type of hardware or software platform to be used on another, there might be additional risks at installation.

e) Supplier factors: The extent to which a project relies on external organizations that cannot be directly controlled often influences the project's success. Delays in, for example, the installation of telephone lines or the delivery of equipment may be difficult to avoid.

f) Environment factors: Changes in the environment can affect a project's success. A significant change in the taxation regulations, for example, could have serious consequences for the development of an application.

C. Risk estimation:

Having identified the risks that might affect our project, we need some way of assessing their importance. Risk estimation consists of assessing the likelihood and impact of each hazard. The probability of a hazard occurring is known as the risk likelihood; the effect that the resulting problem will have on the project, if it occurs, is known as the risk impact; and the importance of the risk is known as the risk value or risk exposure. The risk value is calculated using (1):

\[ \text{risk exposure} = \text{risk likelihood} \times \text{risk impact} \quad (1) \]

Ideally the risk impact is estimated in monetary terms and the likelihood assessed as a probability. The risk exposures of the various risks can then be compared with each other to assess the relative importance of each risk, and they can be compared directly with the costs and likelihoods of success of various contingency plans.

D. Risk Evaluation:

Risk evaluation consists of ranking the risks and determining risk aversion strategies. Many risk managers use a simple scoring method to provide a quantitative measure for assessing each risk. Some just categorize likelihoods and impacts as high, medium or low, but this form of ranking does not allow the calculation of a risk exposure. A better and popular approach is to score the likelihood and impact on a scale of, say, 1 to 10, where the hazard most likely to occur receives a score of 10 and the least likely a score of 1. Impact scoring methods must take into account the total risk to the project, including the following potential costs:

- the cost of delays to scheduled dates for deliverables;
- cost overruns caused by using additional or more expensive resources;
- the costs incurred or implicit in any compromise to the system's quality or functionality.
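As an illustration only (the scores below are invented, using the 1-to-10 scale just described), suppose one hazard is scored 8 for likelihood and 5 for impact, while a second is scored 3 and 9:

\[ RE_1 = 8 \times 5 = 40, \qquad RE_2 = 3 \times 9 = 27 \]

The first hazard therefore has the higher exposure and would be ranked above the second, even though the second has the larger impact.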
Managing risk involves the use of two strategies: first, reducing the risk exposure by reducing the likelihood or the impact; secondly, drawing up contingency plans to deal with the risk should it occur.

The analyzed risks are organized into a risk table. The template for a risk table is shown in Table II. The information to be provided in each of the columns is as follows:

a) Rank of the risk.
b) Risk is the description of the risk itself.
c) Probability is the likelihood of the risk occurring, using either a numeric or a categorical scale, as discussed in the last section.
d) Impact is the magnitude of the loss if the risk were to occur, using either a numeric or a categorical scale.
e) Rank last week and the number of weeks on the list are documented so the team can monitor changes in priority, to determine whether actions are being taken that cause changes in the stature of the risk.
f) Action documents what the team is doing to manage the risk. The action field is often not completed until the risks have been prioritized [11].

<table>
 <thead>
  <tr><th>Rank</th><th>Risk</th><th>Probability</th><th>Impact</th><th>Rank last week</th><th>Action</th></tr>
 </thead>
</table>

Some risks, once recognised, can be reduced or avoided immediately with very little cost or effort, and it is sensible to take action on these regardless of their risk value. For other risks we need to compare the costs of taking action with the benefits of reducing the risk. One method for doing this is to calculate the risk reduction leverage (RRL) using (2):

\[ RRL = \frac{RE_{before} - RE_{after}}{\text{risk reduction cost}} \quad (2) \]

where \(RE_{before}\) is the original risk exposure value, \(RE_{after}\) is the expected risk exposure value after taking the action, and the risk reduction cost is the cost of implementing the risk reduction action. If the values are expected monetary values, then an RRL greater than one indicates that we can expect to gain from implementing the risk reduction plan, because the expected reduction in risk exposure is greater than the cost of the plan. In either case, the higher the leverage value for a risk, the more worthwhile it will be to plan the risk reduction action.

E. Risk Planning:

Risk planning consists of drawing up contingency plans and, where appropriate, adding these to the project's structure. With small projects, risk planning is likely to be the responsibility of the project manager, but medium or large projects will benefit from the appointment of a full-time risk manager. The following are some examples of the kinds of risk planning actions that can take place.

a. Information buying: Perceived risk can be reduced by obtaining more information through investigation. For example, in a project in which the use of a new technology has created risk, the team can invest some money to learn about the technology. Throw-away prototypes can be developed using the new technology to educate some of the staff and to assess the fit of the new technology for the product.

b. Contingency plans: A contingency plan describes what to do if certain risks materialize. By planning ahead with such a plan, you are prepared and have a strategy in place to deal with the issue.

c. Risk reduction: For example, if the team is concerned that the use of a new programming language may cause a schedule delay, the budget might contain a line item entitled "potential schedule" to cover a potential schedule slip. Because the budget already covers the potential slip, the financial risk to the organization is reduced.
Alternatively, the team can plan to employ inspections to reduce the risk of quality problems.

d. Risk acceptance: Sometimes the organization consciously chooses to live with the consequences of the risk [12] and the results of the potential loss. In this case, no action is planned.

F. Risk Control:

Risk control concerns the main functions of the risk manager in minimising and reacting to problems throughout the project. This function includes aspects of quality control in addition to dealing with problems as they occur. There are five strategies for risk control:

a) Hazard prevention: Some hazards can be prevented from occurring, or their likelihood reduced to insignificant levels.

b) Likelihood reduction: Some risks, while they cannot be prevented, can have their likelihoods reduced by prior planning. The risk of late changes to a requirements specification can, for example, be reduced by prototyping.

c) Risk avoidance: A project can be protected from the risk of overrunning the schedule by increasing duration estimates or reducing functionality.

d) Risk transfer: The impact of some risks can be transferred away from the project by contracting out or by taking out insurance.

e) Contingency planning: Some risks are not preventable, and contingency plans will need to be drawn up to reduce the impact should the hazard occur. A project manager should draw up contingency plans for using agency programmers to minimise the impact of any unplanned absence of programming staff.

G. Risk Monitoring:

Risk monitoring must be an ongoing activity, as the importance and likelihood of particular risks can change as the project proceeds. After risks are identified, analyzed, and prioritized, and actions are established, it is essential that the team regularly monitor the progress of the product and the resolution of the risk items, taking corrective action when necessary. This monitoring can be done as part of the team's project management activities or via explicit risk management activities. Teams often monitor their "top 10 risks" on a regular basis. Risks need to be revisited at regular intervals so the team can re-evaluate each risk and determine when new circumstances have caused its probability and/or impact to change. At each interval, some risks may be added to the list and others taken away. Risks need to be reprioritized to see which move "above the line" and need action plans, and which move "below the line" and no longer need action plans. A key to successful risk management is that proactive actions are owned by individuals and are monitored [13]. As time passes and more is learned about the project, the information gained may alter the risk profile considerably. Additionally, time may make it possible to refine a risk into a set of more detailed risks. These refined risks may be easier to mitigate, monitor, and manage.

H. Risk Directing:

Risk directing and staffing are concerned with the day-to-day management of risk. Risk aversion and problem strategies frequently involve the use of additional staff, and this must be planned for and directed.

IV. CONCLUSION

The most important thing for a software project is to stay focused on its critical success factors. Although software risk management is a daunting task, organizations that implement effective processes prove to be successful, while those that fail in this effort are unsuccessful. The nature of software projects creates many risks that must be managed diligently to avoid the common pitfalls of many projects.
For various reasons, including the influence of previous document-driven software management guidelines, projects get focused on activities that are not critical to their success. We can take steps such as ranking the project's most significant risk items and establishing a regular schedule for higher-management reviews of the project's progress in order to keep track of the major risk factors. A formal risk management process is recommended to manage the complex issues associated with software development projects. Many risk management processes have been created to aid organizations, but integrating those processes into organizations has not always been successful. An effective risk management process will succeed by changing the organizational culture to motivate the individual. We observe that software risk management still takes a back seat, and more focus should be placed on it. To handle all of the complex people-oriented and technology-driven success factors involved in software projects, a great measure of human judgment is required.

V. REFERENCES

development. IEEE Transactions on Software Engineering 17 (6), 582–590.
[9] www2.latech.edu/.../Risk%20Management%20in%20Software%20Engineering...
{"Source-Url": "http://www.ijarcs.info/index.php/Ijarcs/article/download/1872/1860", "len_cl100k_base": 5810, "olmocr-version": "0.1.49", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 18684, "total-output-tokens": 6303, "length": "2e12", "weborganizer": {"__label__adult": 0.0003561973571777344, "__label__art_design": 0.00030112266540527344, "__label__crime_law": 0.0003879070281982422, "__label__education_jobs": 0.0016469955444335938, "__label__entertainment": 5.1915645599365234e-05, "__label__fashion_beauty": 0.00014460086822509766, "__label__finance_business": 0.0007910728454589844, "__label__food_dining": 0.0003285408020019531, "__label__games": 0.0004901885986328125, "__label__hardware": 0.0003676414489746094, "__label__health": 0.0004227161407470703, "__label__history": 0.00012862682342529297, "__label__home_hobbies": 6.836652755737305e-05, "__label__industrial": 0.0002532005310058594, "__label__literature": 0.00024437904357910156, "__label__politics": 0.0002008676528930664, "__label__religion": 0.0003025531768798828, "__label__science_tech": 0.0033111572265625, "__label__social_life": 0.00010186433792114258, "__label__software": 0.005420684814453125, "__label__software_dev": 0.98388671875, "__label__sports_fitness": 0.0002460479736328125, "__label__transportation": 0.00029397010803222656, "__label__travel": 0.00015175342559814453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31935, 0.0109]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31935, 0.55834]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31935, 0.94471]], "google_gemma-3-12b-it_contains_pii": [[0, 5611, false], [5611, 11203, null], [11203, 15567, null], [15567, 20061, null], [20061, 25338, null], [25338, 31150, null], [31150, 31935, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5611, true], [5611, 11203, null], [11203, 15567, null], [15567, 20061, null], [20061, 25338, null], [25338, 31150, null], [31150, 31935, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31935, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31935, null]], "pdf_page_numbers": [[0, 5611, 1], [5611, 11203, 2], [11203, 15567, 3], [15567, 20061, 4], [20061, 25338, 5], [25338, 31150, 6], [31150, 31935, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31935, 0.21472]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
41fe31027afcf2cf37a487026bf061ff93a517e3
ASE 12.0 Changes
» Parallel and Serial Sort/Merge Joins
» Smart Transformation of WHERE Clause Predicates
» Improved Selectivity Estimation for LIKE Predicates
» Join Transitive Closure
» New Outer Join Syntax and Logic
» Abstract Query Plans
» Support for up to 50 tables in a join clause
» Execute Immediate

From Query Text to Query Results
- Pre-optimization
  - Join Transitive Closure
  - ANSI Compliant Outer Joins
  - Predicate Transformation
- Optimization
  - Improved costing of "%XXX" like clauses
  - Abstract Query Plans
- Query Execution
  - Sort-Merge Joins
  - 50 table limit
  - Execute Immediate

Work in progress
- In ASE 11.9.x the optimizer was re-written:
  - sysstatistics and systabstats replaced distribution pages and provided a much greater level of detail on data distribution across the table
- In the next release of ASE, the replacement for the query execution engine will be fully implemented
- ASE 12.0 contains:
  - the first phase of the replacement of the query execution engine
  - new query execution possibilities
  - increased intelligence in pre-optimization processing of queries

What do the icons mean?
» New method of calculating costs when generating the query plan
» Typically due to additional information being made available from pre-optimization processing of the query
» Performance enhancement
» Due to new query execution options that process the data more efficiently
» New query execution functionality
» New methods of query execution to provide increased efficiency in the way that data is accessed and reduce the number of I/Os that are required

Does it all go faster?
» Whilst many of the changes have been implemented for performance reasons, some provide new functionality that could not be supported before
» Other changes were made to ensure that Partner products are fully supported
» Some of the changes, when used, add to the time taken to optimize queries (maybe significantly). These are cases where Abstract Query Plans may provide additional benefits
» The intention is that nothing that is currently implemented should go slower

From Query Text to Query Results
- Pre-optimization
  - Join Transitive Closure
  - ANSI Compliant Outer Joins
  - Predicate Transformation
- Optimization
  - Improved costing of "%XXX" like clauses
  - Abstract Query Plans
- Query Execution
  - Sort-Merge Joins
  - 50 table limit
  - Execute Immediate

Join Transitive Closure
- Provides the optimizer with additional join paths and, hopefully, faster plans.
- Example:
  - `select A.a from A, B, C where A.a = B.b and B.b = C.c`
  - Adds "and A.a = C.c" to the query
  - Adds join orders BAC, BCA, ACB, CAB
  - A new join order may be the cheapest
- SARG transitive closure was added in ASE 11.5 – and guess what – it is still there!!!!
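For comparison, a short sketch of the SARG transitive closure mentioned above (the tables and columns are illustrative, following the A/B naming used in the join example; this is not a slide from the deck):

```sql
-- Original query: the constant predicate is written only against A
select A.a
from   A, B
where  A.a = B.b
  and  A.a = 10

-- With SARG transitive closure the optimizer can also consider
--     and B.b = 10
-- so an index on B.b becomes a usable access path as well
```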
Join Transitive Closure - Join Transitive Closure is not considered for: - Non-equi-joins (A.a > B.b) - Joins that include expressions (A.a = B.b + 1) - Joins under an OR expression - Outer Joins (A.a =* B.b) - Joins in subqueries - Joins used for view check or referential check constraints - Joins between different type columns (e.g., int = smallint) --- ANSI Joins - The Pre-ASE 12.0 outer join syntax (*=, =*) does not have clearly defined semantics - ANSI SQL92 specifies a new join syntax with clearly defined semantics - ASE 12.0 implements ANSI joins such that ALL outer joins (even those expressed in TSQL) have clearly defined semantics Example - Inner Joins » TSQL Inner Join » SELECT title, price FROM titles, salesdetail WHERE titles.title_id = salesdetail.title_id AND titles.price > 22.0 » ANSI Inner Join » SELECT title, price FROM titles INNER JOIN salesdetail ON titles.title_id = salesdetail.title_id AND titles.price > 22.0 Example - Outer Joins » TSQL Outer Join » SELECT title, price FROM titles, salesdetail WHERE titles.title_id *= salesdetail.title_id AND titles.price > 22.0 » ANSI Outer Join » SELECT title, price FROM titles LEFT OUTER JOIN salesdetail ON titles.title_id = salesdetail.title_id WHERE titles.price > 22.0 ANSI Join Terminology » Left and right outer joins » In a left join, the outer table and inner table are the left and right tables, respectively » The outer table and inner table are also referred to as the row-preserving and null-supplying tables, respectively » In a right join, the outer table and inner table are the right and left tables, respectively » In both of the following, T2 is the inner table » T1 left join T2 » T2 right join T1 Nested Joins » The left or right member of an ANSI join can be another ANSI join » Order of evaluation is determined by the position of ON clause » `select * from tname left join taddress ON tname.empid = taddress.empid left join temployee ON taddress.deptid = temployee.deptid` » `select * from tname left join taddress left join temployee ON taddress.deptid = temployee.deptid ON tname.empid = taddress.empid` » Parentheses only improve readability - they do not affect the order the join statements are evaluated in » `select * from (tname left join taddress ON tname.empid = taddress.empid) left join temployee ON taddress.deptid = temployee.deptid` Name Scoping Rules » The ON clause condition can reference columns from: » Table references directly introduced in the joined table itself » Table references that are contained in the ANSI join » Tables introduced in outer query blocks (i.e. - the ANSI outer join appears in a subquery). 
» The ON clause condition cannot reference: » Tables introduced in a containing outer join » Comma separated tables or joined tables in the from-list » Example - the following is not allowed: » select * from (titles left join titleauthor on titles.title_id=roysched.title_id) left join roysched on titleauthor.title_id=roysched.title_id where titles.title_id != "PS7777" Ambiguous TSQL Outer Joins (Continued) » In ASE 12.0, TSQL outer joins are converted to ANSI joins » For example, the TSQL query: » select * from T1, T2, T3 where T1.id *= T2.id and (T1.id = T3.id) and (T2.empno = 100 or T3.dept = 6) » is transformed internally to: » select * from T1 left join T2 on T1.id = T2.id, T3 where T1.id = T3.id and (T2.empno = 100 or T3.dept=6) » Query has same possible join orders as in pre-ASE12.0, but the OR clause will always be evaluated with WHERE clause » In ASE 12.0, an inner table can evaluate both ON and WHERE clause predicates Views & Outer Joins » Prior to 12.0, views containing outer joins and views referenced in outer join queries might not be merged. » Example: ``` create view VOJ1 as select o.c1, i.b1 from t3 o, t2 i where o.c1 *= i.b1 select * from t4, VOJ1 where t4.d1 = VOJ1.c1 and (VOJ1.b1 = 77 or VOJ1.b1 IS NULL) ``` » In 12.0, these types of queries can now be merged. » Better Performance » More join orders and indexing strategies possible. Predicate Transformation » Significant performance improvement in queries with limited access paths (i.e. very few possible SARGS/Joins/OR’s that can be used to qualify rows in a table) » Additional optimization achieved by generating new search paths based on » join conditions » search clauses » optimizable OR clauses » Full cartesian joins are avoided for some of the complex queries. Example » Example query: » select * from lineitem, part » where (p_partkey = l_partkey and l_quantity >= 10) » or (p_partkey = l_partkey and l_quantity <= 20) » Above query is transformed to the following: » select * from lineitem, part » where ((p_partkey = l_partkey and l_quantity >= 10) » or (p_partkey = l_partkey and l_quantity <= 20)) » and (p_partkey = l_partkey) » and (l_quantity >= 10 or l_quantity <= 20) Predicate Transformation Internals » New processing phase introduced in the compiler » just before the start of the optimizer » in the ‘decision’ module » The main driver routine performs the following: » identifies whether a set of disjuncts (minimum 2) are present at the top level of a query or part of a single AND statement » for each set of disjuncts, the predicates within it are classified into join, search and OR clauses » data structures are set up to point to the relevant predicates which are later factored out » (*) disjuncts - clauses on either side of an OR statement Predicate Transformation Internals » New conjuncts are created by suitable transformation of the collected predicates » These conjuncts are then added at the top level to the original search condition » Compilation is suppressed for » any new conjunct added by predicate factoring and transformation, which does not get selected as an access path (by optimizer) » (*) conjuncts - clauses separated by AND statements, typically SARG and Join clauses From Query Text to Query Results » Pre-optimization » Join Transitive Closure » ANSI Compliant Outer Joins » Predicate Transformation » Optimization » Improved costing of “%XXX” like clauses » Abstract Query Plans » Query Execution » Sort-Merge Joins » 50 table limit » Execute Immediate LIKE » Change to costing for LIKE clauses that are 
not migrated into SARG’s » Provides better row estimates, resulting in better query plans. » Example » `select ... from part, partsupp, lineitem where l_partkey = p_partkey and l_partkey = ps_partkey and p_title = '%Topographic%'` Better Selectivity Estimates For Like Clauses » New scheme to improve selectivity and qualifying row estimate » The LIKE string is compared with histogram cell boundaries » For every match, weight of the cell is added to selectivity estimates » If there are matches » The total of selectivity estimates * the number of rows in the table = estimated qualifying rows » If there are no matches » Estimated as 1 / # of cells in the histogram » This also applies to queries with LIKE clauses of the type » like “_abc”, or like “[ ]abc” Abstract Query Plans » What could go wrong with the Optimizer? » Statistics may not apply to the data that is now in the table » The query plan used for a stored procedure may not be applicable to the query at hand » The buffer cache model and the actual buffer cache usage at run time could differ » These issues are caused by: » Modeling for a different data skew » Modeling for a different usage skew » Data distribution unknown at development time, e.g.: » Densities » Magic numbers » What average for the density Can Better Be Worse Than Good? » What happens to the installed base when the optimizer is enhanced? » Most find it better » Some find it worse… » One solution to all these problems would be to implement rules based optimization. However: » Rule based decisions could be sub-optimal as they require the developer to have a knowledge of the eventual data layout » Developers very often have very little knowledge of how to write efficient query plans » The overhead on development of using Rules Based Optimization is massive » The assumed heuristics are not always right Curing Unexpected Behavior » What are the options for improving the optimizer and getting rid of unexpected behavior? » Implementing a better and more dynamic cost model » Implementing some form of extremely flexible rules based optimization » Allowing good query plans to be captured and re-used Abstract Query Plans » An abstract query plan is a persistent, human readable description of a query plan, that’s associated to a SQL statement » It is not syntactically part of the statement » The description language is a relational algebra » Possible to specify only a partial plan, where the optimizer completes the plan generation » Stored in a system catalog `sysqueryplans` » Persistent across: » connections » Server versions (i.e. upgrades) Where will AQP’s be used? » Application providers don’t want to include vendor specific syntax in their queries » In general, users don’t want to modify a production application to solve an upgrade optimizer problem » Still, it’s possible to include them if so desired » Example: » `select c1 from t1 where c2 = 0 plan '(I_scan () t1)'` How are the plans created? » Abstract query plans are captured and reused: » `set plan dump 'new_plans_group' on` » `set plan load 'new_plans_group'` » When the capture mode is enabled, all queries are stored, together with their generated abstract query plan, in SYSQUERYPLANS » Abstract query plan administration commands are available, allowing to create, delete or modify individual plans and groups What Do Abstract Plans Look Like? » Full plan examples: » select * from t1 where c=0 (i_scan c_index t1) » Instructs the optimizer to » perform an index scan on table t1 using the c_index index. 
» select * from t1, t2 where t1.c = t2.c and t1.c = 0 (nl_g_join (i_scan i1 t1) (i_scan i2 t2)) » Instructs the optimizer to: » perform a nested loop join with table t1 outer to t2 » perform an index scan on table t1 using the i1 index » perform an index scan on table t2 using the i2 index What Do Abstract Plans Look Like? (Continued) » Partial plan examples: » select * from t1 where c=0 (i_scan t1) » Instructs the optimizer to » perform an index scan on t1. » select * from t1, t2 where t1.c = t2.c and t1.c = 0 (t_scan t2) » Instructs the optimizer to » access t2 via a table scan. » select c11 from t1, t2 where t1.c12 = t2.c21 (prop t1 (parallel 1)) » Instructs the optimizer not to access t1 in parallel. From Query Text to Query Results - Pre-optimization - Join Transitive Closure - ANSI Compliant Outer Joins - Predicate Transformation - Optimization - Improved costing of "%XXX" like clauses - Abstract Query Plans - Query Execution - Sort-Merge Joins - 50 table limit - Execute Immediate Why sort-merge joins? - Ordered joins provide clustered access to joining rows; result in less logical and physical I/Os. - Can exploit indexes that pre-order rows on joining columns. - Sort Merge Join Algorithm - Often Better Performance for DW/DSS Queries Than Nested Loop Join of ASE Today Example ``` select ... from part, partsupp, lineitem where p_partkey = ps_partkey and ps_partkey = l_partkey and ps_orderkey = l_orderkey and p_type = 'CD' ``` Merge Join Internals ``` Table T1 where T1.pk = T2.pk Table T2 ``` Copyright 1998-1999, Sybase, Inc. - Do Not Copy Or Distribute The type of Merge Join selected depends on the join keys and available indexes - Merge Joins in ASE 12.0 are broken into four distinct types: - Full Merge Join - Left Merge Join - Right Merge Join - Sort Merge Join - There are actually eight Merge Joins possibilities since each one of the above Merge Join types can also be done in parallel **Full Merge Join** One step process Scan the indexes on the join keys for both tables and merge the results **Full Merge Join** - Both tables to be joined have useful indexes on the join keys - No sorting is needed - The tables can be easily merged by following the indexes - The index guarantees that the data can be accessed in a sorted manner by following the index leaf - Full Merge Joins are only possible for the outermost pair of tables in the join order - Thus, if the join order is \{R,S,T,U\}, only R and S can be joined via a Full Merge Join **Left Merge Join** 1. **Step 1** - Create and populate the worktable 2. 
**Step 2** - Sort the worktable and merge with the outer (left) table - **LMJ** Left Merge Join » The table the Optimizer has chosen to be the inner does not have a useful index on the join column » The inner (right) table must be first sorted into a worktable » A useful index with the necessary ordering from the left (outer) side is used to perform the merge join » Left Merge Joins are only possible for the outermost pair of tables in the join order » Thus, if the join order is \{R,S,T,U\}, only R and S can be joined via a Left Merge Join Right Merge Join Step 1 - Create and populate the worktable Step 2 - Sort the worktable and merge with the inner (right) table RMJ **Right Merge Join** » The table the Optimizer has chosen to be the outer does not have a useful index on the join column » The outer (left) table must be first sorted into a worktable » A useful index with the necessary ordering from the right (inner) side is used to perform the merge join **Sort-Merge Join** - **Step 1**: Worktable1 - Create and populate the worktables - **Step 2**: Worktable2 - Table S - **Step 3**: SMJ - Sort - Sort the worktables and merge the results - Worktable1 - Worktable2 Sort-Merge Join Neither table has an index on the join column, or the Optimizer’s costing algorithm has determined (based upon its cost calculation) that it is cheaper to “reformat” - This involves the base table being read into a worktable which is created with the required indexes - This method is chosen for Merge Joins when a useful index is not available - The worktable is then sorted - Subsequent joins are to the worktable, not the base table In the case of a Sort-Merge join, the Optimizer has determined that the base tables must both be sorted into worktables and then merged Cost Model Historically, the costing for join selection set is: - # of pgs for retrieval of a row from the inner table * number of qualifying rows in the outer table For sort merge join the Logical I/O cost is estimated as below: - outer_lio = cost of scanning outer table - inner_lio = # duplicates in outer * (join selection set + index height ) + ( # unique values in outer * (join selection set) ) Restrictions on Sort/Merge Joins » Merge Join not selected for the following cases » Subqueries (not outer query block) » Update statements » Outer Joins » Referential Integrity » Remote Tables » Cursor statements 50 Table Limit » Number of user tables in a query has been increased to make it possible for users to run queries with a large number of non-flattened subqueries. » Increase maximum number of non-RI tables per query » from 16 user tables and 12 work tables » to 50 user tables and 14 work tables » Not designed for 50 tables in the “from . . . . “ clause Are you nesting loops 50 deep? » In one respect the answer is yes, but this functionality is not designed to be used this way » Sort-merge will provide major performance improvements if you are » Short circuiting means that the number of tables actually accessed is reduced in most cases » Additional tables require configuration of auxiliary scan descriptors » previously these were only used for RI » now extended to support additional tables when more than 16 are accessed 50 Table Limit » What did not change? 
» Pre-allocated scan descriptors per process (16 non-RI user, 12 non-RI work, 20 system, 0 RI) » Maximum subqueries per query (16) » Maximum RI tables per query (192 RI user and 192 RI work) » Maximum user tables under all sides of a UNION (256) » Default “number of aux scan descriptors” per server (200) » Default number of tables considered at a time for 2 to 25 joining tables (4) » Note: for 25 - 37 and 38 - 50 tables this number decreases 50 Table Limit » What else changed? » Maximum auxiliary scan descriptors per process increased from 384 to 454 (192 RI user + 192 RI work + 34 non-RI user + 2 non-RI work + 34 system) » Default number of tables considered at a time by the optimizer when generating the query plan decreased to 3 for 26 to 37 joining tables, 2 for 38 to 50 joining tables » If you use set tablecount to change the number of tables considered, set tablecount 0 will reset it to the above behavior. Execute Immediate » Execute Immediate command is formed by materialising the command string. » The command string is materialised by concatenating the “string literals” and the values of the variables” » The variables can be filled at runtime as seen in the examples above. » Syntax » exec ( {str_constant | str_var} [+ {str_constant | str_var}] ... ) Enables variable syntax if required » Can be used: » inside procedures to query tables and columns specified as arguments to the procedure. » in ISQL scripts, where a batch queries tables or columns from the database and then constructs a query on the fly using those table and column names. » Example » » declare @tabname char(100) » select @tabname = b.authortable » from books b » where b.publisher = 'randomhouse' » exec ( " select authors from " + @tabname ) Static and Dynamic Context » Static Context :- » The context in which queries outside of execute immediate but within the same batch are executed. » Dynamic context :- » The context in which queries enclosed in an execute immediate command are executed. Static and Dynamic Context (continued) » Objects created in static scope can be referenced in dynamic scope. » create table tab1 exec ("select * from tab1") » Objects created in dynamic scope cannot be referenced in static scope. » exec (" create table tab1 ") select * from tab1 » Objects created in dynamic scope can be referenced in subsequent dynamic scope. » exec (" create table tab1 ") exec (" select * from tab1 ") Security Issues » Security is paramount, therefore permission checking is » Example » as user1 » create proc p1 @anyquery char(255) as <do a pile of stuff> exec (@anyquery) go » as user2 » p1 " select * from tab1" go » user2 will get an error if user2 does not have permissions on tab1. Restrictions » Only char and varchar variables can be used in the command string. » Certain commands are disallowed: » transaction commands (begin, end, abort) » database connection commands (use, connect) » set commands » dbcc commands » Execute immediate is not reentrant. Where/how it cannot be used? » White spaces are not automatically added into the string formed by concatenation. » exec ("select" + "" + "from" + "tab1") looks like "select" + "from" + "tab1" » It does not replace select command: » insert into t exec (" select * from tab1 ") » Within a quoted string, references to variables declared in the static scope are not allowed. » create proc p @tab char(30), @col char(30), @res int as exec ("select @res " + " from " + " @tab ") Where it cannot be used? 
(continued) » Cursor, Temp table visibility :- » In current implementation, cursors, temporary tables and variables are bound to the proc_hdr. » Execute Immediate creates a new proc_hdr to execute the commands and destroys the proc_hdr on completion. » Cursors, temporary tables are not carried over from the dynamic context to the static context. From Query Text to Query Results » Pre-optimization » Join Transitive Closure » ANSI Compliant Outer Joins » Predicate Transformation » Optimization » Improved costing of “%XXX” like clauses » Abstract Query Plans » Query Execution » Sort-Merge Joins » 50 table limit » Execute Immediate Summary » Pre-optimisation » Intelligent and improved pre-processing of queries provides the optimizer with more options in the production of the optimal query plan » Optimization » Increased use of existing statistics » Uncertainty over Query Plan changes when ASE is upgraded or when new implementation performed no longer occurs » Query Execution » New, more efficient, join strategies available » Much more complex SQL supported » “On the fly” SQL now possible
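As a companion to the merge-join material above, here is a minimal Python sketch of the merge step for the "full merge join" case, where both inputs already arrive sorted on the join key. It is illustrative pseudocode under that assumption, not ASE's implementation; the row representation (plain dictionaries) and the `merge_join` name are invented for the example.

```python
def merge_join(left, right, key):
    """Join two row lists already sorted on `key` (the 'full merge join' case).

    Duplicate join values are handled by joining each run of equal keys on
    the left with the matching run on the right.
    """
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][key], right[j][key]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            # Find the run of equal keys on each side, emit the cross product.
            i_end = i
            while i_end < len(left) and left[i_end][key] == lk:
                i_end += 1
            j_end = j
            while j_end < len(right) and right[j_end][key] == rk:
                j_end += 1
            for l in left[i:i_end]:
                for r in right[j:j_end]:
                    out.append({**l, **r})
            i, j = i_end, j_end
    return out

if __name__ == "__main__":
    part = [{"p_partkey": 1, "p_type": "CD"}, {"p_partkey": 2, "p_type": "DVD"}]
    lineitem = [{"l_partkey": 1, "l_qty": 10}, {"l_partkey": 1, "l_qty": 5},
                {"l_partkey": 3, "l_qty": 7}]
    # Join on part.p_partkey = lineitem.l_partkey; copy the key so names line up.
    rows = merge_join(part,
                      [dict(r, p_partkey=r["l_partkey"]) for r in lineitem],
                      key="p_partkey")
    print(rows)
```

Each run of duplicate key values on one side is joined against the matching run on the other side, which is what makes the ordered access pattern cheaper in logical and physical I/O than a nested-loop probe per outer row.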
{"Source-Url": "http://www.csd.uoc.gr/~hy460/pdf/Sybase%20SQL%20Server%2011.1.pdf", "len_cl100k_base": 5914, "olmocr-version": "0.1.50", "pdf-total-pages": 31, "total-fallback-pages": 0, "total-input-tokens": 49730, "total-output-tokens": 7312, "length": "2e12", "weborganizer": {"__label__adult": 0.0003306865692138672, "__label__art_design": 0.00019931793212890625, "__label__crime_law": 0.0003361701965332031, "__label__education_jobs": 0.0006437301635742188, "__label__entertainment": 8.445978164672852e-05, "__label__fashion_beauty": 0.00011795759201049803, "__label__finance_business": 0.00044345855712890625, "__label__food_dining": 0.00031757354736328125, "__label__games": 0.000911235809326172, "__label__hardware": 0.0005269050598144531, "__label__health": 0.0002446174621582031, "__label__history": 0.00021088123321533203, "__label__home_hobbies": 6.467103958129883e-05, "__label__industrial": 0.0004169940948486328, "__label__literature": 0.00020825862884521484, "__label__politics": 0.00016605854034423828, "__label__religion": 0.00034499168395996094, "__label__science_tech": 0.014373779296875, "__label__social_life": 7.861852645874023e-05, "__label__software": 0.06884765625, "__label__software_dev": 0.91064453125, "__label__sports_fitness": 0.0002148151397705078, "__label__transportation": 0.0002627372741699219, "__label__travel": 0.00020706653594970703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23934, 0.01738]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23934, 0.13033]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23934, 0.84217]], "google_gemma-3-12b-it_contains_pii": [[0, 311, false], [311, 1133, null], [1133, 2122, null], [2122, 2814, null], [2814, 3480, null], [3480, 4175, null], [4175, 5388, null], [5388, 6693, null], [6693, 7541, null], [7541, 8596, null], [8596, 9358, null], [9358, 10217, null], [10217, 11359, null], [11359, 12119, null], [12119, 12878, null], [12878, 13906, null], [13906, 14508, null], [14508, 14801, null], [14801, 15265, null], [15265, 15888, null], [15888, 16500, null], [16500, 17019, null], [17019, 18021, null], [18021, 18609, null], [18609, 19595, null], [19595, 20436, null], [20436, 21180, null], [21180, 21978, null], [21978, 22764, null], [22764, 23449, null], [23449, 23934, null]], "google_gemma-3-12b-it_is_public_document": [[0, 311, true], [311, 1133, null], [1133, 2122, null], [2122, 2814, null], [2814, 3480, null], [3480, 4175, null], [4175, 5388, null], [5388, 6693, null], [6693, 7541, null], [7541, 8596, null], [8596, 9358, null], [9358, 10217, null], [10217, 11359, null], [11359, 12119, null], [12119, 12878, null], [12878, 13906, null], [13906, 14508, null], [14508, 14801, null], [14801, 15265, null], [15265, 15888, null], [15888, 16500, null], [16500, 17019, null], [17019, 18021, null], [18021, 18609, null], [18609, 19595, null], [19595, 20436, null], [20436, 21180, null], [21180, 21978, null], [21978, 22764, null], [22764, 23449, null], [23449, 23934, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23934, 
null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23934, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23934, null]], "pdf_page_numbers": [[0, 311, 1], [311, 1133, 2], [1133, 2122, 3], [2122, 2814, 4], [2814, 3480, 5], [3480, 4175, 6], [4175, 5388, 7], [5388, 6693, 8], [6693, 7541, 9], [7541, 8596, 10], [8596, 9358, 11], [9358, 10217, 12], [10217, 11359, 13], [11359, 12119, 14], [12119, 12878, 15], [12878, 13906, 16], [13906, 14508, 17], [14508, 14801, 18], [14801, 15265, 19], [15265, 15888, 20], [15888, 16500, 21], [16500, 17019, 22], [17019, 18021, 23], [18021, 18609, 24], [18609, 19595, 25], [19595, 20436, 26], [20436, 21180, 27], [21180, 21978, 28], [21978, 22764, 29], [22764, 23449, 30], [23449, 23934, 31]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23934, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
0555d0f6e5045dc000a448a3634ed5b759962d3e
Login Name ___________________ Name ___________________ Student ID ___________________ Signature ___________________ Final CSE 131 Winter 2012 Page 1 ___________ (29 points) Page 2 ___________ (30 points) Page 3 ___________ (24 points) Page 4 ___________ (38 points) Page 5 ___________ (21 points) Page 6 ___________ (38 points) Page 7 ___________ (27 points) Page 8 ___________ (23 points) Page 9 ___________ (23 points) Page 10 ___________ (22 points) Page 11 ___________ (17 points) Subtotal ___________ (292 points) = 100% Page 12 ___________ (17 points) [6% Extra Credit] Extra Credit Total ___________ 1. Given the following CUP grammar snippet (assuming all other Lexing and terminals are correct): ```java class Expr { Des AssignOp { System.out.println("1"); } Expr { System.out.println("2"); } | Des { System.out.println("3"); } | T_STAR { System.out.println("4"); } Des { System.out.println("5"); } | T_PLUSPLUS { System.out.println("6"); } Des { System.out.println("7"); } | T_AMPERSAND { System.out.println("8"); } Des { System.out.println("9"); } | Des2 { System.out.println("10"); } | Des2 { System.out.println("11"); } T_PLUSPLUS { System.out.println("12"); } | Des3 { System.out.println("13"); } | T_ID { System.out.println("14"); } | T_ASSIGN { System.out.println("15"); } | } ``` What is the output when parsing the following statement (you should have 26 lines/numbers in your output): ``` **x = *y++ = &z ``` In the above grammar, what is the associativity of the operator in the first production rule (the `Expr ::=` rule)? If variable `z` is defined to be type `int`, what types must variables `x` and `y` be defined to be for this statement to be semantically correct? ``` _____________ x; _____________ y; ``` 2. Given the following Reduced-C code fragment: ```c function : int foo( int & x, int * y, int z ) { /* Body of code not important for this question */ } function : int main() { int a = 8675309; int b; int c = a; b = foo( a, &b, c ); return b; } ``` Complete the SPARC Assembly language statements that might be emitted by a compliant Reduced-C compiler from this quarter for function main(). Allocate, store, and access all local variables on the Stack. See comments. ```assembly .section __________ .global __________ .align 4 __________: set _________________, %g1 save _________________, %g1, _________________ /* Initialize the local variables that have explicit initialization in this stack frame */ set _________________, %o0 st %o0, _________________ ! int a = 8675309; ld _________________, %o0 st %o0, _________________ ! int c = a; /* Set up the 3 actual arguments to foo() */ ______ _________________, %o0 ! large blank can be one or two operands ______ _________________, %o1 ______ _________________, %o2 call foo ! Call function foo() ______ st _________________, [%fp - 16] ! Save return value into local temp1 /* Copy saved return value stored in temp1 into local var b */ ______ [%fp - 16], _________________ ______ _________________, _________________ ! b = foo( ... ); /* return b; */ ld _________________, _________________ ! return b; ______ MAIN_SAVE = -(92 + ______) ________ ________ ! Save space for 3 local vars + 1 temp ``` 3. In object-oriented languages like Java, determining which method code/instructions to bind to (to execute) is done at run time rather than at compile time (this is known as dynamic dispatch or dynamic binding). However, the name-mangled symbol denoting a particular method name is determined at compile time. 
Given the following Java class definitions, specify the output of each print() method invocation. ```java public class Overloading_Final_Exam { public static void main (String [] args) { Earth element1 = new Earth(); Earth element2 = new Wind(); Earth element3 = new Fire(); Wind element4 = new Wind(); Wind element5 = new Fire(); Fire element6 = new Fire(); element1.print( element1 ); element2.print( element2 ); element3.print( element3 ); element4.print( element4 ); element5.print( element5 ); element6.print( element6 ); element1.print( (Earth) element6 ); element2.print( (Wind) element6 ); element3.print( (Fire) element6 ); } } ``` ```java class Earth { public void print(Earth p) { System.out.println("Earth 1"); } } ``` ```java class Wind extends Earth { public void print(Earth p) { System.out.println("Wind 1"); } public void print(Wind p) { System.out.println("Wind 2"); } } ``` ```java class Fire extends Wind { public void print(Earth p) { System.out.println("Fire 1"); } public void print(Wind p) { System.out.println("Fire 2"); } public void print(Fire p) { System.out.println("Fire 3"); } } ``` Now remove the entire print(Earth p) {} method in class Wind and remove the entire print(Wind p) {} method in class Fire. Specify the output of each print() method with these changes below. 4. Fill in the blanks of the following Reduced-C program with correct types to test if your global scope resolution operator works correctly. If it does, this program should compile without error. If it does not, this program should generate an assignment error at the line \( y = ::x; \) ```c int x; function : int main() { int y; y = ::x; // If :: working, this line will not cause an error! // If :: not working, this line will cause an error! return 0; } ``` In Reduced-C (which follows closely the real C standard) all typedefs use _____________ name equivalence. Struct operations (like \( =, ==, != \)) use _____________ name equivalence. In RC (and C/C++), we do not support the assignment of an entire array to another array (of the same type) using the assignment operator. However, we do support assignment of an entire struct instance to another struct instance of the same type. Using this fact, fill in the template of the code below, allowing arrays to piggy-back on a struct type to simulate entire-array assignments that are semantically and logically correct. ```c structdef INTARR5 { int [5] a; } int [5] x, y; function : void foo() { // \( x = y \) would be a semantic error, but ... the following will assign all elements of \( y \) into \( x \) ______ _____________________ ______ x _______ = ______ _____________________ ______ y ______; } ``` Given the definitions below, indicate whether each expression is either a A) Modifiable L-val B) Non-Modifiable L-val C) R-val ```c function : int & foo1() { /* Function body not important. */ } function : int * foo2() { /* Function body not important. */ } const int x = 5; int y; int[5] a; int *p = &y; ____ a[2] ____ &y ____ a ____ x ____ x + y ____ p ____ *p ____ *&p ____ &p ____ y ____ 42 ____ (float *)p ____ *(float *)p ____ (float *)&y ____ *(float *)&y ____ ::y ____ foo1() ____ foo2() ____ foo1()++ ____ y = *foo2() ____ *p++ ____ ++*p ____ *+p ____ --*+p ____ ++*p-- ``` What is Rick's favorite cheese? ____________________________________ 5. What gets printed in the following C++ program (just like Reduced-C without "function : " in front of each function definition)? 
If a value is unknown/undefined or otherwise cannot be determined by the code given, put a question mark ("?") for that output. Hint: Draw stack frames! ```cpp int a = 2; int b = 4; int c = 6; int mo; int & fubar( int x, int & y, int * z ) { static int m = x; x = x + 3; y = y + 3; *z = *z + 3; mo = ++m; return x; } void foo1( int & d, int * e, int f ) { d = d + 2; *e = *e + 2; f = f + 2; cout << a << endl; ________ cout << b << endl; ________ cout << c << endl; ________ cout << d << endl; ________ cout << *e << endl; ________ cout << f << endl; ________ cout << mo << endl; ________ cout << fubar( d, d, &d ) << endl; ________ cout << fubar( *e, *e, e ) << endl; ________ cout << fubar( f, f, &f ) << endl; ________ cout << a << endl; ________ cout << b << endl; ________ cout << c << endl; ________ cout << d << endl; ________ cout << *e << endl; ________ cout << f << endl; ________ cout << mo << endl; ________ } int main() { foo1( a, &b, c ); cout << a << endl; ________ cout << b << endl; ________ cout << c << endl; ________ cout << mo << endl; ________ return 0; } ``` 6. Using the load/load/compute/store and internal static variable paradigms recommended in class and discussion sections, complete the SPARC Assembly language statements that might be emitted by a compliant Reduced-C compiler from this quarter for function foo(). Store all formal params on the Stack. ```c function: int foo( int *x, int y, int & z ) { static int c = z; *x = c - y; return z; } ``` ```assembly .Ll: ! Perform *x = c - y; block ! c - y set ____________, %o0 ______ [__o0], %o0 ! c ld ________, %o1 ! y ______ %o0, %o1, %o0 ! c - y ! tmp2 <- (c - y) st ________, [%fp - 8] ! previous result from tmp2 ld ________, [%fp - 8], %o0 ! get param x ld ________, %o1 ! *x = c - y; (store tmp2 into *x) ______ %o0, ________ ! return z; ld ________, %o0 ld ________, %o0 ______ %o0, ________ __________ ! save space for 2 temporaries on stack foo.SAVE = -(92 + _____) _____ _____``` 7. Given the C array declaration \[ \text{int } a[2][4]; \] Mark with an A the memory location(s) where we would find \[ a[1][2] \] Each box represents a byte in memory. Using the Right-Left rule write the C definition of a variable named foo that is a pointer to an array of 9 elements where each element is a pointer to a function that takes a pointer to a struct RT as the single parameter and returns a pointer to a 3x17 2-D array where each element is a pointer to a pointer to a struct Fubar. Identify whether each of the following will cause an underlying bit pattern change. \[ \begin{align*} \text{int } a &= 5; \\ \text{float } b &= -4.20; \\ \text{int } * \text{ ptr1}; \\ \text{float } * \text{ ptr2}; \\ \text{void } \text{foo( float x, float } &\text{ y ) } \{ \text{ /* ... */ } \} \end{align*} \] \[ \begin{align*} b &= a; \\ \text{ptr1} &= \text{(int } *) \& b; \\ a &= \text{(int)} \ b; \end{align*} \] A) Yes – Underlying bit pattern change B) No – No underlying bit pattern change What are the values of \(a\) and \(b\) after the following Reduced-C statements? \[ \begin{align*} \text{bool } a &= \text{false}; \\ \text{bool } b &= \text{true } || (a = \text{true}); \end{align*} \] Value of \(a\) is \__________ Value of \(b\) is \__________ Name the part of the compilation sequence which performs each of the following. 
\[ \begin{align*} \text{______________________ } &\text{ takes an executable file on disk and makes it ready to execute in memory.} \\ \text{______________________ } &\text{ zero fills the BSS segment in memory.} \\ \text{______________________ } &\text{ puts globally defined symbols in the export list of the resulting object file.} \\ \text{______________________ } &\text{ translates assembly code into machine code.} \\ \text{______________________ } &\text{ combines all object modules into a single executable file.} \\ \text{______________________ } &\text{ resolves undefined external symbols with defined global symbols in other modules.} \end{align*} \] Variables declared to be \__________ will not be optimized by the compiler. 8. Given the following program, specify the order of the output lines when run and sorted by the address printed with the %p format specifier on a Sun SPARC Unix and Linux system. For example, which line will print the lowest memory address, then the next higher memory address, etc. up to the highest memory address? ```c #include <stdio.h> #include <stdlib.h> void foo1( int *, int ); /* Function Prototype */ void foo2( int, int * ); /* Function Prototype */ int a; int main( int argc, char *argv[] ) { int b; double c; foo2( a, &b ); /* 1 */ (void) printf( "1: argc --> %p\n", &argc ); /* 2 */ (void) printf( "2: c --> %p\n", &c ); /* 3 */ (void) printf( "3: argv --> %p\n", &argv ); /* 4 */ (void) printf( "4: malloc --> %p\n", malloc(50) ); /* 5 */ (void) printf( "5: b --> %p\n", &b ); } void foo1( int *d, int e ) { static struct foo {int a; int b;} f = { 1, 2 }; int g; /* 6 */ (void) printf( "6: f.b --> %p\n", &f.b ); /* 7 */ (void) printf( "7: d --> %p\n", &d ); /* 8 */ (void) printf( "8: e --> %p\n", &e ); /* 9 */ (void) printf( "9: f.a --> %p\n", &f.a ); /* 10 */ (void) printf( "10: foo2 --> %p\n", foo2 ); /* 11 */ (void) printf( "11: g --> %p\n", &g ); } void foo2( int h, int *i ) { int j = 411; int k[3]; foo1( i, j ); /* 12 */ (void) printf( "12: k[1] --> %p\n", &k[1] ); /* 13 */ (void) printf( "13: h --> %p\n", &h ); /* 14 */ (void) printf( "14: a --> %p\n", &a ); /* 15 */ (void) printf( "15: i --> %p\n", &i ); /* 16 */ (void) printf( "16: k[0] --> %p\n", &k[0] ); /* 17 */ (void) printf( "17: j --> %p\n", &j ); } ``` You are compiling foo1.c and foo2.c together with gcc. If foo1.c has a global variable definition ```c int a = 42; ``` indicate whether each of the following would cause a linkage editor error or not if put in foo2.c? - A) Yes - Linkage Editor Error - B) No - No Linkage Editor Error ```c ___ int a = 42; ___ extern void a( char * ); ___ extern char a; ___ int a( float b ) { return (int)b; } ___ static double a; ___ static int a( float b ) { return (int)b; } ``` 9. Pick one of the following numbers to answer the questions below related to the cdecl calling convention covered in class. 
1) Prologue (in callee) 2) Epilogue (in callee) 3) Pre-Call/Call (in caller) 4) Post-Return (in caller) _____ Allocates space for return value _____ Restores caller-save registers _____ Copies actual arguments into argument space _____ Saves registers in callee-save scheme _____ Allocates space for actual arguments _____ Saves %pc into the return address location _____ Stores return value into return value location _____ Retrieves saved return address for return _____ Allocates space for local variables & temps _____ Performs initialization of local variables _____ Saves registers in caller-save scheme _____ Restores callee-save registers _____ Retrieves return value from return value location _____ Deallocates argument space in cdecl mode _____ Copies params passed in regs to param stack space _____ Deallocates local variable & temps space Many experienced programmers prefer to use pre-increment/pre-decrement to perform a stand-alone inc/dec of a variable. For example, ++i; or for (i = 0; i < SIZE; ++i) Why might a pre-increment/pre-decrement be preferred for these seasoned programmers? Think in terms of code gen from your compiler. Given the following C type definitions ```c struct foo { short a; char b; double c; int d; }; struct fubar { int e; char f[6]; struct foo g; int h; }; ``` struct fubar fubaz; What is the `sizeof(struct fubar)`? _____ What is the `offsetof(struct fubar, g.d)`? _____ If `struct fubar` had been defined as `union fubar` instead, what would be the `sizeof(union fubar)`? _____ What is the resulting type of the following expression? ```c *(int *) & ((struct fubar *) & fubaz.g.c) -> g ``` Write the equivalent expression that directly accesses this value/memory location without all the fancy casting and & operators. ```c fubaz. ``` 10. Identify where each of the following program parts live in the Java runtime environment as discussed in class. ```java public class Foo { private static Foo a; private int b; public Foo() { a = this; ++b; } public static void main( String[] args ) { double c = 4.20; Foo d; d = new Foo(); d.method( c ); } private void method( double e ) { double f = e; } } ``` Write a short, simple Reduced-C program to show how you tested pass-by-reference parameters in your compiler. You will need at least a main() and a function (let's call it foo()) that takes a pass-by-reference parameter. What output would you expect if your compiler implemented pass-by-reference correctly? 11. Use the letters A through D to indicate when you would expect to see each error listed below (assuming a compiled, not an interpreted, language). (A) compile-time (B) link-time (C) load-time (D) run-time _____ Error message: Left-hand side is not a modifiable l-value. _____ An "array-index-out-of-bounds" error using a non-constant index expression. _____ An "array-index-out-of-bounds" error using a constant-valued index expression. _____ Undeclared identifier "foo". _____ Segmentation fault. _____ Running "gcc someModule.o" gives the message "Undefined reference to 'main'". _____ Non-addressable argument of type %T to address-of operator. Use virtual register notation for each of the following. Change the following instruction into three instructions which are most likely a time improvement over the single instruction when it comes to actual execution time. ``` r2 = r4 * 258 ``` What term describes this particular kind of peephole optimization? 
Change the following instruction into another single instruction which is most likely a time improvement over the current instruction when it comes to actual execution time. ``` r1 = 16 % 3 ``` What term describes this particular kind of peephole optimization? Change the following instructions into two instructions which are most likely a time improvement over the set of instructions when it comes to actual execution time. ``` r1 = r2 * r5 r3 = r1 r6 = r2 * r5 r4 = r6 r1 = ... r6 = ... ``` What terms describe these particular kinds of peephole optimizations? List two that apply. 1) 2) 12. Extra Credit What gets printed when this C program is executed? ```c #include <stdio.h> int main() { char a[] = "Build!"; char *p = a + 3; printf( "%c\n", *p-- ); ______ printf( "%c\n", **p-- ); ______ printf( "%c\n", 2[a++ ] ); ______ printf( "%c\n", p[-1] = *(a+5) ); ______ printf( "%c\n", **p++ ); ______ printf( "%c\n", +++p ); ______ printf( "%d\n", p - a ); ______ printf( "%s\n", a ); _____________________ return 0; } ``` What gets printed if the following function is invoked as `recurse( 2, 10 )`? (Draw stack frames to help.) ```c int recurse( int a, int b ) { int local = b - a; int result; printf( "%d\n", local ); if ( b > 7 ) result = local + recurse( a, b - 1 ); else result = local; printf( "%d\n", result ); return result; } ``` Crossword Puzzle (next page) (1 point) Hexadecimal - Character | 00 NUL | 01 SOH | 02 STX | 03 ETX | 04 EOT | 05 ENQ | 06 ACK | 07 BEL | | 08 BS | 09 HT | 0A NL | 0B VT | 0C NP | 0D CR | 0E SO | 0F SI | | 10 DLE | 11 DC1 | 12 DC2 | 13 DC3 | 14 DC4 | 15 NAK | 16 SYN | 17 ETB | | 18 CAN | 19 EM | 1A SUB | 1B ESC | 1C FS | 1D GS | 1E RS | 1F US | | 20 SP | 21 ! | 22 " | 23 # | 24 $ | 25 % | 26 & | 27 ’ | | 28 ( | 29 ) | 2A * | 2B + | 2C , | 2D - | 2E . | 2F / | | 30 0 | 31 1 | 32 2 | 33 3 | 34 4 | 35 5 | 36 6 | 37 7 | | 38 8 | 39 9 | 3A : | 3B ; | 3C < | 3D = | 3E > | 3F ? | | 40 @ | 41 A | 42 B | 43 C | 44 D | 45 E | 46 F | 47 G | | 48 H | 49 I | 4A J | 4B K | 4C L | 4D M | 4E N | 4F O | | 50 P | 51 Q | 52 R | 53 S | 54 T | 55 U | 56 V | 57 W | | 58 X | 59 Y | 5A Z | 5B [ | 5C \ | 5D ] | 5E ^ | 5F _ | | 60 ` | 61 a | 62 b | 63 c | 64 d | 65 e | 66 f | 67 g | | 68 h | 69 i | 6A j | 6B k | 6C l | 6D m | 6E n | 6F o | | 70 p | 71 q | 72 r | 73 s | 74 t | 75 u | 76 v | 77 w | | 78 x | 79 y | 7A z | 7B { | 7C | | 7D } | 7E ~ | 7F DEL | A portion of the Operator Precedence Table **Operator** | **Associativity** --- | --- ++ postfix increment | L to R -- postfix decrement [] array element () function call ----------------------------- * indirection | R to L ++ prefix increment -- prefix decrement & address-of sizeof size of type/object (type) type cast ----------------------------- * multiplication | L to R / division % modulus ----------------------------- + addition | L to R - subtraction ----------------------------- = assignment | R to L
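The last questions above concern peephole optimizations such as constant folding and strength reduction. Purely as an illustration of those terms (not as an answer key for the exam), the following toy Python pass applies the two rewrites to a made-up three-address form; every name and the instruction encoding are hypothetical.

```python
def peephole(instrs):
    """Toy peephole pass over (dest, op, a, b) three-address tuples.

    Applies two classic rewrites: constant folding (evaluate an op whose
    operands are both literals) and strength reduction (replace a multiply
    by a power of two with a shift).  Illustrative only.
    """
    out = []
    for dest, op, a, b in instrs:
        if op == "*" and isinstance(a, int) and isinstance(b, int):
            out.append((dest, "const", a * b, None))          # constant folding
        elif op == "*" and isinstance(b, int) and b > 0 and b & (b - 1) == 0:
            out.append((dest, "<<", a, b.bit_length() - 1))   # strength reduction
        else:
            out.append((dest, op, a, b))
    return out

if __name__ == "__main__":
    print(peephole([("r1", "*", 16, 3),        # folds to ("r1", "const", 48, None)
                    ("r2", "*", "r4", 256)]))  # reduces to ("r2", "<<", "r4", 8)
```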
{"Source-Url": "http://cseweb.ucsd.edu/~ricko/CSE131/Final.wi12.pdf", "len_cl100k_base": 6113, "olmocr-version": "0.1.48", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 30426, "total-output-tokens": 7414, "length": "2e12", "weborganizer": {"__label__adult": 0.0005192756652832031, "__label__art_design": 0.00034236907958984375, "__label__crime_law": 0.0003008842468261719, "__label__education_jobs": 0.006092071533203125, "__label__entertainment": 9.65595245361328e-05, "__label__fashion_beauty": 0.0002040863037109375, "__label__finance_business": 0.00017333030700683594, "__label__food_dining": 0.0005235671997070312, "__label__games": 0.001110076904296875, "__label__hardware": 0.001346588134765625, "__label__health": 0.00040435791015625, "__label__history": 0.00028586387634277344, "__label__home_hobbies": 0.00018203258514404297, "__label__industrial": 0.0005793571472167969, "__label__literature": 0.00040030479431152344, "__label__politics": 0.00025343894958496094, "__label__religion": 0.0007185935974121094, "__label__science_tech": 0.00661468505859375, "__label__social_life": 0.0002168416976928711, "__label__software": 0.0040435791015625, "__label__software_dev": 0.97412109375, "__label__sports_fitness": 0.0005125999450683594, "__label__transportation": 0.0007777214050292969, "__label__travel": 0.0002841949462890625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20704, 0.03107]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20704, 0.48331]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20704, 0.5782]], "google_gemma-3-12b-it_contains_pii": [[0, 626, false], [626, 1798, null], [1798, 3358, null], [3358, 5205, null], [5205, 7259, null], [7259, 8626, null], [8626, 9556, null], [9556, 11656, null], [11656, 13781, null], [13781, 15747, null], [15747, 16514, null], [16514, 18092, null], [18092, 18977, null], [18977, 20704, null], [20704, 20704, null], [20704, 20704, null]], "google_gemma-3-12b-it_is_public_document": [[0, 626, false], [626, 1798, null], [1798, 3358, null], [3358, 5205, null], [5205, 7259, null], [7259, 8626, null], [8626, 9556, null], [9556, 11656, null], [11656, 13781, null], [13781, 15747, null], [15747, 16514, null], [16514, 18092, null], [18092, 18977, null], [18977, 20704, null], [20704, 20704, null], [20704, 20704, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 20704, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20704, null]], "pdf_page_numbers": [[0, 626, 1], [626, 1798, 2], [1798, 3358, 3], [3358, 5205, 4], [5205, 7259, 5], [7259, 8626, 6], [8626, 9556, 7], [9556, 11656, 8], [11656, 13781, 9], [13781, 15747, 10], [15747, 16514, 11], [16514, 
18092, 12], [18092, 18977, 13], [18977, 20704, 14], [20704, 20704, 15], [20704, 20704, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20704, 0.03065]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
1176dc0c9b7c7b049b5cf8c0a588ced08ff54cb0
CSE 564: Computer Graphics Graphics Foundation Klaus Mueller Computer Science Department Stony Brook University Surface Graphics - Objects are explicitly defined by a surface or boundary representation (explicit inside vs outside) - This boundary representation can be given by: - a mesh of polygons: - 200 polys - 1,000 polys - 15,000 polys - a mesh of spline patches: - an “empty” foot Polygon Mesh Definitions v1, v2, v3: vertices (3D coordinates) e1, e2, e3: edges e1 = v2 - v1 and e2 = v3 - v2 f1: polygon or face n1: face normal \[ n1 = \frac{e1 \times e2}{|e1 \times e2|} \] n1 = \frac{e_{11} \times e_{12}}{|e_{11} \times e_{12}|} n2 = \frac{e_{21} \times e_{22}}{|e_{21} \times e_{22}|}, e_{21} = -e_{12} Rule: if all edge vectors in a face are ordered counterclockwise, then the face normal vectors will always point towards the outside of the object. This enables quick removal of back-faces (back-faces are the faces hidden from the viewer): - back-face condition: \( vp \cdot n > 0 \) Polygons Mesh Data Structure - **Vertex list** \((v1, v2, v3, v4, \ldots)\): \[(x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4), \ldots\] - **Edge list** \((e1, e2, e3, e4, e5, \ldots)\): \[(v1, v2), (v2, v3), (v3, v1), (v1, v4), (v4, v2), \ldots\] - **Face list** \((f1, f2, \ldots)\): \[(e1, e2, e3), (e4, e5, -e1), \ldots\ \text{or}\ \ (v1, v2, v3), (v1, v4, v2), \ldots\] - **Normal list** \((n1, n2, \ldots)\), one per face or per vertex \[(n1x, n1y, n1z), (n2x, n2y, n2z), \ldots\] - Use Pointers or indices into vertex and edge list arrays, when appropriate Basic Transformations - Translation and Scale Translation: translate by $T_x$ along the $x$-axis translate by $T_y$ along the $y$-axis $$x' = x + T_x$$ $$y' = y + T_y$$ Scale: scale by $S_x$ along the $x$-axis scale by $S_y$ along the $y$-axis $$x' = S_x \cdot x$$ $$y' = S_y \cdot y$$ If $S_x = S_y$ then scaling is uniform $S < 1$ shrinks, $S > 1$ enlarges the object Note: we always scale about the origin Translate (4, 2) Scale (0.5, 2) Basic Transformations - Rotation A point is represented by polar coordinates \((r, \varphi)\): \[ \begin{align*} x &= r \cos(\varphi) \\ y &= r \sin(\varphi) \end{align*} \] In this notation, a point after rotation is at: \[ \begin{align*} x' &= r \cos(\varphi + \theta) \\ y' &= r \sin(\varphi + \theta) \end{align*} \] Using trigonometric identities we get: \[ \begin{align*} x' &= r \cos(\varphi) \cos(\theta) - r \sin(\varphi) \sin(\theta) \\ y' &= r \sin(\varphi) \cos(\theta) + r \cos(\varphi) \sin(\theta) \end{align*} \] We know that: \[ \begin{align*} x &= r \cos(\varphi) \quad \text{and} \quad y = r \sin(\varphi) \end{align*} \] We can plug this expression into the previous ones: \[ \begin{align*} x' &= x \cos(\theta) - y \sin(\theta) \\ y' &= x \sin(\theta) + y \cos(\theta) \end{align*} \] Note: If \(\theta > 0\) then the rotation is counter-clockwise. Matrix Notation and Extension to 3D • Scale: \[ \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} sx & 0 & 0 \\ 0 & sy & 0 \\ 0 & 0 & sz \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \] • Rotation about the z-axis: \[ \begin{bmatrix} x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} \] • What about translation? - recall, we’re adding Tx, Ty, and Tz ..... 
without multiplying by a coordinate • Solution: use homogenous coordinates \[ \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \] Transformations in Homogenous Coordinates - Translation (T): \[ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & Tx \\ 0 & 1 & 0 & Ty \\ 0 & 0 & 1 & Tz \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \] - Scale (S): \[ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} sx & 0 & 0 & 0 \\ 0 & sy & 0 & 0 \\ 0 & 0 & sz & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \] - Rotation about the z-axis (Rz): \[ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta & 0 & 0 \\ \sin \theta & \cos \theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \] - Rotation about the x-axis (Rx): \[ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta & 0 \\ 0 & \sin \theta & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \] - Rotation about the y-axis (Ry): \[ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} \] Combining Transformations - When an object is transformed, all its vertices $v_i$ need to be transformed to $v'_i$: \[ v'_i = T \cdot R_z \cdot S \cdot v_i = [T \cdot R_z \cdot S] \cdot v_i = M_t \cdot v_i \] Combining the transformations into composite matrix $M_t$ minimizes the matrix-vector calculations. Transformation About an Arbitrary Point in Space - The standard matrices given in the past few slides only allow you to rotate and scale an object about the (world) origin (Note: translation is an exception) - What if you wanted to rotate or scale an object around an arbitrary point in space, say its center? 
\[ v_i' = T_2 \cdot R_z \cdot T_1 \cdot v_i = [T_2 \cdot R_z \cdot T_1] \cdot v_i = M_{r\text{-arbitrary\_point}} \cdot v_i \] A view is specified by: - eye position (Eye) - view direction vector (n) - screen center position (Cop) - screen orientation (u, v) - screen width W, height H u, v, n are orthonormal vectors After the viewing transform: - the screen center is at the coordinate system origin - the screen is aligned with the x, y-axis - the viewing vector points down the negative z-axis - the eye is on the positive z-axis All objects are transformed by the viewing transform Step 1: Viewing Transform • The sequence of transformations is: - *translate* the screen Center Of Projection (COP) to the coordinate system origin ($T_{\text{view}}$) - *rotate* the translated screen such that the view direction vector $n$ points down the negative $z$-axis and the screen vectors $u$, $v$ are aligned with the $x$, $y$-axis ($R_{\text{view}}$) • We get $M_{\text{view}} = R_{\text{view}} \cdot T_{\text{view}}$ • We transform all object (points, vertices) by $M_{\text{view}}$: $$ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} u_x & u_y & u_z & 0 \\ v_x & v_y & v_z & 0 \\ n_x & n_y & n_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 & -\text{Cop}_x \\ 0 & 1 & 0 & -\text{Cop}_y \\ 0 & 0 & 1 & -\text{Cop}_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} $$ • Now the objects are easy to project since the screen is in a convenient position - but first we have to account for perspective distortion... Step 2: Perspective Projection A (view-transformed) vertex with coordinates \((x', y', z')\) projects onto the screen as follows: \[ y_p = y' \cdot \frac{\text{eye}}{\text{eye} - z'} \] \[ x_p = x' \cdot \frac{\text{eye}}{\text{eye} - z'} \] - \(x_p\) and \(y_p\) can be used to determine the screen coordinates of the object point (i.e., where to plot the point on the screen) Step 1 + Step 2 = World-To-Screen Transform - Perspective projection can also be captured in a matrix $M_{\text{proj}}$ with a subsequent *perspective divide* by the homogenous coordinate $w$: $$ \begin{bmatrix} x_h \\ y_h \\ z_h \\ w \end{bmatrix} = \begin{bmatrix} \text{eye} & 0 & 0 & 0 \\ 0 & \text{eye} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & -1 & \text{eye} \end{bmatrix} \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} $$ $$ \begin{align*} x_p &= \frac{x_h}{w} \\ y_p &= \frac{y_h}{w} \end{align*} $$ - So the entire *world-to-screen* transform is: $$ M_{\text{trans}} = M_{\text{proj}} \cdot M_{\text{view}} = M_{\text{proj}} \cdot R_{\text{view}} \cdot T_{\text{view}} $$ with a subsequent divide by the homogenous coordinate - $M_{\text{trans}}$ is composed only once per view and all object points (vertices) are multiplied by it Step 3: Window Transform (1) - Note: our camera screen is still described in world coordinates - However, our display monitor is described on a pixel raster of size (Nx, Ny) - The transformation of (perspective) viewing coordinates into pixel coordinates is called *window transform* - Assume: - we want to display the rendered screen image in a window of size (Nx, Ny) pixels - the width and height of the camera screen in world coordinates is (W, H) - the center of the camera is at the center of the screen coordinate system - Then: - the valid range of object coordinates is (-W/2 ... +W/2, -H/2 ... +H/2) - these have to be mapped into (0 ... Nx-1, 0 ... 
Ny-1): \[ x_s = \left( x_p + \frac{W}{2} \right) \cdot \frac{N_x - 1}{W} \quad \quad \quad y_s = \left( y_p + \frac{H}{2} \right) \cdot \frac{N_y - 1}{H} \] Step 3: Window Transform (2) - The window transform can be written as the matrix $M_{\text{window}}$: $$ \begin{bmatrix} x_s \\ y_s \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{N_x - 1}{W} & 0 & \frac{W}{2} \\ 0 & \frac{N_y - 1}{H} & \frac{H}{2} \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} $$ - After the perspective divide, all object points (vertices) are multiplied by $M_{\text{window}}$ - Note: we could figure the window transform into $M_{\text{trans}}$ - in that case, there is only one matrix multiply per object point (vertex) with a subsequent perspective divide - the OpenGL graphics pipeline does this Orthographic (Parallel) Projection - Leave out the perspective mapping (step 2) in the viewing pipeline - In orthographic projection, all object points project along parallel lines onto the screen Rendering the Polygonal Objects - The Hidden Surface Removal Problem - We have removed all faces that are *definitely* hidden: the back-faces - But even the surviving faces are only *potentially* visible - they may be obscured by faces closer to the viewer face A of object 1 is partially obscured by face B of object 2 - Problem of identifying those face portions that are visible is called the *hidden surface problem* - Solutions: - pre-ordering of the faces and subdivision into their visible parts before display (expensive) - the z-buffer algorithm (cheap, fast, implementable in hardware) The Z-Buffer (Depth-Buffer) Scan Conversion Algorithm - Two data structures: - z-buffer: holds for each image pixel the z-coordinate of the closest object so far - color-buffer: holds for each pixel the closest object’s color - Basic z-buffer algorithm: ```c // initialize buffers for all (x, y) z-buffer(x, y) = -infinity; color-buffer(x, y) = color_background // scan convert each front-face polygon for each front-face poly for each scanline y that traverses projected poly for each pixel x in scanline y and projected poly if z_poly(x, y) > z-buffer(x, y) z-buffer(x, y) = z_poly(x, y) color-buffer(x, y) = color_poly(x, y) ``` ![Diagram](image_url) Illumination Total light decomposition Light = reflected + transmitted + absorbed Reflected light Reflected light = ambient + diffuse + specular \[ I = I_a + I_d + I_s \] Illumination - Examples - ambient - ambient + diffuse - ambient + diffuse + specular (and a checkerboard) Ambient Reflection - Uniform background light - $I_a = k_a I_A$ - $I_A$: ambient light - $k_a$: material’s ambient reflection coefficient - Models general level of brightness in the scene - Accounts for light effects that are difficult to compute (secondary diffuse reflections, etc) - Constant for all surfaces of a particular object and the directions it is viewed at Diffuse Reflection - Models dullness, roughness of a surface - Equal light scattering in all directions - For example, chalk is a diffuse reflector \[ I_d = k_d I_L \cos \varphi = k_d I_L \mathbf{N} \cdot \mathbf{L} \] Lambertian cosine law: \[ I_d = k_d I_L \cos \varphi = k_d I_L \mathbf{N} \cdot \mathbf{L} \] \[\mathbf{L} = \frac{\mathbf{Light} - \mathbf{P}}{|\mathbf{Light} - \mathbf{P}|} = \frac{\mathbf{Light}_x - P_x}{|\mathbf{L'}|}, \frac{\mathbf{Light}_y - P_y}{|\mathbf{L'}|}, \frac{\mathbf{Light}_z - P_z}{|\mathbf{L'}|} \] \[|\mathbf{L'}| = \sqrt{(\mathbf{Light}_x - P_x)^2 + (\mathbf{Light}_y - P_y)^2 + (\mathbf{Light}_z - P_z)^2} \] \[ 
Specular Reflection - Fundamentals
- Models reflections on shiny surfaces (polished metal, chrome, plastics, etc.)
- Ideal specular reflector (perfect mirror) reflects light only along reflection vector $R$
- Non-ideal reflectors reflect light in a lobe centered about $R$
- $\cos(\alpha)$ raised to the Phong exponent $n_s$ models this lobe effect; $n_s$ controls the width of the lobe

### Phong specular reflection model:
$$I_s = k_s I_L \cos^{n_s} \alpha = k_s I_L (E \cdot R)^{n_s}$$
- $I_L$: intensity of lightsource
- $L$: light vector
- $R$: reflection vector $= 2 N (N \cdot L) - L$
- $E$: eye vector $= (\text{Eye}-P) / |\text{Eye}-P|$
- $\alpha$: angle between $E$ and $R$
- $n_s$: Phong exponent
- $k_s$: specular reflection coefficient
- $n_s = \infty$: perfect mirror
- $n_s$ large (100): shiny surface
- $n_s$ small (8): dull surface

Specular and Diffuse Reflection - Varying the Coefficients
(figure: renderings with varying diffuse coefficient $k_d$ and Phong exponent $n_s$)

Specular Reflection - Using the Half Vector
- Sometimes the half vector H is used instead of R in the specular lighting calculation
- Both alternatives have similar effects

Phong specular reflection model with the half vector:
\[ I_s = k_s \, I_L \, \cos^{n_s} \beta = k_s \, I_L \, (H \cdot N)^{n_s} \]
- \( I_L \): intensity of lightsource
- \( L \): light vector
- \( H \): half vector = \( (L + E) / |L + E| \)
- \( R \): reflection vector
- \( E \): eye vector
- \( \beta \): angle between \( H \) and \( N \)

Total Reflected Light
- Total reflected light (for a white object):
\[ I = k_a I_A + k_d I_L \, \mathbf{N} \cdot \mathbf{L} + k_s I_L (H \cdot N)^{n_s} \]
- Multiple lightsources:
\[ I = k_a I_A + \sum_i \left( k_d I_i \, \mathbf{N} \cdot \mathbf{L}_i + k_s I_i (H_i \cdot N)^{n_s} \right) \]
- Usually, I is a color vector of (R=red, G=green, B=blue)
- Object has a color vector \( C_{\text{obj}} = (R_{\text{obj}}, G_{\text{obj}}, B_{\text{obj}}) \)
- Object reflects I, modulated by \( C_{\text{obj}} \)
- Color C reflected by object:
\[ C = C_{\text{obj}} \left( k_a I_A + \sum_i k_d I_i \, \mathbf{N} \cdot \mathbf{L}_i \right) + \sum_i k_s I_i (H_i \cdot N)^{n_s} \]
- In many applications, the specular color is not modulated by the object color - the specular highlight has the color of the lightsource
- Note: (R, G, B) cannot be larger than 1.0 (later scaled to [0, 255] for display)
  - either set a maximum for each individual term or clamp final colors to 1.0

Polygon Shading Methods - Faceted Shading
• How are the pixel colors determined in the z-buffer?
• The simplest method is *flat or faceted shading*:
- each polygon has a constant color
- compute the color at one point on the polygon (e.g., at its center) and use it everywhere
- assumption: lightsource and eye are far away, i.e., $N \cdot L$ and $H \cdot E$ are approximately constant across the polygon
• Problem: discontinuities are likely to appear at face boundaries
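The combined ambient + diffuse + specular (half-vector) intensity above can be sketched in C as follows. This is an illustrative single-light, white-object version with assumed parameter names; the small vector helpers are restated so the fragment stands alone.

```c
/* Illustrative sketch of I = ka*IA + kd*IL*(N.L) + ks*IL*(H.N)^ns for one light
 * and a white object; names and clamping choices are assumptions. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static vec3 normalize(vec3 a) {
    double len = sqrt(dot(a, a));
    vec3 r = { a.x / len, a.y / len, a.z / len };
    return r;
}

static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

double lit_intensity(vec3 N, vec3 P, vec3 light_pos, vec3 eye_pos,
                     double ka, double kd, double ks, double ns,
                     double IA, double IL) {
    vec3 Nn = normalize(N);
    vec3 L  = normalize(sub(light_pos, P));                       /* light vector */
    vec3 E  = normalize(sub(eye_pos, P));                         /* eye vector   */
    vec3 H  = normalize((vec3){ L.x + E.x, L.y + E.y, L.z + E.z });  /* half vector */

    double ndotl = dot(Nn, L); if (ndotl < 0.0) ndotl = 0.0;
    double ndoth = dot(Nn, H); if (ndoth < 0.0) ndoth = 0.0;

    double I = ka * IA                   /* ambient  */
             + kd * IL * ndotl           /* diffuse  */
             + ks * IL * pow(ndoth, ns); /* specular */
    return I > 1.0 ? 1.0 : I;            /* clamp to 1.0, as noted above */
}
```

With flat shading, such a function would be evaluated once per polygon (e.g., at its center); with Phong shading, once per pixel using the interpolated normal.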
Polygon Shading Methods - Gouraud Shading
- Colors are averaged across polygons along common edges → no more discontinuities
- Steps:
  - determine the average unit normal at each poly vertex:
\[ \mathbf{N}_v = \frac{\sum_{k=1}^{n} \mathbf{N}_k}{\left| \sum_{k=1}^{n} \mathbf{N}_k \right|} \]
    \(n\): number of faces that have vertex \(v\) in common
  - apply the illumination model at each poly vertex → \(C_v\)
  - linearly interpolate vertex colors across edges
  - linearly interpolate edge colors across scan lines
- Downside: may miss specular highlights at off-vertex positions or distort specular highlights

Polygon Shading Methods - Phong Shading
• Phong shading linearly interpolates normal vectors, not colors → more realistic specular highlights
• Steps:
- determine the average normal at each vertex
- linearly interpolate normals across edges
- linearly interpolate normals across scanlines
- apply the illumination model at each pixel to calculate the pixel color
• Downside: needs more calculations, since the illumination model must be evaluated at each pixel

Rendering With OpenGL (1)
• `glMatrixMode(GL_PROJECTION)`
• Define the viewing window:
- `glOrtho()` for parallel projection
- `glFrustum()` for perspective projection
• `glMatrixMode(GL_MODELVIEW)`
• Specify the viewpoint
- `gluLookAt()` /* from the GLU library */
• Model the scene
- `glTranslate()`, `glRotate()`, `glScale()`, ...

Modelview Matrix Stack
```
gluLookAt(...)
glTranslate(x,y,z)
glRotate(phi_y, 0,1,0)
glRotate(phi_z, 0,0,1)
glRotate(phi_x, 1,0,0)
```
Order of execution as applied to the vertices: rotate first, then translate, then do viewing...

OpenGL rendering pipeline: a vertex (x, y, z, w) in object coordinates is multiplied by the Modelview Matrix (giving eye coordinates), then by the Projection Matrix (giving clip coordinates); the Perspective Division yields normalized device coordinates, and the Viewport Transformation finally produces window coordinates. Look also in www.opengl.org.

Rendering With OpenGL (2)
Specify the light sources: `glLight()`
Enable the z-buffer: `glEnable(GL_DEPTH_TEST)`
Enable lighting: `glEnable(GL_LIGHTING)`
Enable light source $i$: `glEnable(GL_LIGHT$i$)` /* GL_LIGHT$i$ is the symbolic name of light $i$ */
Select shading model: `glShadeModel()` /* GL_FLAT or GL_SMOOTH */
For each object:
```c
glPushMatrix();   /* duplicate the matrix on the stack if you want to apply some extra transformations to the object */
glBegin(GL_POLYGON);
  glColor3fv(c1); glNormal3fv(n1); glVertex3fv(v1);   /* vertex 1 - color and normal are set before the vertex call */
  glColor3fv(c2); glNormal3fv(n2); glVertex3fv(v2);   /* vertex 2 */
  glColor3fv(c3); glNormal3fv(n3); glVertex3fv(v3);   /* vertex 3 */
glEnd();
glPopMatrix();    /* get rid of the object-specific transformations, pop back the saved matrix */
```

Example: Scene Graph Bike
```
T_d = glTranslate(dist)        // translate bike
glPushMatrix()                 // duplicate T_d on the stack
T_f = glTranslate(+w_1 → O)
R   = glRotate(angle)
T_b = glTranslate(-w_1 → O)
Render(w_1)                    // T_d T_b R T_f w_1
glPopMatrix()                  // expose T_d
glPushMatrix()                 // duplicate T_d
glTranslate(+w_2 → O)
glRotate(angle)
glTranslate(-w_2 → O)
Render(w_2)                    // T_d T_b R T_f w_2
glPopMatrix()                  // expose T_d
Render(frame)                  // T_d frame
```
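As a concrete version of the scene-graph traversal sketched above, the following C fragment draws the two wheels and the frame with the legacy OpenGL matrix stack. renderWheel(), renderFrame(), the rotation axis, and the wheel offsets w1x, w2x are illustrative assumptions, not part of the original slides.

```c
#include <GL/gl.h>

static void renderWheel(void) { /* wheel geometry (modeled about its own center) would be drawn here */ }
static void renderFrame(void) { /* frame geometry would be drawn here */ }

void drawBike(float dist, float angle, float w1x, float w2x) {
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(dist, 0.0f, 0.0f);        /* T_d: translate the whole bike */

    glPushMatrix();                        /* save T_d */
    glTranslatef(w1x, 0.0f, 0.0f);         /* move the rotated wheel back to its position */
    glRotatef(angle, 0.0f, 0.0f, 1.0f);    /* spin the wheel about its own center */
    glTranslatef(-w1x, 0.0f, 0.0f);        /* first bring the wheel center to the origin */
    renderWheel();                         /* wheel 1 */
    glPopMatrix();                         /* back to T_d */

    glPushMatrix();                        /* save T_d again */
    glTranslatef(w2x, 0.0f, 0.0f);
    glRotatef(angle, 0.0f, 0.0f, 1.0f);
    glTranslatef(-w2x, 0.0f, 0.0f);
    renderWheel();                         /* wheel 2 */
    glPopMatrix();

    renderFrame();                         /* the frame only receives T_d */
}
```

The push/pop pairs ensure that each wheel's spin is applied about its own center while the frame is only translated with the bike.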
Licensed Program Specifications Enterprise COBOL for z/OS, Version 4 Release 2–Program Number 5655-S71 With IBM® Enterprise COBOL for z/OS®, Version 4, you get more than 40 years of IBM experience in application development to facilitate your new On Demand Business endeavors. Enterprise COBOL helps you integrate COBOL and Web-based business processes in Web services, XML, Java™, and COBOL applications. This interoperability lets you capitalize on existing IT investment while smoothly incorporating new, Web-based applications as part of your organization’s infrastructure. Enterprise COBOL is a leading-edge IBM z/OS-based compiler that helps you create and maintain mission-critical, line-of-business COBOL applications, targeted to execute on your z/OS systems. It offers access to IBM DB2®, IBM CICS®, and IBM IMS™ systems, as well as other data and transaction systems. **Version 4.2 enhancements** Enterprise COBOL, V4.2 delivers: Further enhancements to XML parsing using the z/OS XML System Services parser: - You can now parse XML documents with validation against an XML schema, using the VALIDATING phrase of the XML PARSE statement. - Performance is improved for nonvalidating parsing. - Character processing is enhanced for any XML document that contains a reference to a character that is not included in the single-byte EBCDIC code page of the document. A new facility lets you customize message severity: - The new MSGEXIT suboption of the EXIT compiler option lets you specify a module that is called for each compiler diagnostic message and each FIPS (FLAGSTD) message. Using the MSGEXIT module, you can change the severity of messages, suppress messages, and convert FIPS messages into diagnostic messages. A new compiler option, BLOCK0, lets programs take advantage of system-determined block size for QSAM output files: - When a program is compiled using the BLOCK0 compiler option, an implicit BLOCK CONTAINS 0 clause is activated for all eligible QSAM files in the program, which can result in enhanced processing speed and minimized storage requirements for output files. COBOL user-defined words can now include the underscore character (_): - User-defined words such as data names and program names can now include underscore characters. Underscores are also supported in the literal form of program names. Compiler listings display CICS options in effect: - When applications are compiled using the integrated CICS translator, compiler listings will show the CICS options that are in effect. This facility provides the same benefit to CICS users as was previously made available to DB2 users. Additional SDKs supported for Java interoperability: - Enterprise COBOL applications using object-oriented syntax for Java interoperability can now run with Java 5 or Java 6. Java SDK 1.4.2 continues to be supported. Specified operating environment for Enterprise COBOL This section lists the hardware and software requirements for IBM Enterprise COBOL for z/OS, Version 4 Release 2. Hardware requirements Enterprise COBOL for z/OS, V4.2 runs on any z/Architecture® processor that includes the z/Architecture Extended-Translation Facility 2. Software requirements Enterprise COBOL for z/OS, V4.2 runs under the control of, or in conjunction with, the currently supported releases of the following programs and their subsequent releases or their equivalents.
For information about programs listed below that require program temporary fixes (PTFs), see the Enterprise COBOL Program Directory and the preventive service planning (PSP) bucket. Required licensed programs Enterprise COBOL and its generated object programs run under the following zSeries® operating systems: - z/OS, V1.9 (5694-A01), or later Language Environment® provides the execution environment and library of COBOL runtime services required to compile and run COBOL applications using Enterprise COBOL: - z/OS Language Environment V1.9, V1.10, or V1.11, and PTFs for APAR PK90754 For installation on z/OS, the following is required: - z/OS SMP/E element The following is required for customization during or after installation: - z/OS High Level Assembler Enterprise COBOL XML processing with the default XMLPARSE(XMLSS) option requires: - z/OS XML System Services V1.9, V1.10, or V1.11, and PTFs for APARs OA28253 and OA28398. When parsing with validation under CICS, the PTFs for APAR OA29675 are also required. Optional licensed programs for z/OS Support for applications using object-oriented COBOL syntax for Java interoperability requires one of the following: - SDK for z/OS, Java Technology Edition V6 (5655-R31) and PTFs for APAR PK89762 - SDK for z/OS, Java 2 Technology Edition, V5 (5655-N98) - SDK for z/OS, Java 2 Technology Edition, V1.4.2 (5655-I56) Note: COBOL requires a 31-bit Java SDK; 64-bit Java technology is not currently supported. Support for DB2 integrated coprocessor (SQL compiler option) requires one of the following: - DB2® Universal Database™ for z/OS, V9 (5635-DB2) - DB2 Universal Database for z/OS, V8 (5625-DB2) Support for use of national decimal host variables in EXEC SQL statements requires DB2 V8 and PTFs for APAR PQ93857, or DB2 V9. Support for use of alternate DDNAME for DBRMLIB requires DB2 V8, or DB2 V9 and PTFs for DB2 APAR PK55937. Support for the integrated CICS translator (CICS compiler option) requires one of the following: - CICS Transaction Server for z/OS, V4 (5655-S97) - CICS Transaction Server for z/OS, V3 (5655-M15) Including CICS options in effect as part of the COBOL listing requires CICS Transaction Server for z/OS, V4.1 (5655-S97) and PTFs for APAR PK89224, or CICS Transaction Server for z/OS, V3.2 (5655-M15) and PTFs for APAR PK91041.
For sorting and merging, you must use the following feature of z/OS, or an equivalent product: - DFSORT™ element of z/OS (5694-A01) Programs with Report Writer statements require: - COBOL Report Writer, Release 4 (5798-DYR, 5798-DZX) Enterprise COBOL, V4.2 runs with the currently supported releases of the following programs: - CICS Transaction Server for z/OS, V4 (5655-S97) - CICS Transaction Server for z/OS, V3 (5655-M15) - DB2 Universal Database for z/OS, V9 (5635-DB2) - DB2 Universal Database for z/OS, V8 (5625-DB2) - IMS, V10 (5635-A01) - IMS, V9 (5655-J38) - IBM Application Performance Analyzer for z/OS, V9 (5697-P10) - IBM Application Performance Analyzer for z/OS, V8 (5697-N63) - IBM Application Performance Analyzer for z/OS, V7 (5697-N53) - Debug Tool for z/OS, V9 (5655-U27) - Debug Tool for z/OS, V8 (5655-S17) - Debug Tool for z/OS, V7 (5655-R44) - Debug Tool Utilities and Advanced Functions for z/OS, V8 (5655-S16) - Debug Tool Utilities and Advanced Functions for z/OS, V7 (5655-R45) - IBM Fault Analyzer for z/OS, V9 (5655-U28) - IBM Fault Analyzer for z/OS, V8 (5655-S15) - IBM Fault Analyzer for z/OS, V7 (5655-R46) - IBM File Manager for z/OS, V9 (5655-U29) - IBM File Manager for z/OS, V8 (5655-S14) - IBM File Manager for z/OS, V7 (5655-R47) - IBM Rational® Developer for System z®, V7 (5724-T07) - COBOL Report Writer Release 4 (5798-DYR, 5798-DZX) - High Level Assembler MVS™ & VM & VSE (5696-234) - Enterprise PL/I for z/OS, V3 (5655-H31) - VS FORTRAN, V2 (5668-806, 5668-087) - For C/C++ with Enterprise COBOL, you must use the C/C++ feature of z/OS **Industry standards supported by Enterprise COBOL V4.2** Enterprise COBOL supports the following industry standards. **ISO standards** ISO 1989:1985, Programming Languages - COBOL. ISO/IEC 1989/AMD2:1994, Programming Languages - Correction and clarification amendment for COBOL. ISO 1989:1985 is identical to ANSI INCITS 23-1985 (R2001), Programming Languages - COBOL. ISO/IEC 1989/AMD1:1992 is identical to ANSI INCITS 23a-1989 (R2001), Programming Languages - Intrinsic Function Module for COBOL. ISO/IEC 1989/AMD2:1994 is identical to ANSI INCITS 23b-1993, Programming Language - Correction Amendment for COBOL. For supported modules, see American National Standards below. International Reference Version of the ISO 7-bit code defined in *International Standard 646, 7-Bit Coded Character Set for Information Interchange*. **American National standards** ANSI INCITS 23-1985 (R2001), Programming Languages - COBOL. ANSI INCITS 23a-1989 (R2001), Programming Languages - Intrinsic Function Module for COBOL. ANSI INCITS 23b-1993 (R2001), Programming Language - Correction Amendment for COBOL. The 7-bit coded character set defined in American National Standard X3.4-1977, Code for Information Interchange. All required modules are supported at the highest level defined by the standard. In the following list, the shorthand notation for describing module levels is shown in parentheses. For example, to summarize module information for sequential input and output, the shorthand notation is (2 SEQ 1,2). The first digit indicates the level of language elements within the module supported by Enterprise COBOL. Next is the three-character abbreviation of the module name as used in the standard. Finally, the two digits separated by a comma indicate the minimum and maximum levels of the module. 
For example, (2 SEQ 1,2) means that Enterprise COBOL supports the sequential I-O module at level 2, while the range of levels in the module is from 1 (minimum) to 2 (maximum). - **Nucleus (2 NUC 1,2)** Provides internal processing of data within the four basic divisions of a program and the capability for defining and accessing tables. - **Sequential I-O (2 SEQ 1,2)** Provides access to records of a file in established sequence. The sequence is established as a result of writing the records to the file. - **Relative I-O (2 REL 0,2)** Provides access to records in either a random or sequential manner. Each record is uniquely identified by an integer specifying the record’s logical position in a file. - **Indexed I-O (2 INX 0,2)** Provides access to records in either a random or sequential manner. Each record in an indexed file is uniquely identified by the value of a key within that record. - **Sort-Merge (1 SRT 0,1)** Orders one or more files of records, or combines two or more identically ordered files of records, according to a set of user-specified keys. - **Inter-Program Communication (2 IPC 1,2)** Allows a COBOL program to communicate with other programs through transfers of control and access to common data items. - **Source Text Manipulation (2 STM 0,2)** Allows the insertion of source program text as part of the compilation of the source program. COBOL libraries contain texts which are available to the compiler at compile time and which can be treated by the compiler as part of the source program. In addition, the following optional modules of the standard are supported: - **Intrinsic Functions (1 ITR 0,1)** Provides the capability to reference a data item whose value is derived automatically at the time of reference during the execution of the object program. - **Debug (1 DEB 0,2)** Monitors object program execution through declarative procedures, special debugging lines, and a special register, DEBUG-ITEM, which gives specific information about execution status. - **Segmentation (2 SEG 0,2)** Refreshes independent segments when required. The following optional module of the standard is supported with the optional IBM COBOL Report Writer Precompiler (5798-DYR): - **Report Writer** The following optional modules of the standard are not supported: - **Communications** - **Debug (2 DEB 0,2)** **Restrictions:** Enterprise COBOL has the following restrictions related to industry standards: - **OPEN EXTEND** is not supported for ASCII encoded tapes (CODESET STANDARD-1 or STANDARD-2). - When division by zero occurs in an arithmetic expression and an ON SIZE ERROR phrase is not specified, processing abnormally terminates. Compatibility with previous product releases Compatibility with Enterprise COBOL for z/OS, Version 4 Release 1 Enterprise COBOL for z/OS, Version 4 Release 2 is fully source and object compatible with Enterprise COBOL for z/OS, Version 4 Release 1, except in the following cases: - There are new reserved words. For further details, see the Enterprise COBOL for z/OS Compiler and Runtime Migration Guide, Version 4 Release 2. - Character processing is enhanced for any XML document that contains a reference to a character that is not included in the single-byte EBCDIC code page of the document. For further details, see Enterprise COBOL for z/OS Compiler and Runtime Migration Guide, Version 4 Release 2. 
Compatibility with Enterprise COBOL for z/OS, Version 3 Enterprise COBOL for z/OS, Version 4 Release 2 is fully source and object compatible with Enterprise COBOL for z/OS, Version 3, except in the following cases: - There are new reserved words. See the Enterprise COBOL for z/OS Compiler and Runtime Migration Guide, Version 4 Release 2 for details. - The SIMVRD runtime option and simulated variable length relative record data sets are no longer supported. - The suboptions of the TEST compiler option are simplified. Existing suboptions are tolerated for compatibility, and are automatically mapped to the new suboption values. Symbolic debugging information is always generated when the TEST option is in effect. - Corrections to the SEARCH ALL statement have been made that might result in behavior incompatible with Enterprise COBOL, V3 if the compiler installation is at release 3, or earlier, or at release 4 if the installation does not have current service applied. See the Enterprise COBOL for z/OS Compiler and Runtime Migration Guide, Version 4 Release 2 for details. Security, auditability, and control The announced program uses the security and auditability features of the host operating system software. The customer is responsible for evaluation, selection and implementation of security features, administrative procedures, and appropriate controls in application systems and communication facilities. Licensed program materials availability Restricted materials - No. This licensed program is available without source licensed program materials. It is available in object code only. Supplemental terms Designated Machine Identification Designated Machine Identification required: Yes. Testing period - Basic License: Not applicable. - DSLO License: Not applicable. Installation or location license Not applicable. A separate license is required for each machine on which the licensed program will be used. Usage restriction Not applicable. Type and duration of program services - Central Service. - Until discontinued by IBM with a minimum of six months’ written notice. Authorization for copy and use on home or portable computer Not applicable. Softcopy publications Enterprise COBOL licenses may include licensed publications in displayable or source form. Except as provided in this section, the terms and conditions of the license agreement with IBM apply to these publications and to any copies that are made from them. The licensed publications may be used in displayable or source form on all machines designated for this program. The licensed publications may also be copied and used on other machines in support of authorized use of Enterprise COBOL. To support authorized use of Enterprise COBOL, printed copies of the displayable or source material may be made if the copyright notice and any other legend of ownership is reproduced on each copy or partial copy. Notices and information for supported standards W3C(R) DOCUMENT LICENSE http://www.w3.org/Consortium/Legal/2002/copyright-documents-20021231 Permission to copy, and distribute the contents of this document, or the W3C document from which this statement is linked, in any medium for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the document, or portions thereof, that you use: 1. A link or URL to the original W3C document. 2. 
The pre-existing copyright notice of the original author, or if it doesn’t exist, a notice (hypertext is preferred, but a textual representation is permitted) of the form: "Copyright (©) [date-of-document] World Wide Web Consortium, Massachusetts Institute of Technology, European Research Consortium for Informatics and Mathematics" 3. If it exists, the STATUS of the W3C document: a. Extensible Markup Language (XML) 1.0 b. http://www.w3.org/TR/REC-xml/ c. Copyright © 2008 W3C (MIT, ERCIM, Keio), All Rights Reserved. d. Status: This document specifies a syntax created by subsetting an existing, widely used international text processing standard (Standard Generalized Markup Language, ISO 8879:1986(E) as amended and corrected) for use on the World Wide Web. It is a product of the XML Core Working Group as part of the XML Activity. The English version of this specification is the only normative version. However, for translations of this document, see http://www.w3.org/2003/03/Translations/byTechnology?technology=xml. This document is a W3C Recommendation. This fifth edition is not a new version of XML. As a convenience to readers, it incorporates the changes dictated by the accumulated errata (available at http://www.w3.org/XML/xml-V10-4e-errata) to the Fourth Edition of XML 1.0, dated 16 August 2006. In particular, erratum [E09] relaxes the restrictions on element and attribute names, thereby providing in XML 1.0 the major end user benefit currently achievable only by using XML 1.1. As a consequence, many possible documents which were not well-formed according to previous editions of this specification are now well-formed, and previously invalid documents using the newly-allowed name characters in, for example, ID attributes, are now valid. This edition supersedes the previous W3C Recommendation of 16 August 2006. Please report errors in this document to the public xml-editor@w3.org mail list; public archives are available. For the convenience of readers, an XHTML version with color-coded revision indicators is also provided; this version highlights each change due to an erratum published in the errata list for the previous edition, together with a link to the particular erratum in that list. Most of the errata in the list provide a rationale for the change. The errata list for this fifth edition is available at http://www.w3.org/XML/xml-V10-5e-errata. An implementation report is available at http://www.w3.org/XML/2008/01/xml10-5e-implementation.html. A Test Suite is maintained to help assessing conformance to this specification. This document has been reviewed by W3C Members, by software developers, and by other W3C groups and interested parties, and is endorsed by the Director as a W3C Recommendation. It is a stable document and may be used as reference material or cited from another document. W3C’s role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability of the Web. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy. When space permits, inclusion of the full text of this NOTICE should be provided. 
We request that authorship attribution be provided in any software, documents, or other items or products that you create pursuant to the implementation of the contents of this document, or any portion thereof. No right to create modifications or derivatives of W3C documents is granted pursuant to this license. However, if additional requirements (documented in the Copyright FAQ) are satisfied, the right to create modifications or derivatives is sometimes granted by the W3C to individuals complying with those requirements. THIS DOCUMENT IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, OR TITLE; THAT THE CONTENTS OF THE DOCUMENT ARE SUITABLE FOR ANY PURPOSE; NOR THAT THE IMPLEMENTATION OF SUCH CONTENTS WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS. COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE DOCUMENT OR THE PERFORMANCE OR IMPLEMENTATION OF THE CONTENTS THEREOF. The name and trademarks of copyright holders may NOT be used in advertising or publicity pertaining to this document or its contents without specific, written prior permission. Title to copyright in this document will at all times remain with copyright holders. This formulation of W3C’s notice and license became active on December 31, 2002. This version removes the copyright ownership notice such that this license can be used with materials other than those owned by the W3C, moves information on style sheets, DTDs, and schemas to the Copyright FAQ, reflects that ERCIM is now a host of the W3C, includes references to this specific dated version of the license, and removes the ambiguous grant of ”use”. See the older formulation for the policy prior to this date. Please see our Copyright FAQ for common questions about using materials from our site, such as the translating or annotating specifications. Other questions about this notice can be directed to site-policy@w3.org. W3C(R) SOFTWARE NOTICE AND LICENSE http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231 This work (and included software, documentation such as READMEs, or other related items) is being provided by the copyright holders under the following license. By obtaining, using and/or copying this work, you (the licensee) agree that you have read, understood, and will comply with the following terms and conditions. Permission to copy, modify, and distribute this software and its documentation, with or without modification, for any purpose and without fee or royalty is hereby granted, provided that you include the following on ALL copies of the software and documentation or portions thereof, including modifications: 1. The full text of this NOTICE in a location viewable to users of the redistributed or derivative work. 2. Any pre-existing intellectual property disclaimers, notices, or terms and conditions. If none exist, the W3C Software Short Notice should be included (hypertext is preferred, text is permitted) within the body of any redistributed or derivative code. 3. Notice of any changes or modifications to the files, including the date changes were made. (We recommend you provide URLs to the location from which the code is derived.) 
THIS SOFTWARE AND DOCUMENTATION IS PROVIDED "AS IS," AND COPYRIGHT HOLDERS MAKE NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO, WARRANTIES OF MERCHANTABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE OR DOCUMENTATION WILL NOT INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR OTHER RIGHTS. COPYRIGHT HOLDERS WILL NOT BE LIABLE FOR ANY DIRECT, INDIRECT, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF ANY USE OF THE SOFTWARE OR DOCUMENTATION. The name and trademarks of copyright holders may NOT be used in advertising or publicity pertaining to the software without specific, written prior permission. Title to copyright in this software and any associated documentation will at all times remain with copyright holders. This formulation of W3C’s notice and license became active on December 31, 2002. This version removes the copyright ownership notice such that this license can be used with materials other than those owned by the W3C, reflects that ERCIM is now a host of the W3C, includes references to this specific dated version of the license, and removes the ambiguous grant of "use". Otherwise, this version is the same as the previous version and is written so as to preserve the Free Software Foundation's assessment of GPL compatibility and OSI's certification under the Open Source Definition. Please see our Copyright FAQ for common questions about using materials from our site, including specific terms and conditions for packages like libwww, Amaya, and Jigsaw. Other questions about this notice can be directed to site-policy@w3.org. Warranty This program is warranted as specified in the IBM license. Licensed Program Specifications may be updated from time to time and such updates may constitute a change in specifications. For Distributed Systems License Option (DSLO) Licenses, warranty service, if any, will be provided only through the Basic License location. Following the discontinuance of all program services, this program will be provided “As Is” as specified in the IBM license. Trademarks The following terms are trademarks and/or registered trademarks of the IBM Corporation in the United States or other countries or both: - CICS - DB2 - DB2 Universal Database - DFSORT - IBM - IMS - IMS/ESA® - Language Environment - MVS - OS/390® - Rational - System z - VM/ESA® - z/Architecture - z/OS - zSeries Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. References in this publication to IBM products, program, or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM product, program, or service is not intended to state or imply that only IBM’s product, program, or service can be used. Any functionally equivalent product, program, or service that does not infringe any of IBM’s intellectual property rights can be used instead of the IBM product, program, or service. Any other documentation with respect to this licensed program, including any documentation referenced herein, is provided for reference purposes only and does not extend or modify these specifications. August 2009 © Copyright International Business Machines Corporation 2009. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. G111-7871-02
Software Reliability Growth with Test Coverage

Yashwant K. Malaiya, Senior Member IEEE, Colorado State University, Fort Collins
Naixin Li, Microsoft, Redmond
James M. Bieman, Senior Member IEEE, Colorado State University, Fort Collins
Rick Karcich, Sun Microsystems, Broomfield

1 Key Words

Software reliability, software testing, test coverage, reliability-growth model, defect density.

2 Summary and Conclusions

Software test coverage measures quantify the degree of thoroughness of testing. Tools are now available that measure test coverage in terms of the blocks, branches, c-uses, p-uses, etc. covered. In this paper, we model the relations among testing time, coverage and reliability. We present a logarithmic-exponential (LE) model that relates testing effort to test coverage (block, branch, c-use or p-use). The model is based on the hypothesis that the enumerable elements (like branches or blocks) for any coverage measure have different probabilities of being exercised, just like defects have different probabilities of being encountered. This model allows us to relate a test coverage measure directly to defect coverage. We have fitted the model to four data sets for programs with real defects. In the model, defect coverage can predict the time to next failure. The LE model can eliminate variables like test application strategy from consideration. It is suitable for high reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested. The data sets used suggest the potential of the proposed model. The model presented here is simple and easily explained, and thus can be suitable for industrial use. The LE model is based on the time-based Logarithmic software reliability growth model. It takes into account the fact that at 100% coverage for a given enumerable, all defects may not yet have been found.

3 Introduction

Acronyms
- LE model: the proposed logarithmic-exponential model
- DSi (i = 1, 2, 3, 4): data-set i
- RGM: reliability growth model

Developers can achieve the target reliability of software systems in a predictable way by evaluating reliability during development. By evaluating and projecting reliability growth, developers can optimally allocate resources to meet a deadline with the target reliability [mus99]. To quantify reliability during testing, the code is executed using inputs randomly selected following some distribution. Then, a reliability growth model can be used to predict the amount of effort required to satisfy product reliability requirements, provided the distribution used for testing is the same as the operational profile. However, the focus of testing is on finding defects, and defects can often be found much faster by non-random methods [bei90]. Testing is directed towards inputs and program components where errors are more likely. For example, testing may be conducted to ensure that particular portions of the program and/or boundary cases are covered. Models that can measure and predict reliability based on the status of non-random testing are clearly needed. Reliability achieved will be affected by several factors:
- The testing strategy: Test coverage may be based on the functional specification (black-box), or it may be based on internal program structure (white-box). Strategies can vary in their ability to find defects.
- The relationship between calendar time and execution time: The testing process can be accelerated through the possibly parallel, intensive execution of tests at a faster rate than would occur during operational use.
- Testing of rarely executed modules: Such modules include exception handling or error recovery routines. These modules rarely run [hec94], and are notoriously difficult to test. Yet, they are critical components of a system that must be highly reliable.

Intuition suggests that test coverage must be related to reliability. Yet, the connection between structure-based measurements, like test coverage, and reliability is still not well understood. There are several motivations for investigating the relation between test coverage and reliability. Test coverage, rather than test effort, is a direct measure of how thoroughly a system has been exercised. With the same test effort (measured in CPU execution time or calendar time), a less effective test strategy may be less efficient in finding defects. Measuring test coverage is usually an intrusive approach; however, available tools now allow it to be done automatically.

The effectiveness of testing in finding defects has been recently examined by several researchers. Dalal, Horgan and Kettenring [dhk93] have examined the correlation between test coverage and the error removal rate. Vouk [vou92] has suggested that the relation between structural coverage and fault coverage is a variant of the Rayleigh distribution. Chen et al. [chm92, chm96] add structural coverage to traditional time-based software reliability models (SRMs) by excluding test cases that do not increase coverage. Assuming random testing, Piwowarski, Ohba and Caruso [poc93] analyze block coverage growth during function test, and derive an exponential model relating the number of tests to block coverage. Frankl and Weiss [fra93] have experimented with detection of defects in small programs. Hutchins et al. [hfgo94] study detection effectiveness of test sets with different coverage values for realistic seeded faults. They find that a test set with higher coverage has higher per-test detection probability. They also show that 100% coverage using a specific measure may not detect all the faults.

In this paper, we explore the connection between test coverage and reliability. We develop a model that relates test coverage to defect coverage. With this model we can estimate the defect density. With knowledge of the fault exposure ratio, we can predict reliability from test coverage measures.

Notation
Superscript 0 indicates defects; superscripts 1, 2, 3, 4 indicate specific enumerables.
- \( C^j(n) \): expected coverage of the enumerables of type \( j \)
- \( C_{\text{knee}}^j \): the coverage level at which the knee occurs
- \( \beta_0^i, \beta_1^i \): the Logarithmic model parameters for enumerable \( i \)
- \( a_0^i, a_1^i, a_2^i \): parameters of the proposed model in terms of enumerable \( i \), used in Equation 4
- \( b_0^i, b_1^i \): parameters used in Equation 3
- \( K^i \): fault or enumerable exposure ratio
- \( T_L \): linear execution time
- \( t_f \): time when debugging stops
- \( N^i \): the total number of enumerables of type \( i \)
- \( \lambda \): failure intensity
- \( A^i, B^i \): parameters used in Equation 5

4 Coverage of Enumerables

Test coverage in software is measured in terms of structural or data-flow units that have been exercised.
Some of the common coverage measures are defined below:
- Statement (or block) coverage: the fraction of the total number of statements (blocks) that have been executed by the test data.
- Branch (or decision) coverage: the fraction of the total number of branches that have been executed by the test data.
- C-use coverage: the fraction of the total number of computation uses (c-uses) that have been covered during testing. A c-use pair includes two points in the program, a point where the value of a variable is defined or modified followed by a point where it is used for computation (without the variable being modified along the path) [rap85, ram85].
- P-use coverage: the fraction of the total number of predicate uses (p-uses) that have been covered during testing. A p-use pair includes two points in the program, a point where the value of a variable is defined or modified followed by a point which is a destination of a branching statement where it is used as a predicate (without modifications to the variable along the path) [rap85, ram85].

To keep the following discussion general, we will use the term enumerable to indicate a unit covered by testing [mali94]. For defect coverage the enumerables are defects, for branch coverage the enumerables are branches, and so on. We use the term “enumerable-type” to imply defects, blocks, branches, c-uses or p-uses. We use superscript \( i \), \( i = 0 \) to \( 4 \), to identify one of the five types in this way: \( 0 \): defects, \( 1 \): blocks, \( 2 \): branches, \( 3 \): c-uses, \( 4 \): p-uses. We assume that no functional changes are being attempted, and thus no new code is being added to the software under test. When an enumerable is exercised, it is possible that one or more associated faults may be detected. Counting the number of units covered gives us a measure of the extent of sampling. Sometimes 85% branch coverage is considered to be the minimum acceptable value [gra92]. The defect coverage in software can be defined in an analogous manner; it is the fraction of actual defects initially present that would be detected by a given test set. In general, test coverage increases when more test cases are applied as long as the test cases are not repeated and complete test coverage has not already been achieved. A small number of enumerables may not be reachable in practice. We assume that the fraction of such enumerables is negligible. It has been shown that if all paths in the program have been exercised, then all p-uses must have been covered. Similarly, all-p-use coverage implies all-branches coverage, and all-branches coverage implies all-instructions coverage. This is termed the subsumption hierarchy [rap85, cla89, bisc92].

5 A New Logarithmic-Exponential (LE) Coverage Model

In this paper, we use the Musa-Okumoto logarithmic growth model [mus87, far96, mus99, mkv92, mvs93]. We hypothesize that the defect coverage growth follows the logarithmic model:
\[ C^0(t) = \frac{1}{N^0} \beta_0^0 \ln(1 + \beta_1^0 t), \quad C^0(t) \leq 1 \]  (1)
where \( C^0(t) \) is the defect coverage at time \( t \) and \( N^0 \) is the total number of initial defects. Note that since the maximum value of coverage is one, this equation is applicable for coverage values less than or equal to one. We also hypothesize that the coverage growth of enumerable \( i \) also follows the logarithmic model (\( i = 1, 2, 3, 4 \)):
\[ C^i(t) = \frac{1}{N^i} \beta_0^i \ln(1 + \beta_1^i t), \quad C^i(t) \leq 1 \]  (2)
Both Equations 1 and 2 can be considered to be two-parameter models.
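As a small illustration of the two-parameter logarithmic growth of Equations 1 and 2, the following C sketch evaluates a curve of the form $C(t) = b_0 \ln(1 + b_1 t)$, capped at 1; the parameter values are made up for illustration and are not fitted values from the data sets discussed later.

```c
/* Evaluate the logarithmic coverage growth C(t) = b0 * ln(1 + b1 * t),
 * capped at 1.0 as required by Equations 1 and 2. Parameters are illustrative only. */
#include <math.h>
#include <stdio.h>

static double log_coverage(double b0, double b1, double t) {
    double c = b0 * log(1.0 + b1 * t);
    return c > 1.0 ? 1.0 : c;
}

int main(void) {
    double b0 = 0.05, b1 = 1000.0;   /* made-up parameters */
    for (double t = 0.0; t <= 10.0; t += 2.0)
        printf("t = %4.1f  coverage = %.3f\n", t, log_coverage(b0, b1, t));
    return 0;
}
```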
Note that the maximum value of \( C^i(t) \) is 1. Once this value is reached during testing, it remains 1 with further testing. Equation 2 can be written in the general form
\[ C^i(t) = b_0^i \ln(1 + b_1^i t), \quad C^i(t) \leq 1, \quad i = 1 \text{ to } 4 \]  (3)
Equation 2 relates coverage \( C^i \) to the number of tests applied. We use it to obtain an expression giving defect coverage \( C^0 \) in terms of one of the coverage metrics \( C^i, i = 1 \text{ to } 4 \). Using Equation 2, we solve for \( t \):
\[ t = \frac{1}{\beta_1^i} \left[ \exp\left(\frac{C^i N^i}{\beta_0^i}\right) - 1 \right], \quad i = 1 \text{ to } 4 \]
Substituting this expression for \( t \) into Equation 1:
\[ C^0(C^i) = \frac{\beta_0^0}{N^0} \ln\left[ 1 + \frac{\beta_1^0}{\beta_1^i}\left( \exp\left(\frac{C^i N^i}{\beta_0^i}\right) - 1 \right) \right], \quad i = 1 \text{ to } 4 \]
Defining \( a_0^i = \frac{\beta_0^0}{N^0} \), \( a_1^i = \frac{\beta_1^0}{\beta_1^i} \) and \( a_2^i = \frac{N^i}{\beta_0^i} \), we can write the above using three parameters as
\[ C^0(C^i) = a_0^i \ln\left[ 1 + a_1^i \left( \exp(a_2^i C^i) - 1 \right) \right], \quad i = 1 \text{ to } 4 \]  (4)
Equation 4 gives a convenient three-parameter model for defect coverage in terms of a measurable test coverage metric. Equation 4 is applicable only for \( C^0 \leq 1 \).

Figure 1 plots the relationship of defect coverage versus test coverage, as given by Equation 4. The overall curve is nonlinear, although the initial segment may not be observed in small programs because even a single test execution may provide close to 50% enumerable coverage. The location of the knee of the curve depends on the initial defect density [mal98]. As we can see from Figure 1, the curve can be approximated by a linear plot when coverage \( C^i \) exceeds a knee in the curve. This knee value is termed \( C^i_{\text{knee}} \). We can see that Equation 4 will result in a linear expression when \( a_1^i \exp(a_2^i C^i) \gg 1 \) and when \( \exp(a_2^i C^i) \gg 1 \). Analysis of actual data in the next section suggests that \( a_1^i \ll 1 \); thus \( a_1^i \exp(a_2^i C^i) \gg 1 \) implies \( \exp(a_2^i C^i) \gg 1 \). The knee at \( C^i_{\text{knee}} \) is influenced by the initial defect density [mal98]. A low initial defect density may mean that easy-to-find defects have already been found and removed in the past. Then one would start finding new defects only when test coverage is sufficiently high. For \( C^i > C^i_{\text{knee}} \), a linear approximation for \( C^0 \) can be given as:
\[ C^0 \approx a_0^i \ln\left( a_1^i \exp(a_2^i C^i) \right) = -A^i + B^i C^i, \quad C^i > C^i_{\text{knee}} \]  (5)
where \( A^i \) and \( B^i \) are the parameters for the linear approximation. Note that full test coverage of an enumerable does not imply full defect coverage. Full statement coverage may be reached before full branch coverage because of the subsumption hierarchy.

Figure 1: Defect coverage vs. test coverage (defect coverage is approximately linear in enumerable test coverage beyond the knee; the initial segment may not be observed in small programs)

6 Analysis of Data

We have fitted the proposed model, as given by Equations 2 and 4, using four data sets listed in Table 1. The first data set, DS1, is from a multiple-version automatic airplane landing system [lyu93]. It was collected using the ATAC tool developed at Bellcore. The twelve versions have a total of 30,694 lines of code. The data used is for the integration and acceptance test phases, where 66 defects were found. One additional defect was found during operational testing.
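For illustration, the three-parameter relation of Equation 4 can be evaluated directly; the sketch below uses made-up parameters (with $a_1^i \ll 1$, as the data analysis suggests) purely to show the knee behaviour of the defect-coverage curve.

```c
/* Minimal sketch of Equation 4: predicted defect coverage C0 as a function of a
 * measured test coverage Ci. The parameters are made up to illustrate the knee,
 * not fitted values from the paper. */
#include <math.h>
#include <stdio.h>

static double defect_coverage(double a0, double a1, double a2, double ci) {
    double c0 = a0 * log(1.0 + a1 * (exp(a2 * ci) - 1.0));
    return c0 > 1.0 ? 1.0 : c0;   /* Equation 4 only applies for C0 <= 1 */
}

int main(void) {
    double a0 = 0.15, a1 = 1e-3, a2 = 12.0;   /* illustrative only (note a1 << 1) */
    for (double ci = 0.0; ci <= 1.0; ci += 0.1)
        printf("test coverage %.1f -> defect coverage %.3f\n",
               ci, defect_coverage(a0, a1, a2, ci));
    return 0;
}
```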
The next three data sets, DS2, DS3, and DS4, are from a NASA supported project implementing sensor management in an inertial navigation system [vou92]. As an example, the data set DS3 is reproduced in Table 2.

Table 1: Data Sets Used

<table>
<thead>
<tr><th>DataSet</th><th>KLOC</th><th>#Tests</th><th>Defects</th><th>Tool</th></tr>
</thead>
<tbody>
<tr><td>DS1 (lyu93)</td><td>30</td><td>21k</td><td>66</td><td>ATAC</td></tr>
<tr><td>DS2 (vou92)</td><td>5</td><td>1196</td><td>9</td><td>BCG<sup>1</sup></td></tr>
<tr><td>DS3 (vou92)</td><td>5</td><td>796</td><td>9</td><td>BCG<sup>1</sup></td></tr>
<tr><td>DS4 (vou92)</td><td>5</td><td>796</td><td>7</td><td>BCG<sup>1</sup></td></tr>
</tbody>
</table>

Notes: 1. internal tool; 2. limited data points; 3. evolving program.

Table 2: Coverage Data for DS3, a NASA project on sensor management in inertial navigation [vou92] (integration/acceptance test phase: 9 faults found with 796 tests)

<table>
<thead>
<tr><th>Cumulative Faults</th><th>Number of Test Cases</th><th>% Coverage (blocks)</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>1</td><td>57.01</td></tr>
<tr><td>2</td><td>2</td><td>58.50</td></tr>
<tr><td>3</td><td>4</td><td>61.30</td></tr>
<tr><td>4</td><td>10</td><td>69.39</td></tr>
<tr><td>5</td><td>20</td><td>77.80</td></tr>
<tr><td>6</td><td>30</td><td>85.61</td></tr>
<tr><td>7</td><td>44</td><td>87.00</td></tr>
<tr><td>8</td><td>114</td><td>92.40</td></tr>
<tr><td>9</td><td>160</td><td>93.50</td></tr>
<tr><td>9</td><td>796</td><td>95.99</td></tr>
</tbody>
</table>

The results for data set DS1 are summarized in Table 3. The first row gives the total number of enumerables for all versions. The second row gives the average coverage when 21,000 tests had been applied. The values of the estimated parameters \( b_0 \) and \( b_1 \) and the least square error (LSE) are given in the rows below. Table 4 summarizes the results for DS2. Nine faults were revealed by the application of 1196 tests; we assume that one fault (i.e. 10%) is still undetected.

Figure 2 shows actual and computed values of fault coverage for data sets DS2, DS3 and DS4. The computed values have been obtained using branch coverage and Equation 4. Note that the knee occurs at different branch coverage values. For data set DS2 (shown by a solid line), at 50% branch coverage the fault coverage is still quite low (about 10%); however, with only 84% branch coverage, 90% fault coverage is obtained. Note that the plots in Figures 2 and 4 assume that in each case one fault is still undetected. In practice, estimating the number of remaining defects is a major challenge that needs further investigation.

Table 5 presents the results for DS3, which involves 796 test cases. The values of the parameters obtained can be compared with the values for DS2 presented in Table 4. The coverage growth of the different enumerables is plotted in Figure 3. Figure 4 plots actual and model defect coverage values against branch coverage for DS3.
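To make the fitting procedure behind the summary tables concrete, the sketch below fits the general form of Equation 2 to the coverage column of Table 2 (block coverage) by least squares. This is only an illustrative reconstruction of one step; the paper's own parameter estimates were obtained with its own fitting procedure, and the resulting values need not match those reported below.

```python
import numpy as np
from scipy.optimize import curve_fit

def coverage_model(t, b0, b1):
    """General form of Equation 2: C(t) = b0 * ln(1 + b1 * t)."""
    return b0 * np.log(1.0 + b1 * t)

# Test-case counts and coverage (as fractions) from Table 2 (DS3).
tests    = np.array([1, 2, 4, 10, 20, 30, 44, 114, 160, 796])
coverage = np.array([57.01, 58.50, 61.30, 69.39, 77.80,
                     85.61, 87.00, 92.40, 93.50, 95.99]) / 100.0

(b0, b1), _ = curve_fit(coverage_model, tests, coverage, p0=[0.07, 1000.0])
print("b0=%.3f  b1=%.1f" % (b0, b1))
```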
Table 3: Summary table for DS1 (total 21,000 tests applied)

<table>
<thead>
<tr><th></th><th>Blocks (i=1)</th><th>Decisions (i=2)</th><th>c-uses (i=3)</th><th>p-uses (i=4)</th><th>Defects (i=0)</th></tr>
</thead>
<tbody>
<tr><td>Total enum.</td><td>6977</td><td>3524</td><td>8851</td><td>4910</td><td></td></tr>
<tr><td>Final cov.</td><td>91.8%</td><td>83.9%</td><td>91.7%</td><td>73.5%</td><td></td></tr>
<tr><td>$b_0$</td><td>0.031</td><td>0.049</td><td>0.036</td><td>0.041</td><td></td></tr>
<tr><td>$b_1$</td><td>2E+8</td><td>1234</td><td>3.4E+6</td><td>2439</td><td></td></tr>
<tr><td>LSE</td><td>5.7E-4</td><td>3.5E-5</td><td>5.8E-4</td><td>8.1E-5</td><td></td></tr>
</tbody>
</table>

Table 4: Summary table for DS2

<table>
<thead>
<tr><th></th><th>Blocks</th><th>Branches</th><th>c-uses</th><th>p-uses</th><th>Defects</th></tr>
</thead>
<tbody>
<tr><td>Final cov.</td><td>89%</td><td>84%</td><td>76%</td><td>61%</td><td>90%</td></tr>
<tr><td>$b_0$</td><td>0.032</td><td>0.060</td><td>0.034</td><td>0.039</td><td>0.166</td></tr>
<tr><td>$b_1$</td><td>2E+8</td><td>870</td><td>3E+7</td><td>2500</td><td>0.11</td></tr>
<tr><td>LSE</td><td>0.02</td><td>6.2E-4</td><td>3.5E-3</td><td>4.9E-3</td><td>0.025</td></tr>
</tbody>
</table>

Table 5: Summary table for DS3 (796 test cases)

<table>
<thead>
<tr><th></th><th>Blocks</th><th>Branches</th><th>C-uses</th><th>P-uses</th><th>Defects</th></tr>
</thead>
<tbody>
<tr><td>Final cov.</td><td>96%</td><td>94%</td><td>92%</td><td>85%</td><td>90%</td></tr>
<tr><td>$b_0$</td><td>0.07</td><td>0.074</td><td>0.044</td><td>0.079</td><td>0.139</td></tr>
<tr><td>$b_1$</td><td>2725</td><td>870</td><td>6.6E6</td><td>86</td><td>2.03</td></tr>
<tr><td>LSE</td><td>0.015</td><td>0.01</td><td>0.008</td><td>0.002</td><td>0.038</td></tr>
<tr><td>$a_0$</td><td>0.139</td><td>0.139</td><td>0.14</td><td>0.189</td><td></td></tr>
<tr><td>$a_1$</td><td>7E-4</td><td>2.4E-3</td><td>9E-7</td><td>0.042</td><td></td></tr>
<tr><td>$a_2$</td><td>14.13</td><td>13.14</td><td>21.46</td><td>9.88</td><td></td></tr>
<tr><td>LSE</td><td>0.023</td><td>0.014</td><td>0.04</td><td>0.023</td><td></td></tr>
</tbody>
</table>

Table 6 summarizes the results for DS4. Figure 5 illustrates the correlation of the other test coverage measures $C^2$, $C^3$ and $C^4$ with block coverage $C^1$. As we expect, branch coverage, and to a lesser extent p-use coverage, are both strongly correlated with block coverage. The correlation with c-use coverage is weaker.
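Assuming the row labels reconstructed above, the branch-coverage parameters for DS3 can be plugged into Equation 4 to predict fault coverage at a given branch coverage. The sketch below does this for a few coverage values; it illustrates how the fitted parameters are used, and is not a reproduction of the paper's figures.

```python
import numpy as np

# a-parameters for DS3, branch coverage (i = 2), as reconstructed in Table 5.
a0, a1, a2 = 0.139, 2.4e-3, 13.14

def predicted_fault_coverage(branch_cov):
    """Equation 4 evaluated for the branch-coverage metric."""
    return a0 * np.log(1.0 + a1 * (np.exp(a2 * branch_cov) - 1.0))

for c in (0.5, 0.7, 0.84, 0.94):
    print("branch coverage %.2f -> predicted fault coverage %.2f"
          % (c, predicted_fault_coverage(c)))
```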
Figure 2: Actual and fitted values of defect coverage for DS2, DS3 and DS4

Figure 3: Coverage Growth of Different Enumerables (DS3)

Figure 4: Fault Coverage & Relative Defect Density (DS3)

Table 6: Summary table for DS4 (796 test cases)

<table>
<thead>
<tr><th></th><th>Blocks</th><th>Branches</th><th>C-uses</th><th>P-uses</th><th>Defects</th></tr>
</thead>
<tbody>
<tr><td>Final cov.</td><td>94%</td><td>93%</td><td>94%</td><td>87%</td><td>90%</td></tr>
<tr><td>$b_0$</td><td>0.063</td><td>0.072</td><td>0.051</td><td>0.077</td><td>0.116</td></tr>
<tr><td>$b_1$</td><td>9759</td><td>1400</td><td>4.4E5</td><td>214</td><td>3.78</td></tr>
<tr><td>LSE</td><td>0.013</td><td>0.017</td><td>0.012</td><td>0.011</td><td>0.01</td></tr>
<tr><td>$a_0$</td><td>0.116</td><td>0.116</td><td>0.11</td><td>0.116</td><td></td></tr>
<tr><td>$a_1$</td><td>6E-4</td><td>3.8E-3</td><td>1E-5</td><td>0.017</td><td></td></tr>
<tr><td>$a_2$</td><td>15.23</td><td>13.4</td><td>19.20</td><td>12.95</td><td></td></tr>
<tr><td>LSE</td><td>0.022</td><td>0.022</td><td>0.04</td><td>0.01</td><td></td></tr>
</tbody>
</table>

Figure 5: Plot of $C^2$, $C^3$ and $C^4$ against $C^1$ (DS4)

7 Defect Density and Reliability

Here we consider the failure intensity during the operational period. We assume that debugging stops at a time $t_f$ and no further changes are made to the program. After time $t_f$, the defects remaining are not removed; thus the failure intensity $\lambda$ no longer depends on time. Since the failure intensity is proportional to the number of defects [mvs93], we have

$$\lambda(t_f) = \frac{K}{T_L} N^0(t_f) \quad (6)$$

where $K$ is the overall value of the fault exposure ratio and $T_L$ is the linear execution time of the program. Musa et al. have found that the value of $K$ ranges between $1 \times 10^{-7}$ and $7.5 \times 10^{-7}$ failures/fault for several data sets examined [mus87]. The value of $K$ does not depend on the program size, but can depend on the defect distribution in the program and the testing approach [mvs93].

During testing and debugging, the faults found are removed. If we assume that no new faults are introduced during this process, the number of defects remaining at $t_f$ can be computed as

$$N^0(t_f) = N_0^0 \left(1 - C^0(t_f)\right)$$

It should be noted that in actual practice debugging may be imperfect [ohb89]. Substituting for $C^0$ using Equation 4,

$$N^0(t_f) = N_0^0 \left(1 - a_0^i \ln\!\left[1 + a_1^i \left(\exp(a_2^i C^i(t_f)) - 1\right)\right]\right)$$

Hence, the expected duration between successive failures can be obtained as

$$\frac{1}{\lambda(t_f)} = \frac{T_L}{K N_0^0 \left(1 - a_0^i \ln\!\left[1 + a_1^i \left(\exp(a_2^i C^i(t_f)) - 1\right)\right]\right)}$$

Equation 6 can also be used for the operational period with the appropriate value for the fault exposure ratio. Notice that $K$ will depend on the operational profile encountered during the operational period [mus99].

8 Future Work

Further experimental and theoretical research is needed to validate the model proposed in this paper. Analysis of additional data sets will provide further insight into the problem. Here we have evaluated the values of the parameters $a_0^i$, $a_1^i$, $a_2^i$ by curve fitting. It would be useful to be able to obtain initial estimates of the parameter values using empirical methods; that would involve interpretation of the parameters of the logarithmic model [mvs93, mald97]. Estimation of the number of remaining defects is another problem that needs further investigation.

9 Acknowledgement

The work by Y.K. Malaiya and N. Li was partly supported by a BMDO funded project monitored by ONR. J.
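As an illustration of how Equation 6 and the coverage model combine, the sketch below computes the expected mean time between failures from a measured test coverage value. All numeric inputs (K, T_L, the initial defect count and the a-parameters) are hypothetical placeholders, not values taken from the data sets above.

```python
import numpy as np

def mtbf_from_coverage(c_i, a0, a1, a2, n0_initial, K, T_L):
    """Expected duration between successive failures, 1/lambda(t_f).

    c_i        -- measured test coverage (e.g. branch coverage) at t_f
    a0,a1,a2   -- parameters of Equation 4 for that coverage metric
    n0_initial -- initial number of defects, N_0^0
    K          -- fault exposure ratio (failures/fault)
    T_L        -- linear execution time of the program
    """
    defect_cov = a0 * np.log(1.0 + a1 * (np.exp(a2 * c_i) - 1.0))
    remaining = n0_initial * (1.0 - defect_cov)   # defects still in the program
    return T_L / (K * remaining)

# Hypothetical example values.
print(mtbf_from_coverage(c_i=0.85, a0=0.12, a1=1e-3, a2=14.0,
                         n0_initial=100, K=4e-7, T_L=1.0))
```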
Bieman's work was supported in part by NSF, the NASA Langley Research Center, the Colorado Advanced Software Institute, Storage Technology Inc. and Micro-Motion Inc. We would like to thank Mladen Vouk for providing us with some of the data sets, and Alberto Pasquini, Bob Horgan, Aditya Mathur and Bob Skibbe for discussions on this subject.

References

[ram85] J. Ramsey and V.R. Basili, "Analyzing the Test Process Using Structural Coverage", Proc. 8th Int. Conf. on Software Engineering, August 1985, pp. 306-312.

[rap85] S. Rapps and E.J. Weyuker, "Selecting Software Test Data Using Data Flow In-

[vou92] M.A. Vouk, "Using Reliability Models During Testing With Non-operational Profiles," Proc. 2nd Bellcore/Purdue workshop on issues in Software Reliability Esti-
{"Source-Url": "http://www.cs.colostate.edu/~bieman/Pubs/Malaiya-etal02.pdf", "len_cl100k_base": 7012, "olmocr-version": "0.1.50", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 37418, "total-output-tokens": 9100, "length": "2e12", "weborganizer": {"__label__adult": 0.0003991127014160156, "__label__art_design": 0.0003266334533691406, "__label__crime_law": 0.0003464221954345703, "__label__education_jobs": 0.0005965232849121094, "__label__entertainment": 7.30752944946289e-05, "__label__fashion_beauty": 0.00017690658569335938, "__label__finance_business": 0.00025653839111328125, "__label__food_dining": 0.00038743019104003906, "__label__games": 0.0007309913635253906, "__label__hardware": 0.0015287399291992188, "__label__health": 0.0006532669067382812, "__label__history": 0.00024366378784179688, "__label__home_hobbies": 9.894371032714844e-05, "__label__industrial": 0.00042510032653808594, "__label__literature": 0.00034356117248535156, "__label__politics": 0.00019156932830810547, "__label__religion": 0.0004565715789794922, "__label__science_tech": 0.0443115234375, "__label__social_life": 9.518861770629884e-05, "__label__software": 0.006999969482421875, "__label__software_dev": 0.9404296875, "__label__sports_fitness": 0.00031876564025878906, "__label__transportation": 0.0005121231079101562, "__label__travel": 0.00019097328186035156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27145, 0.07946]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27145, 0.28901]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27145, 0.87902]], "google_gemma-3-12b-it_contains_pii": [[0, 1281, false], [1281, 3332, null], [3332, 5720, null], [5720, 7381, null], [7381, 9534, null], [9534, 11348, null], [11348, 13170, null], [13170, 13320, null], [13320, 15233, null], [15233, 17409, null], [17409, 18919, null], [18919, 19052, null], [19052, 19829, null], [19829, 19878, null], [19878, 21517, null], [21517, 23653, null], [23653, 26622, null], [26622, 27145, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1281, true], [1281, 3332, null], [3332, 5720, null], [5720, 7381, null], [7381, 9534, null], [9534, 11348, null], [11348, 13170, null], [13170, 13320, null], [13320, 15233, null], [15233, 17409, null], [17409, 18919, null], [18919, 19052, null], [19052, 19829, null], [19829, 19878, null], [19878, 21517, null], [21517, 23653, null], [23653, 26622, null], [26622, 27145, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27145, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27145, null]], "pdf_page_numbers": [[0, 1281, 1], [1281, 3332, 2], [3332, 5720, 3], [5720, 7381, 4], [7381, 9534, 5], 
[9534, 11348, 6], [11348, 13170, 7], [13170, 13320, 8], [13320, 15233, 9], [15233, 17409, 10], [17409, 18919, 11], [18919, 19052, 12], [19052, 19829, 13], [19829, 19878, 14], [19878, 21517, 15], [21517, 23653, 16], [23653, 26622, 17], [26622, 27145, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27145, 0.25]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
00900004003364e49ce3c2285c074cf4c3517518
The design and prototype implementation of a deductive processor for efficient extraction of implicit information from explicit data stored within a relational data-base system is described. General statements (premises or inference rules) as well as queries are expressed in a canonical form as implications. From user queries, the system constructs skeletal derivations (proof plans) through the use of a predicate connection structure representing possible deductive interactions among the general statements. The system incorporates techniques for rapid selection of small sets of relevant premises (by proof planning); development and elaboration of proof plans; proof plan verification; use of proof plans as a basis for determining data-base access strategies; and instantiation of plans (i.e., turning proof plans into proofs) with retrieved data-base values. Examples of the current capability of the system are illustrated.

INTRODUCTION

The deductive processor (DP) described in this paper has been designed to interface with existing and emerging relational data management systems (RDMSs). Given this orientation, we have made a sharp distinction between specific facts (n-tuples), which reside in an RDMS data base, and general statements (rule-based knowledge or premises), which are directly accessible to the DP. Since the number of general statements that may be required for a practical application is likely to be large (perhaps hundreds to thousands of premises), particular attention has been paid to the development of techniques for the rapid selection of relatively small sets of premises relevant to answering a user's specific request. Premise-selection techniques are automatically invoked when deductive support is necessary to respond to a user's request; otherwise, queries "fall through" the DP and directly drive the RDMS. This "deductive inference by exception" principle suggests that the DP be viewed as an add-on or enhancement to existing data-base searching capabilities. Such an enhancement can result in a major increase in the power of a data management system by providing a means for extracting and deriving implicit information from data bases of explicit facts. Further, as we shall see, the DP can aid a user in evaluating the utility and/or plausibility of an inferred assumption by displaying the evidence on which the answer is based. We briefly review some of the relevant work in the field of deductive question answering, outline our approach, describe the several components of our prototype DP, and illustrate by means of two examples the current operation of the system.

APPROACH

Previous approaches to adding deductive capabilities to data management have occurred primarily in the development of question-answering systems (Simmons, 1976). The deductive methods that have been used in early systems include set-inclusion logic (e.g., CONVERSE and SYNTHESIS), techniques based on the "resolution" principle (e.g., QA3 and MAPPER), procedurally oriented deduction (e.g., SHREDLIB), and goal-oriented backward chaining (e.g., MYCIN). The primary difference between these systems and our DP is in our use of planning. Our system creates deduction plans to guide the generation of full deductions. We believe such planning to be essential for cutting through the massive number of dead ends and irrelevant inferences which have impaired the performance of earlier systems. Planning becomes even more important for systems involving large numbers of premises.
Selection of a manageably small set of possibly relevant premises can be based on such planning. To this end we have designed and implemented a deductive processor that first builds derivation skeletons which represent possible deduction plans. Once such plans are generated, the system will attempt to instantiate and verify the plans (examine substitutions for variables in premises). We have thus separated the premise-selection process from the process of verifying the consistency of variable substitutions.

The generation of derivation (proof) plans is centered around middle-term chaining. This process finds implication chains from assumptions to goals through the premises. Middle-term chaining combines the processes of forward chaining from the assumptions in a query and backward chaining from the goals in a query. (In the case of no query assumptions, middle-term chaining defaults to backward chaining.) As chaining proceeds in the two directions, intersections are performed on the derived sets. When a non-empty intersection occurs, the system has found an implication chain from an assumption to a goal. The resulting chain is passed on to the proof plan generator, which extracts the premises whose occurrences are involved in the chain. Subproblems may result, requiring further deduction or data-base search. The examples presented below will illustrate these processes.

The chaining process does not operate on the premises themselves but on a net structure called the predicate connection graph (PCG). This graph is abstracted from the premises. When a premise is introduced into the system, the implication connections existing among the predicate occurrences in the premise are encoded into the PCG. Further, the deductive interactions (i.e., unifications) between predicate occurrences in the new premise and predicate occurrences in existing premises are pre-computed and encoded into the PCG. The variable substitutions required to effect the unifications are stored elsewhere, for later use by the proof plan verifier. Thus, the PCG contains information on the implications within premises and the deductive interactions among the premises. During the generation of middle-term chains and proof plans, the system is aware of the existence of unifications among the premises, but it does not need to generate the unifications, nor does it need to examine and combine the variable substitutions associated with the interfacing unifications. The former is done by a pre-processor, while the latter is done by the verifier after proof planning. Although some connection graphs used in theorem-proving systems also contain information on the unifications among general assertions (resolution clauses in these systems), they are not used as a planning tool as is the PCG. The PCG most resembles Sickel's clause interconnectivity graph in that both graphs represent the initial deductive search space and are not changed in the course of constructing deductions. Other graph procedures involve adding nodes to graphs as deductions are formed. More detail on the PCG is given in Klahr.

REPRESENTATION OF INFORMATION

The basic representation for general assertions (premises) is the primitive conditional. This form is a normalized first-order predicate-calculus implication statement. The antecedent of the implication contains the assumptions (conditions) of the assertion; the consequent contains the goal of the assertion. Conjunctions, disjunctions, and negations can occur on either side of the implication.
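To make the middle-term chaining idea concrete, the following Python sketch performs a bidirectional breadth-first search over a toy implication structure: it expands a wave front forward from a query assumption and backward from a query goal until the two fronts intersect. The graph, predicate names and search details are hypothetical simplifications of the PCG described above, not the system's actual LISP data structures.

```python
from collections import deque

# Toy implication links between predicates, abstracted from premises:
# an edge P -> Q means some premise allows P to imply Q.
IMPLIES = {
    "DAMAGED": ["RETURNS"],
    "RETURNS": ["OFFLOAD"],
    "OFFLOAD": ["TRANSPORT"],
    "DESTINATION": ["TRANSPORT"],
}
IMPLIED_BY = {}
for src, dsts in IMPLIES.items():
    for dst in dsts:
        IMPLIED_BY.setdefault(dst, []).append(src)

def middle_term_chain(assumption, goal):
    """Expand wave fronts from both ends until they intersect."""
    forward = {assumption: [assumption]}   # predicate -> chain found so far
    backward = {goal: [goal]}
    fq, bq = deque([assumption]), deque([goal])
    while fq or bq:
        if fq:
            p = fq.popleft()
            for q in IMPLIES.get(p, []):
                if q not in forward:
                    forward[q] = forward[p] + [q]
                    if q in backward:               # wave fronts intersect
                        return forward[q] + backward[q][-2::-1]
                    fq.append(q)
        if bq:
            p = bq.popleft()
            for q in IMPLIED_BY.get(p, []):
                if q not in backward:
                    backward[q] = backward[p] + [q]
                    if q in forward:
                        return forward[q] + backward[q][-2::-1]
                    bq.append(q)
    return None

print(middle_term_chain("DAMAGED", "TRANSPORT"))
# ['DAMAGED', 'RETURNS', 'OFFLOAD', 'TRANSPORT']
```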
Each assumption and goal is a predicate occurrence consisting of a predicate (relation) and its argument terms (i.e., variables, constants, or functions). The primitive conditional was chosen because general assertions are usually formulated in the form of "if...then..." implications. Users can easily express and understand general assertions in this form and can easily control and understand proofs involving them. Further, this form facilitates system discovery of deductive implication chains.

Variables and constants occurring in premises and queries may be categorized into specific domain classes. For example, a variable "x" might be specified as being a LABORATORY and the constant "Joe" as being a SCIENTIST. In attempting to match argument strings involving these terms, the system will not allow the substitution of Joe for x because they belong to different domains. The use of such semantic information eliminates certain deductive interactions among the premises and thus reduces the search space of possible deductions.

Semantic information in the form of user-supplied advice can also be given to the system. Advice most typically involves recommendations on the use of particular premises or predicates in finding deductions. For advised premises, the system will try using them whenever possible in the course of constructing a proof. For advised predicates, the system will try chaining through occurrences of them (in premises). In the case of negative advice, specified premises and predicates are avoided in proofs. Advice may be given for a particular input (query) or stored in a permanent advice file which the system accesses for each query. Advice statements are in the form of condition-recommendation rules similar to the meta-rules used in MYCIN. The conditions contain information about predicates, constants, and domain classes that may occur in query assumptions and goals. The conditions are matched against the input query and, if they are satisfied, the associated recommendations about the use of certain premises and predicates are activated. Internally, advice is transformed into premise and predicate alert lists (as well as negative alert lists for negative advice), which are accessed in the chaining and proof-planning processes.

In addition to the information used by the deductive processor, there is also a file of specific facts used by a data management system. This latter system searches for and retrieves specific facts needed to resolve subproblems resulting from premises. For our experiments with the prototype deductive processor, we have written a small LISP relational data-base management system. Facts are stored relationally as n-tuples associated with a predicate (relation) name. When a particular predicate occurrence becomes a subproblem, the system has three alternative methods for resolving it; the decision is based on how the user defined the various predicates known to the system. If a predicate is defined computationally by a procedure, the procedure is executed to determine the predicate's truth value. If a predicate is specified by the user as defined primarily by its data-base values, the unresolved predicate is left for data-base search. Otherwise, an unresolved predicate occurrence is given further deductive support through the premises. (Such predicate classification is currently mutually exclusive but need not be. An alternative control structure could try several methods for resolving each subgoal.)
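The following sketch illustrates, in simplified form, two of the mechanisms just described: domain-class checking when a constant is substituted for a variable, and the per-predicate choice among computing, data-base search and deduction as the method of resolving a subgoal. The class names and the SUM-GREATER predicate are hypothetical stand-ins; HOME-PORT and CLOSER-THAN correspond to predicates used in the examples later in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Variable:
    name: str
    domain: str          # e.g. "LABORATORY", "SHIP"

@dataclass(frozen=True)
class Constant:
    value: str
    domain: str          # e.g. "SCIENTIST"

def substitutable(var: Variable, const: Constant) -> bool:
    """A constant may replace a variable only within the same domain class."""
    return var.domain == const.domain

# How each predicate is resolved when it occurs as a subgoal.
SUPPORT = {
    "SUM-GREATER": "compute",   # defined by a procedure (hypothetical predicate)
    "HOME-PORT":   "search",    # defined by data-base values
    "CLOSER-THAN": "deduce",    # given further deductive support via premises
}

x = Variable("x", "LABORATORY")
joe = Constant("Joe", "SCIENTIST")
print(substitutable(x, joe))    # False: different domain classes
print(SUPPORT["HOME-PORT"])     # 'search'
```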
The examples below will show the interface between the deductive processor and the data management system, as well as examples of procedurally defined predicates.

Figure 1. Deductive Processor Components

SYSTEM COMPONENTS

Figure 1 displays the various components of the deductive processor as well as its position in a deductive data management system. The language processor is currently not a part of our initial prototype environment but will be incorporated at a later date. The control processor shown in Figure 1 currently accepts premises and queries in primitive conditional form as well as user advice and commands. It accesses and coordinates the several system components described below.

**Array Initialization and Maintenance**

Information abstracted from the premises is segmented into seven internal arrays. This segmentation contributes to good system structuring and increases processing efficiency. Each predicate occurrence is assigned a unique integer index. Information about a particular predicate occurrence is obtained by indexing into the array containing the kind of information needed, using the integer associated with the occurrence. The seven arrays are listed below; a small sketch of the corresponding data structures follows the list.

- Premise Array: Each entry represents a premise and contains a list of the occurrences (i.e., occurrence indices) in the premise, the plausibility of the premise, and the premise itself, both symbolic (primitive conditional form) and English, for purposes of display.
- Predicate Array: This array contains the relations known to the system. Associated with each relation is its support indicator, i.e., the method used to resolve the relation when it occurs as a subgoal (deduce, search data base, compute).
- Predicate Occurrence Array: Each entry represents a predicate occurrence and contains the following information about the occurrence: its predicate name (index into the predicate array), the premise in which it occurs (index into the premise array), the sign of the occurrence (positive or negative), whether the occurrence is an antecedent or consequent of a primitive conditional, the main connective governing the occurrence (i.e., conjunction or disjunction), and the numerical position of the occurrence within the premise. The information is compactly stored in a single one-word bit vector.
- Arguments Array: The argument strings of the predicate occurrences are stored in this array in a one-to-one correspondence to the positions of the occurrences in the predicate occurrence array.
- Links Array: Deductive dependencies within premises are stored in this array. Basically, these dependencies derive from implication connections among predicate occurrences within premises (Klahr). This array is also indexed by occurrence integers. For each occurrence, a list of the occurrences it implies is stored in the entry corresponding to the occurrence's index.
- Unifications Array: Each entry contains a list of the unifications (deductive interactions) associated with the given occurrence. The unifications array and the links array comprise the predicate connection graph.
- Variable-Substitutions Array: The substitution lists associated with unifications are stored in a one-to-one correspondence to the position of the unifications in the unifications array.
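A minimal sketch of how a few of these arrays might be represented, using parallel Python lists indexed by occurrence number. The premise and field layout are hypothetical simplifications; in particular, the real system packs the occurrence fields into a one-word bit vector rather than a dictionary.

```python
# Parallel arrays indexed by predicate-occurrence number (0, 1, 2, ...).
premise_array = [
    {"occurrences": [0, 1], "plausibility": 95,
     "symbolic": "DAMAGED(x) -> RETURNS(x, HOME-PORT(x))"},   # hypothetical premise
]

predicate_array = {
    "DAMAGED": "search",      # support indicator: resolve via data-base search
    "RETURNS": "deduce",
}

# One entry per occurrence: predicate, owning premise, antecedent/consequent, sign.
occurrence_array = [
    {"predicate": "DAMAGED", "premise": 0, "side": "antecedent", "sign": "+"},
    {"predicate": "RETURNS", "premise": 0, "side": "consequent", "sign": "+"},
]

arguments_array = [["x"], ["x", "HOME-PORT(x)"]]

# Links array: for each occurrence, the occurrences it implies within its premise.
links_array = [[1], []]

# Unifications and their substitutions would interlink occurrences across premises.
unifications_array = [[], []]
substitutions_array = [[], []]

print(occurrence_array[links_array[0][0]]["predicate"])   # RETURNS
```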
**Middle-Term Chain Generator**

Each input query is broken down (based on the logical connectives in the query) into sets of assumptions (from query antecedents) and goals (from query consequents). The predicate connection graph is used to find deductive implication chains between assumptions and goals. "Wave fronts" are expanded out of assumptions and out of goals until an intersection is found, at which point the middle-term chain is identified and extracted.

**Proof Plan Generator**

For each middle-term chain generated, the system extracts the premises whose occurrences are part of the chain. Any subgoals resulting from the premises are set up as requiring deductive support through the premises, data-base search, or procedural computation. Subgoals are added to a proof proposal tree, which contains proof plans as they are being formed and developed. Proof plans having no remaining deduce subgoals are then passed on to the verifier.

**Proof Plan Verifier**

The variable substitutions required by the unifications in a proof plan are examined for consistency. If there are no clashes, i.e., no variable taking on more than one distinct constant value, then verification is successful. If there are any remaining subgoals requiring data-base support, the data management system is called to search the file of specific facts.

**Display Processor**

The user has a wide variety of display options available to monitor the operation of the deductive system. In particular, he can examine middle-term chains generated, proof plans formed, subgoals, proof plan verification, data-base search requests, data-base values returned, answers, completed proofs, and premises used in proofs.
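A minimal sketch of the substitution-consistency check performed by the Proof Plan Verifier: the substitutions required by all unifications in a plan are merged, and the plan is rejected if any variable would have to take two distinct constant values. This is a simplified, hypothetical rendering of that step, not the system's LISP implementation.

```python
def verify_plan(unification_substitutions):
    """Merge the substitution lists of a proof plan; fail on a clash.

    unification_substitutions -- list of {variable: constant} dicts,
    one per unification used in the plan.
    """
    merged = {}
    for subst in unification_substitutions:
        for var, value in subst.items():
            if var in merged and merged[var] != value:
                return None                      # clash: plan cannot be verified
            merged[var] = value
    return merged

# A plan whose unifications agree on x, and one that clashes.
print(verify_plan([{"x": "Forrestal"}, {"x": "Forrestal", "y": "San Diego"}]))
print(verify_plan([{"x": "Forrestal"}, {"x": "Gridley"}]))   # None
```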
**Computer Examples**

In Figures 2 and 3 we illustrate examples of the current operation of our initial DP prototype interfaced to a small RDMS. (Both DP and RDMS are written in LISP 1.5 and operate on an IBM 370/158 computer.) In the first example, we illustrate the generation of short inference and search/compute plans for the question, "What ships are closer to the Kittyhawk's home port than the Kittyhawk is?" The query is first shown in English and then in the primitive conditional symbolic form that our prototype currently recognizes. The query is expressed in terms of a conjunctive goal composed of the predicates CLOSER-THAN and HOME-PORT. Constants (e.g., Kittyhawk) are specified by being enclosed in parentheses, while variables (e.g., x and y) are not. One of the query goals (HOME-PORT) is to be given data-base support, i.e., it has been characterized as defined by data-base values, while the other goal (CLOSER-THAN) is to be deduced. Since the antecedent in the query is empty, middle-term chaining defaults to backward chaining. The system back-chains from CLOSER-THAN through premise 29. The plausibility (similar to certainty factors in MYCIN) of the plan in this case is simply the plausibility of the single premise used. Two new search requests (in addition to HOME-PORT) result from premise 29, as well as a compute relation containing functional arguments. Computations for the functions and the relation are delayed until values for the variables x and y have been found in the data base (i.e., values which satisfy the search requests). The system sends the three search requests to the RDMS, which finds two ships, the Forrestal and the Gridley, that are closer to the Kittyhawk's home port (San Diego) than the Kittyhawk is. The system then displays the proof that led to the first answer (the Forrestal).

A proof using the other answer would be identical to this one except that Gridley would replace Forrestal in the proof, and the distance between the Gridley and San Diego would replace 310 (the distance between the Forrestal and San Diego). The symbols G2, G3, etc., represent nodes in the proof proposal tree and are used here for reference. G2 and G3 represent the original goals, as also shown in the inference plan. G5, G6, and G7 are subgoals that resulted from premise 29, which was used to deduce G2. Thus, these three subgoals are indented below G2.

The middle-term-chaining and proof-planning processes are more evident in the example in Figure 3. The input query contains two assumptions (DAMAGED and DESTINATION) and one goal (TRANSPORT). Taurus and NY are constants; Cargo and x are variables. The query asks the system to find values for x that satisfy the query. The variable x is restricted to range over ships. (This is an example of a domain class specification for a variable. Such domain specifications could also have been used in the previous example.) In the course of developing deductions, the system will not allow values to be substituted for x that belong to domain classes other than ships. The inference plan shown in Figure 3 has already been verified. To see the planning mechanism more clearly, we will refer to Figure 4. The first middle-term chain generated connects the DESTINATION assumption to the TRANSPORT goal via premise 23. This is shown by the unifications u1 and u2 in Figure 4. The predicate occurrences involving the relations AVAILABLE and OFFLOAD become subproblems. The former is to be given data-base support; the latter is resolved by a middle-term chain from the DAMAGED assumption through premises 7 and 15. The chain is shown in Figure 4 by the unifications u3, u4, and u5.

**Figure 2. Deduction Involving Deduce, Data-Base Search, and Compute Predicates**

**Figure 3. Deduction Using Middle-Term Chaining**

**Figure 4. Proof Plan for Query in Figure 3**

The two new subproblems are to be given data-base support. Thus the plan generated uses three premises and contains three subproblems requiring data-base search. The plausibility of the plan is currently calculated by a fuzzy intersection (the minimum of the plausibilities of the premises involved). The plan is then verified, with variable substitutions inserted in the plan and in the search requests (Figure 3). Note the variable constraints in the search requests. The variable \( x_{n_2} \) represents the home port of Taurus; values found for this variable must be the same as those found for \( x_{n_1} \) in the AVAILABLE search request. The proof display is given for the first answer found (the Pisces).

In Figure 4 we note that the unifications \( u_4 \) and \( u_5 \) were computed when these premises were first entered into the system and stored in the PCG. Also stored in the PCG were the implication connections within the premises, e.g., between DAMAGED and RETURNS, between RETURNS and OFFLOAD, and between DESTINATION and TRANSPORT. The unifications \( u_1, u_2, \) and \( u_6 \) were computed after query input (because they involve predicate occurrences in the query) and serve to locate possible middle-term-chain end points. Once these end points were identified, only the PCG was used for middle-term chaining.
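The plan plausibility used in both examples is a fuzzy intersection, i.e., the minimum of the plausibilities of the premises used in the plan. A one-line sketch, with hypothetical premise plausibilities:

```python
def plan_plausibility(premise_plausibilities):
    """Fuzzy intersection: a plan is no more plausible than its weakest premise."""
    return min(premise_plausibilities)

print(plan_plausibility([95, 80, 90]))   # 80
```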
**SUMMARY AND FUTURE PLANS**

We have described a deductive system specifically designed to provide inferential capability for a data management system. From a set of general assertions, the system generates skeletal derivations or proof plans in response to given input queries. These plans are then used to trigger data-base search requests for the specific facts needed to instantiate and thus complete the proof plans, turning them into proofs and answers. General information is thus used to guide and direct the proof-planning process and to identify subproblems that may be resolved by data-base search or by computation. (Alternatively, subproblems may be left open in the display of incomplete proof plans to the user, thus identifying information which cannot be found within the system but which the user may be able to supply from without.)

We are currently expanding the prototype along several different dimensions in line with our goal of eventually incorporating the deductive processor into an operational data management system and language processor environment. A number of improvements in man-machine interaction and user displays are being made in order to allow users to have more direct and flexible control of the proof-plan-generation and data-base-search processes. Additional semantic constraints on the generation of plans will be introduced through the use of a semantic net to further restrict the range of variables, as well as through extensions to the existing semantic-advice condition-recommendation formalism. Work in these two critical areas of improved user and semantic control of deductive processes is being supplemented by additional investigations into the encoding and integration of incomplete and plausible knowledge.

**ACKNOWLEDGEMENTS**

The research reported here has been supported by the Advanced Research Projects Agency of the Department of Defense and is monitored by the Office of Naval Research under Contract N00014-76-C-0885.

**REFERENCES**
{"Source-Url": "https://apps.dtic.mil/dtic/tr/fulltext/u2/a072091.pdf", "len_cl100k_base": 4394, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 25459, "total-output-tokens": 5744, "length": "2e12", "weborganizer": {"__label__adult": 0.0003592967987060547, "__label__art_design": 0.0004396438598632813, "__label__crime_law": 0.0006117820739746094, "__label__education_jobs": 0.00435638427734375, "__label__entertainment": 0.00013577938079833984, "__label__fashion_beauty": 0.00023448467254638672, "__label__finance_business": 0.0004096031188964844, "__label__food_dining": 0.0005078315734863281, "__label__games": 0.0007467269897460938, "__label__hardware": 0.00234222412109375, "__label__health": 0.0009388923645019532, "__label__history": 0.0003764629364013672, "__label__home_hobbies": 0.00016105175018310547, "__label__industrial": 0.00080108642578125, "__label__literature": 0.0009965896606445312, "__label__politics": 0.0003159046173095703, "__label__religion": 0.0005545616149902344, "__label__science_tech": 0.416259765625, "__label__social_life": 0.0001785755157470703, "__label__software": 0.033966064453125, "__label__software_dev": 0.5341796875, "__label__sports_fitness": 0.00022792816162109375, "__label__transportation": 0.0006952285766601562, "__label__travel": 0.0001742839813232422}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25539, 0.02389]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25539, 0.56534]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25539, 0.92025]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 0, null], [0, 4915, false], [4915, 10719, null], [10719, 13731, null], [13731, 19258, null], [19258, 19441, null], [19441, 24512, null], [24512, 25539, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 0, null], [0, 4915, true], [4915, 10719, null], [10719, 13731, null], [13731, 19258, null], [19258, 19441, null], [19441, 24512, null], [24512, 25539, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25539, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25539, null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], [0, 4915, 3], [4915, 10719, 4], [10719, 13731, 5], [13731, 19258, 6], [19258, 19441, 7], [19441, 24512, 8], [24512, 25539, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25539, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
867e2111f60c4ddeb1092a2e4a30e13732ff2adc
Big SaaS: The Next Step Beyond Big Data

Hong Zhu, Ian Bayley, M. Younas, David Lightfoot, Basel Yousef and Dongmei Liu
Applied Formal Methods Research Group
Department of Computing and Communication Technologies
Oxford Brookes University, Oxford OX33 1HX, UK
E-mail: hzhu@brookes.ac.uk

Abstract

Software-as-a-Service (SaaS) is a model of cloud computing in which software functions are delivered to the users as services. The past few years have witnessed its global flourishing. In the foreseeable future, SaaS applications will integrate with the Internet of Things, Mobile Computing, Big Data, Wireless Sensor Networks, and many other computing and communication technologies to deliver customizable intelligent services to a vast population. This will give rise to an era of what we call Big SaaS: systems of unprecedented complexity and scale. They will have huge numbers of tenants and users interrelated in complex ways. The code will be complex too and will require Big Data, but it will provide great value to the customer. With these benefits come great societal risks, however, and there are other drawbacks and challenges. For example, it is difficult to ensure the quality of data and metadata obtained from crowdsourcing and to maintain the integrity of the conceptual model. Big SaaS applications will also need to evolve continuously. This paper discusses how to address these challenges at all stages of the software lifecycle.

1 Introduction

Software-as-a-Service (SaaS) is a cloud computing model in which computer applications are delivered to the users as services [1, 2]. It contrasts with the hitherto more conventional practice of selling applications as products to be owned by the customer, and it has led to a revolution in what functions can be offered. Table 1 lists just some of the many successful SaaS applications that have arisen over the past few years. There is, however, less research on SaaS than on other related areas such as Big Data, the Internet of Things (or Cyber-Physical Systems), and Wireless Sensor Networks. For this reason, it is timely to assess the state of the art in both research and applications. This paper does so, identifies future directions, recognizes the main challenges, outlines our assumptions and approach, and recounts recent progress.

The paper is organized as follows. Section 2 defines the notion of Big SaaS applications. Section 3 identifies the major challenges in their development. Section 4 discusses approaches to solving these problems and reports our preliminary work. Section 5 concludes the paper with a summary.
Table 1: Examples of SaaS Applications

<table>
<thead>
<tr><th>SaaS</th><th>Application Area</th></tr>
</thead>
<tbody>
<tr><td>Booking.com</td><td>Hotel booking</td></tr>
<tr><td>EasyChair</td><td>Conference management</td></tr>
<tr><td>Ebay</td><td>Online shopping</td></tr>
<tr><td>Facebook</td><td>Web portal and social networking media</td></tr>
<tr><td>Gmail</td><td>Message communication</td></tr>
<tr><td>Just Eat</td><td>Online ordering for takeaway restaurants</td></tr>
<tr><td>Lastminute.com</td><td>Travel agency</td></tr>
<tr><td>LinkedIn</td><td>Social networking media for professionals</td></tr>
<tr><td>Moodle</td><td>Online learning platform</td></tr>
<tr><td>ResearchGate</td><td>Social networking media for researchers</td></tr>
<tr><td>Rightmove</td><td>Estate agency</td></tr>
<tr><td>SalesForce.com</td><td>Customer relationship management</td></tr>
<tr><td>WhatsApp</td><td>Instant message communication</td></tr>
</tbody>
</table>

2 The Growth of SaaS

Those SaaS applications well known to the public today are mostly small, but our vision of the near future is that an era of Big SaaS is emerging. Here, we define Big SaaS applications as those SaaS applications with the following characteristics.

(1) Big Tenancy. A Big SaaS application usually serves a large number of tenants and users that may well be interrelated in complex ways. Examples include:

- **Just Eat**: 40,800 takeaway restaurants (in 13 countries) and 6 million users with active accounts.
- **Booking.com**: 638,960 properties (in 211 countries), with over 800,000 room-nights reserved per day.
- **Rightmove** (the UK's largest online property advertisement portal): 19,304 agent and new-homes advertisers, covering more than 1 million properties.

Examples of complex interrelationships include hierarchies (e.g. a tenant may have sub-tenants) and users being associated with many tenants or with no particular tenant.

(2) Big Data. Large volumes of data will be processed when the number of tenants and users is large. For example, in January 2014, the Rightmove.com website had a record 100 million visits viewing 1.5 billion

3.1 Societal Risks

For a SaaS application, the risk $Risk_{SaaS}$ of failure is

$$Risk_{SaaS} = R \times T \times C,$$

where $T$ is the number of tenants residing in the system, $R$ is the failure rate of the system, and $C$ is the average consequence of a failure per tenant. For a software application system that is owned by the customers, the total risk $Risk_{WS}$ of failure globally is

$$Risk_{WS} = R' \times C' \times S,$$

where $S$ is the number of copies of the system running at the same time globally, $R'$ is the failure rate of the system, and $C'$ is the average consequence of a failure to the customer who runs a copy of the software. Assume that each tenant runs one copy of the system (i.e. $T=S$) and that the SaaS application is of the same level of reliability as the customer-owned software (i.e. $R=R'$). Then $Risk_{SaaS} = Risk_{WS}$ if $C=C'$. From this one might conclude that the two modes of software delivery carry equal risks of failure. However, the calculation makes sense only for so-called individual risks. There is also a concept of societal risk, borrowed from safety engineering, under which the risks from SaaS are considered greater. In general, individual risk is the risk for one person of loss of property or life due to system failures.
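A minimal sketch of the individual-risk comparison above, with entirely hypothetical numbers; it simply evaluates the two formulas and shows that they coincide when T = S, R = R' and C = C'.

```python
def risk_saas(failure_rate, tenants, consequence_per_tenant):
    """Risk_SaaS = R * T * C."""
    return failure_rate * tenants * consequence_per_tenant

def risk_owned(failure_rate, copies, consequence_per_customer):
    """Risk_WS = R' * C' * S."""
    return failure_rate * consequence_per_customer * copies

# Hypothetical values: same reliability, same consequence, one copy per tenant.
print(risk_saas(failure_rate=1e-4, tenants=100_000, consequence_per_tenant=500.0))
print(risk_owned(failure_rate=1e-4, copies=100_000, consequence_per_customer=500.0))
```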
In safety engineering, whether a risk is tolerable can be judged relatively easily for individuals, as people knowingly take and accept risks all the time. Travelling in a car brings the risk of an accident, but a train crash that kills many people causes an immense public reaction even though many more die per year on roads than on trains. These situations are addressed by estimating societal risk, expressed as the relationship between the probability of a catastrophic incident and the number of users affected. It can be represented as an F-N curve that plots the expected frequency ($F$) of failures against the number ($N$ or more) of users affected by each failure. Figure 1 illustrates the difference between the societal risks for SaaS and those for customer-owned software of similar reliability. These risks are exacerbated if failure recovery is slow, as with the two recent outages of Salesforce's CRM system. They each took more than 10 hours to recover, during which the users of more than 100,000 tenants were deprived of the service. Therefore, it is crucial for SaaS application developers to reduce the societal risk significantly, to an acceptable level.

3.2 Trustable Crowdsourcing

When there are a large number of tenants, it is highly desirable that a SaaS application supports customization so that the specific needs of the customers and their users can be accommodated. However, for Big SaaS, such customization cannot be done by the service provider manually. A solution adopted by almost all existing successful SaaS applications is crowdsourcing. This means that the customers perform customization themselves. For example, Rightmove provides a facility for estate agents to upload information on the properties for sale or to let. Likewise, Booking.com enables property owners to set room prices and room availability. Similarly, eBay enables sellers to enter the information about the goods for sale and the method of payment. Such facilities are fairly simple, however, when compared to Salesforce's facility that lets customers build their own applications. An unsolved problem is how to ensure the quality of data and of system configurations obtained by crowdsourcing. This is the second grand challenge for Big SaaS.

3.3 Continuous Evolution

Continuous evolution has been applied in software development practice for web-based systems as a part of agile methodologies. In this approach, a software system is revised, tested and updated so frequently that the notion of versions and releases no longer makes sense. Moreover, continuous evolution also requires that such updates go live without any interruption to service. This is of paramount importance for Big SaaS, but the unprecedented scale and complexity of Big SaaS presents a challenge. Imagine the situation where hundreds of thousands of tenants each have their own customized version of the system running simultaneously on a number of big clusters distributed around the globe. At the same time, numerous new tenants are performing customization and configuration to join the system. As both of these are happening, developers are committing multiple changes to the system in parallel to fix bugs, to introduce new functions, and to refactor the system structure. These changes will inevitably interact with each other, while each change may have a devastating impact for a large number of users. After a few days of such frequent modifications, the relations between the components could soon become a spaghetti-like mess.
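As an illustration of the F-N representation, the sketch below turns a list of hypothetical failure records (annual frequency, number of users affected) into the cumulative frequency of incidents affecting N or more users, which is the quantity plotted on an F-N curve. The records are invented for illustration only.

```python
def fn_curve(incidents):
    """Return (N, F) pairs: F = expected frequency of incidents affecting >= N users.

    incidents -- list of (annual_frequency, users_affected) tuples.
    """
    thresholds = sorted({users for _, users in incidents})
    curve = []
    for n in thresholds:
        f = sum(freq for freq, users in incidents if users >= n)
        curve.append((n, f))
    return curve

# Hypothetical failure records for a multi-tenant service.
incidents = [(0.5, 100), (0.1, 10_000), (0.02, 1_000_000)]
for n, f in fn_curve(incidents):
    print(f"N >= {n:>9,}: F = {f} per year")
```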
No current software change impact analysis tool could be used here, and yet updates will have to go live without interruption to the service. The pressure to complete the testing, verification and validation of each change within a short time and to a high level of adequacy will be several orders of magnitude greater than ever before. To enable Big SaaS to be evolved continuously, we must overcome the barriers in software engineering, especially in the methods and tools for change impact analysis, for testing, verification and validation, and for on-line refactoring of software structure.

3.4 Conceptual Integrity

Conceptual integrity is one of the key features of a good software design. It means that there is a simple conceptual model of the system in which its structure, functionality and dynamic behavior can be understood. The design of a good conceptual model for a Big SaaS application and the maintenance of its integrity both play a crucial role in development and maintenance. They also play a role in the customization and continuous evolution of the system. Currently, such a conceptual model is rarely formally defined, and often not even documented explicitly, but conveyed instead informally through demonstrations, case studies, online training materials, marketing articles, etc. The advantage of such an approach is that it is user-oriented, but it leaves much scope for ambiguity, incompleteness and misunderstanding. On the other hand, most online documentation is too developer-oriented, with technical details in place of information about the conceptual model. Ontologies and semantic web services can provide user-understandable descriptions of services at the conceptual model level. However, a weakness of ontology-based service descriptions is that they are fragmented. Moreover, such documentation and descriptions of services are not verifiable and testable. A link seems to be missing from the conceptual model to the low-level system specification.

4 Research Directions

In this section, we seek potential solutions to the engineering problems raised in the previous section. We focus on four phases of the software development lifecycle: functional specification, architectural design, implementation and testing. For each of these, we briefly review the existing work, outline our approach, report the preliminary progress we have made so far, and point out directions for future research.

4.1 Design: Fault Tolerance Architectures

The societal risk must be addressed by appropriate architectural design of SaaS applications. Chong and Carraro asserted that "A well-designed SaaS application is scalable, multi-tenant-efficient, and configurable" [1]. These are the three key differentiators that separate it from a poorly-designed SaaS application. Based on architectural features, they proposed the 4-level maturity model of SaaS applications shown in Figure 2. Level 1 is ad hoc, the least mature, and essentially the same as the traditional application service provider (ASP) model of software delivery. Each subsequent level adds one of the three key features (configurability, multi-tenant efficiency, and scalability, in that order). It is no surprise that almost all successful SaaS applications nowadays employ an architecture model of level 3 or 4, and it seems inevitable that level 4 will be needed for Big SaaS, because, as Chong and Carraro argued, "[such] a SaaS system is scalable to an arbitrarily large number of customers ... without requiring additional re-architecting of the application, and changes
or fixes can be rolled out to thousands of tenants as easily as a single tenant" [1]. However, this architecture does not address the societal risks caused by system-level failures. Addressing this problem, in [3] we suggested integrating the architecture with a fault tolerance facility that reduces the consequences of system-scale failures through a reduced probability of failure and quicker recovery from failure.

Fault tolerance is one of the most challenging issues of distributed and high performance computing [4]. The extensive research of the past few years, for cloud computing in particular, can be classified according to the fault to be tolerated. Resource-level fault tolerance aims to achieve high reliability in individual computing resources, such as processor, memory, I/O and network bandwidth, which are lent to users as services [5, 6]. Infrastructure-level fault tolerance techniques include those for virtual machines (VMs) or virtual clusters [7], which provide the required availability and reliability via tolerance of underlying hardware failures [8, 9]. At the platform level, fault tolerance facilities have been provided in various parallel programming models, such as MapReduce, in which a failed map or reduce task is restarted and/or relocated to a new compute node. The performance of the two most commonly used checkpoint/restart techniques for distributed systems, i.e. the Distributed Multi-Threaded Checkpointing and Berkeley Lab Checkpoint/Restart libraries, has been evaluated in the Amazon Elastic Compute Cloud (EC2) environment [10]. However, there is no work at the application level for SaaS. Moreover, almost all research on fault tolerance in cloud computing assumes that a set of virtual machines is deployed on a number of physical servers and that a virtual machine is created for one tenant or user. Thus, these techniques are only applicable to SaaS applications in the multi-instance architecture of Chong and Carraro's level 2, not to those in the multi-tenancy architectures of levels 3 and 4.

In summary, while some of the above techniques are useful for reducing the failure rate of lower-level entities, they have not addressed satisfactorily the problem of the high societal risks of Big SaaS. The current practice still relies on traditional periodic backup operations. For example, Salesforce backs up all data to tape storage on a nightly basis. This traditional checkpoint-and-rollback fault tolerance technique is unsatisfactory for Big SaaS applications. In fact, Salesforce's tenants also use third-party facilities for backing up their own data. Addressing this problem, in [3] we proposed a new approach called tenant-level checkpointing and implemented a prototype called Tench. In this approach, instead of saving the whole system's state, each checkpointing operation saves only the part of the system state related to a specific tenant. This is important because saving the state of the whole system with one checkpointing operation would cause I/O contention and long delays, as all users of all tenants lose access to the system.

Figure 2: Four-Level SaaS Maturity Model [1]

Figure 3: Integration of a fault tolerance facility with the SaaS application architecture

Figure 3 shows the architecture of such a fault tolerance facility and how it is integrated with the service-oriented SaaS application architecture [1].
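A minimal sketch of the tenant-level checkpointing idea: instead of serializing the whole system state, a checkpoint operation captures only the slice of state belonging to one tenant, so other tenants are not blocked, and recovery can proceed tenant by tenant in priority order. This is a hypothetical simplification of the approach, not the Tench prototype.

```python
import copy
import time

class TenantCheckpointer:
    def __init__(self):
        self.state = {}        # tenant_id -> that tenant's application state
        self.checkpoints = {}  # tenant_id -> (timestamp, saved state)

    def checkpoint(self, tenant_id):
        """Save only one tenant's state; other tenants remain unaffected."""
        self.checkpoints[tenant_id] = (time.time(),
                                       copy.deepcopy(self.state.get(tenant_id, {})))

    def recover(self, tenant_ids_by_priority):
        """After a system-scale failure, roll back the most important tenants first."""
        for tenant_id in tenant_ids_by_priority:
            _, saved = self.checkpoints.get(tenant_id, (None, {}))
            self.state[tenant_id] = copy.deepcopy(saved)

cp = TenantCheckpointer()
cp.state["tenant-A"] = {"orders": [1, 2, 3]}
cp.checkpoint("tenant-A")
cp.state["tenant-A"]["orders"].append(4)   # later activity
cp.recover(["tenant-A"])                   # rolls tenant-A back to the checkpoint
print(cp.state["tenant-A"])                # {'orders': [1, 2, 3]}
```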
In comparison with existing bulk checkpointing techniques, our preliminary theoretical and empirical studies demonstrated that tenant-level checkpointing increases performance by a factor of $O(N)$, where $N$ is the number of tenants [11]. It has the following advantages. First, while a SaaS application runs continuously, tenant-level checkpointing can target a specific tenant when the users of that tenant are less active. Thus, a checkpoint can be created without causing too much disruption to the normal operation of the system, as requests for services from other tenants are not blocked. Second, tenants with different quality-of-service requirements (e.g., different reliability levels) can be treated differently by having different checkpoint frequencies. Third, tenant-level checkpointing can be implemented to block only the users of the tenant being checkpointed without affecting any other users. The experiments reported in [3] have shown that the latency of creating a checkpoint for a tenant depends only on the size of the tenant's state; it is independent of the number of tenants. Moreover, partial checkpointing enables different types of data to be treated differently, with the more important data being checkpointed more frequently. An example of higher-priority data would be metadata, as it plays an important role in SaaS applications. Finally, but most importantly, recovery from a system-scale failure can proceed tenant by tenant so that the most important tenants are rolled back first. This significantly reduces the total outage time and hence the societal risk of system-scale failures.

It is worth noting that VM checkpointing, replication and live migration facilities [12] not only provide fault-tolerant solutions to reliability problems, but also balance service workload [13], reduce the energy consumption of data centers [14], and can even reduce the cost of subscription per user [15]. Similar benefits can be obtained from a tenant-level checkpointing facility like Tench for SaaS applications that do not run on virtual machines. Therefore, tenant-level checkpointing could be a viable fault-tolerance solution to the societal risk problem of Big SaaS.
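One way to treat tenants with different reliability requirements differently is simply to assign them different checkpoint intervals. The mapping below is a tiny, hypothetical scheduling sketch, not a policy taken from the Tench prototype.

```python
def checkpoint_interval(availability_target, base_interval_minutes=60.0):
    """More demanding availability targets get proportionally shorter intervals.

    availability_target -- e.g. 0.99, 0.999, 0.9999 (hypothetical mapping).
    """
    unavailability = 1.0 - availability_target
    return base_interval_minutes * unavailability / 0.01   # 0.99 -> 60 minutes

for target in (0.99, 0.999, 0.9999):
    print(target, "->", checkpoint_interval(target), "minutes")
```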
The results of these customizations and compositions must be highly reliable, given our requirement to minimize societal risks. To achieve this, service semantics need accurate descriptions, and the descriptions should also be:
- **Comprehensible**: easy for users to understand even if they have no IT professional knowledge or skills.
- **Abstract**: hiding design and implementation details from the users, both for comprehensibility and to protect intellectual property.
- **Machine-Searchable**: supporting the discovery, composition and configuration of services.
- **Testable**: enabling service providers and users alike to verify a service's correctness with respect to its semantic description.

However, no existing technique satisfies all of these requirements. Existing techniques tend to fall into two categories: the majority are based on ontologies and use a vocabulary to annotate services, while the others are based on the mathematical notations of formal methods. Semantic Web Services are an example of the former approach [19], and OWL-S was the first major ontology definition language for this purpose [20]. It provides a set of constructs for describing the properties and capabilities of Web Services in a machine-readable format. Formal methods were applied to provide a precise mathematical meaning in a formal ontology. An alternative approach is the Web Service Modelling Ontology (WSMO) [21], which is a conceptual model that uses the Web Services Modelling Language (WSML) [22]. As well as Big Web Services, work has also been carried out on how to specify the semantics of RESTful web services, such as MicroWSMO/hRESTS [23], WADL [24] and SA-REST [25].

The above works all take the same approach to specifying the semantics of services: a vocabulary is defined by an ontology of the application domain to give the meanings of the input and output parameters, as well as the functions of the services. Such descriptions are easy for human developers to understand and efficient for computers to process. However, they cannot provide a verifiable and testable definition of a service's function, because any ontology is limited to stereotypes formed from the relationships between concepts and their instances.

Formal methods, as an alternative to the ontological approach, have been developed over the past 40 years to define the semantics of software systems in mathematical notations. One such formal method, algebraic specification, was first proposed in the 1970s as an implementation-independent specification technique for defining the semantics of abstract data types. Over the years, it has been advanced to specify concurrent systems, state-based systems and software components, all based on the solid foundations of the mathematical theories of behavioural algebras [26] and co-algebras [27]. We argue that it is particularly suitable for the development of Big SaaS. Algebraic specifications are at a very high level of abstraction and are independent of any implementation details. One attractive feature is that they can be used directly in automated software testing; see Section 4.4. This feature is particularly important for SaaS engineering because, when services are customized and composed together by the customer, testing must be performed automatically without the developer’s support. In [28], we investigated the application of the algebraic specification method to service-oriented software by extending and combining the behavioural algebra and co-algebra techniques.
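To give a flavour of the notation (a generic, textbook-style sketch of our own, not the exact CASOCC or SOFIA syntax discussed below), an abstract data type such as a stack is specified by a signature of operations together with axioms relating them, for example $pop(push(s, x)) = s$, $top(push(s, x)) = x$, $isEmpty(new()) = true$ and $isEmpty(push(s, x)) = false$. The axioms say nothing about how the stack is implemented; any implementation that satisfies the equations is acceptable, and, as discussed in Section 4.4, the equations can be checked mechanically against an implementation by generating values for $s$ and $x$.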
The algebraic specification language CASOCC, which was originally designed for traditional software entities such as abstract data types, classes and components, was extended to CASOCC-WS for the formal specification of Big Web Services. A tool was developed to automatically generate the signatures of algebraic specifications from WSDL descriptions of Big Web Services. CASOCC-WS was also applied to RESTful web services [29]. A tool was developed to check the syntax-level consistency of formal specifications, and a case study was conducted applying CASOCC-WS to a real industrial system, GoGrid. Based on these works, a new algebraic formal specification language called SOFIA [43] was proposed to improve the usability of algebraic specification languages when applied to services.

However, algebraic specifications and other formal methods do not directly support efficient searching of services. To bridge the gap between algebraic specifications and ontological descriptions, we proposed in [30] to derive the latter from the former, thereby augmenting algebraic specification with the machine-readable and human-understandable attributes of ontology. A software tool called TrS2O (Translator from Specification to Ontology) has been designed and implemented [30]. It translates formal specifications in SOFIA into ontological descriptions of services in OWL. Figure 6 shows the overall structure of the TrS2O tool.

![Figure 6. The Overall Structure of the TrS2O Tool](image-url)

![Figure 5. Ontology generated from the SOFIA specification](image-url)

A case study on the RESTful web service interface of an actual industrial system, GoGrid, shows that the approach is practically useful.

4.2.2 Formal Specification of Conceptual Models

One advantage of the algebraic method is that the infrastructure, the platform, the application domain knowledge and the services of a SaaS application can all be formally specified in the same language and decomposed into a number of reusable specification packages. For example, in the case study of GoGrid’s RESTful API, we first specified the RESTful web service in a package, then used that to specify the basic constructs of computing infrastructure, and then used both packages to specify the services that GoGrid provides. Figure 5 gives the ontology generated from the SOFIA specification of RESTful web services. The specification of domain concepts can therefore serve as a formal specification of the conceptual model of the system. This specification supports automated testing and its internal consistency can be verified, which enables it to support the maintenance of conceptual integrity, too.

4.3 Implementation: New Paradigm of Programming

Currently, most web-based applications, including those for SaaS, are implemented in many different programming and scripting languages and even several different paradigms. This complicates development and makes it difficult to develop supporting tools. A desirable alternative is to have a single new paradigm that is particularly suitable for SaaS applications. The agent-oriented paradigm has long been considered suitable for dynamic environments such as the Internet [31], and many research efforts have been reported in the literature [32]. However, the IT industry has been slow to adopt the approach. There are a number of possible reasons for this. First, the notion of agents seems to be too strongly linked to distributed artificial intelligence for software engineers to accept it.
Secondly, there are no efficient implementations of agent-oriented programming languages. We now report our work in progress that addresses these problems.

4.3.1 Agent-Oriented Programming Language

To address the first problem, we proposed a simplified model of agent [33, 34]. Agents are service providers that consist of:
- **actions** that the agent can perform, representing the services it provides or the requests it can submit,
- **variables**, which represent the internal state of the agent,
- **behaviour rules**, forming the body of the service, that determine how requests are processed,
- **collaborating agents**, from which service requests are received. This set can be updated at runtime.

For example, the following is the Hello World example of the language CAOPLE, which we are developing.

```plaintext
caste Peer;
  action say(word: string);
  init say("Hello world!")
end Peer
```

Caste is the classifier of agents, so agents are instances of castes. In the above example, the caste Peer is defined. It can take the action say("Hello world!") and it does this when the agent is created. An agent is therefore an active, autonomous computational entity. Castes can be extended to sub-castes just as classes in object-orientation have subclasses. For example, the following is a sub-caste of Peer.

```plaintext
caste GreetingPeer inherits Peer;
  observes all in Peer;
  body
    when exists A in Peer: say("Hello world!")
      say("Welcome to the world!")
    end
end GreetingPeer
```

An agent of GreetingPeer observes the actions taken by all agents of Peer, as described in the observes clause, which defines its collaborative agents. When there is an agent in the caste Peer that takes the action say("Hello world!"), it reacts with the action say("Welcome to the world!"). In general, an agent communicates with other agents by taking observable actions to send messages, and it receives messages by observing the observable actions of its collaborative agents. An action can be targeted at one agent or at a set of specific agents. For example, the say statement can be changed to one of the following:

```plaintext
say("Welcome to the world!") to All in Peer;
say("Welcome to the world!") to A;
```

If the target receiver is omitted, the default is public. In contrast to the notion of class in object-oriented programming, an agent can be a member of multiple castes at once and its membership can be changed dynamically at runtime by executing one of the caste membership statements:
- **Join** casteID: become a member of casteID;
- **Quit** casteID: quit the membership of casteID;
- **Suspend** casteID: suspend the execution of the body of casteID;
- **Resume** casteID: resume the execution of the body of casteID;
- **MoveTo** casteID: quit the current caste and become a member of the named caste.

Using castes and the inheritance relationships between them, one can encapsulate different behaviours in different contexts together with a set of related state variables, actions and collaborative agents. The flexible casteship makes agents adaptable and easy to compose and configure. For example, the following shows how an agent can adapt its behaviour to the context by changing its caste membership.
```plaintext
caste CheerfulPeer inherits Peer;
  body
    when exists A in Peer: say("Hello world!")
      do say("Hi, good morning.");
    end;
end CheerfulPeer

caste SmartPeer inherits Peer;
  observes DateTime: Clock;
  body
    when DateTime: Tick()
      do if DateTime.Day = Monday
           then Join FriendlyPeer
           else Join CheerfulPeer
         end;
    end;
end SmartPeer
```

The above illustrates just a few key features of the agent-oriented programming language CAOPLE. Readers are referred to [34] for more details. In general, we believe that a new programming paradigm such as agent-orientation will enable the implementation of SaaS applications at a high level of abstraction. Thus, it is worth pursuing.

4.3.2 Implementation of CAOPLE Language

Our approach to the implementation of the CAOPLE programming language is to translate CAOPLE source code into machine code for a virtual machine [35]. Our virtual machine, called CAVM, differs from other language-specific virtual machines such as the JVM in that it consists of two parts: a local execution engine (LEE) and a communication engine (CE). The LEE executes the program’s computational code, while the CE realises communication between agents distributed over a computer network.

![Figure 7. Compiling, deploying and executing CAOPLE code](image)

As illustrated in Figure 7, the castes in a CAOPLE program are compiled so that one object code module is generated from each caste’s source code. Each module is deployed to a computer node that runs a communication engine. An agent of a caste can be created on any computer node that runs an execution engine; that node loads the object code module of the caste and executes the code. For cross-machine communication between agents, messages are sent to the communication engine where the caste resides and are further distributed to the execution engines where the target agents execute. They may pass through one or more other communication engines. The reader is referred to [35] for more details of the design, implementation and experimental results of CAVM.

4.4 Testing: Specification-Based Test Automation

Automated testing can play at least two roles in the development of Big SaaS: it supports continuous evolution and it ensures the quality of crowdsourcing in service customization. There are a number of approaches to automated testing for software in general and for service-oriented systems in particular. In [36], we proposed a collaborative approach that realizes automated testing of composite web services through the composition of test services, as illustrated in Figure 8. In this approach, each web service is accompanied by a testing service, and the framework of automated testing contains a number of general test services for test case generation, test adequacy measurement, test result correctness checking, etc. A test request for a composition of services is submitted to a test broker, which decomposes the testing task into subtasks if needed and, if so, searches for and invokes appropriate test services for each sub-task. The searching and invocation of test services (and the initial registration) employ ontologies of both software testing and the application domain.

![Figure 8. Collaborative Automated Testing of Web Services](image)

This approach was devised for web services and should be applicable to Big SaaS, but we believe a formal specification language like SOFIA would make test automation efficient without the need to develop the various test services.
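As a rough illustration of how an algebraic specification can drive test automation (a minimal Python sketch of our own, not the ASSAT tool discussed next; it reuses the stack axioms sketched in Section 4.2, and in a SaaS setting the class under test would wrap calls to a remote service operation), the axioms serve as oracles for automatically generated test cases:

```python
import random

class Stack:
    """A trivial implementation under test; in a service setting this would be
    a client-side wrapper around a remote operation."""
    def __init__(self, items=()):
        self.items = list(items)
    def push(self, x):
        return Stack(self.items + [x])
    def pop(self):
        return Stack(self.items[:-1])
    def top(self):
        return self.items[-1]
    def is_empty(self):
        return not self.items

def random_stack(max_len=5):
    """Test case generation: build a random observable state."""
    return Stack(random.choices(range(100), k=random.randint(0, max_len)))

def check_axioms(trials=100):
    """Oracle: every generated case must satisfy the specification axioms."""
    for _ in range(trials):
        s, x = random_stack(), random.randint(0, 99)
        assert s.push(x).pop().items == s.items   # pop(push(s, x)) = s
        assert s.push(x).top() == x               # top(push(s, x)) = x
        assert not s.push(x).is_empty()           # isEmpty(push(s, x)) = false
    return True

print(check_axioms())
```

The specification supplies both the generation constraints and the pass/fail criterion, so no hand-written expected outputs are needed; this is what makes fully automated testing of customer-built compositions plausible.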
![Figure 9. Architecture of the ASSAT Testing Tool](image)

Techniques of software test automation based on algebraic specifications have been investigated since the 1980s for procedural languages [37, 38], OO software [39, 40], component-based systems [41], etc. More recently, we have been developing an automated testing tool called ASSAT [42] for testing web services based on formal specifications written in SOFIA [43]. Figure 9 shows the architecture of the tool and Figure 10 shows its GUI. Such testing tools can achieve complete automation of the whole testing process, including test case generation, test invocation and test result correctness checking. Although SOFIA and ASSAT were originally developed for web services, the principles underlying the language and the implementation of the tool are applicable to Big SaaS. Further research is worthwhile to adapt them to Big SaaS and to evaluate their effectiveness.

It is worth noting that there are two approaches to the quality assurance of customization. The first is the brute-force approach, in which all possible compositions of services and all possible configurations of the SaaS application are tested up to a certain level of combination adequacy, say coverage of all 2-way or 3-way combinations, before the system is released to the users. This approach is viable only when the number of possible service compositions and configurations is small. Unfortunately, even for a SaaS application of modest scale, there could be a huge number of test cases even to cover the 2-way or 3-way combinations of services and configurations. The second is the automated online testing approach. During the development process, testing focuses on the individual services to ensure that each service is correct with respect to its specification. The most popular and important combinations and configurations of the services are also tested. When a user builds his or her own customized version of the system, the customization, which is a composition and configuration of the services, is then tested automatically against the specification. In this approach, automated testing plays a crucial role in supporting the customization of services. It requires testing to be performed with little human involvement because crowdsourcing-based customization is conducted by the users.

5 Conclusion

In this paper we have argued that an era of Big SaaS is emerging. Big SaaS differs from existing SaaS applications in the number of tenants/users and the complexity of their relationships, as well as in the size and complexity of the program code. Such applications will possess and utilize Big Data to provide great added value to their services. Developing Big SaaS applications will impose grave challenges on software and service engineering: reducing the societal risks to an acceptable level, enabling trustworthy crowdsourcing-based customization, maintaining the conceptual integrity of the system, and supporting continuous evolution. We have argued that these challenges must be met in all stages of the software development lifecycle. In particular, in the specification phase, an algebraic specification language can support the formal development of service-oriented systems to improve reliability. It also helps to maintain conceptual integrity by providing a formal definition of the conceptual model, and it supports crowdsourcing-based customization by linking formal specifications to the ontological descriptions of services. Moreover, testing can be automated based on algebraic specifications, which also helps with continuous evolution.
In the architectural design phase, a tenant-level checkpointing facility could play a significant role in reducing societal risks. In the implementation phase, a new programming paradigm is desirable, and we are exploring the potential of an agent-oriented programming language. In the testing phase, automation is essential, and formal specification makes this possible.

References
{"Source-Url": "http://cms.brookes.ac.uk/staff/HongZhu/Publications/CLOUD2015VP.pdf", "len_cl100k_base": 7494, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 41309, "total-output-tokens": 10168, "length": "2e12", "weborganizer": {"__label__adult": 0.0002512931823730469, "__label__art_design": 0.00028586387634277344, "__label__crime_law": 0.00026035308837890625, "__label__education_jobs": 0.00058746337890625, "__label__entertainment": 5.233287811279297e-05, "__label__fashion_beauty": 0.00011473894119262697, "__label__finance_business": 0.00034046173095703125, "__label__food_dining": 0.00026488304138183594, "__label__games": 0.00034689903259277344, "__label__hardware": 0.0005216598510742188, "__label__health": 0.00033783912658691406, "__label__history": 0.00018799304962158203, "__label__home_hobbies": 5.453824996948242e-05, "__label__industrial": 0.00023114681243896484, "__label__literature": 0.00022923946380615232, "__label__politics": 0.00021541118621826172, "__label__religion": 0.00029087066650390625, "__label__science_tech": 0.01409149169921875, "__label__social_life": 7.009506225585938e-05, "__label__software": 0.00820159912109375, "__label__software_dev": 0.97216796875, "__label__sports_fitness": 0.00017321109771728516, "__label__transportation": 0.00033855438232421875, "__label__travel": 0.00014889240264892578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 43733, 0.02478]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 43733, 0.39597]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 43733, 0.91124]], "google_gemma-3-12b-it_contains_pii": [[0, 4694, false], [4694, 7282, null], [7282, 13071, null], [13071, 17069, null], [17069, 21972, null], [21972, 24984, null], [24984, 30042, null], [30042, 33763, null], [33763, 37318, null], [37318, 43733, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4694, true], [4694, 7282, null], [7282, 13071, null], [13071, 17069, null], [17069, 21972, null], [21972, 24984, null], [24984, 30042, null], [30042, 33763, null], [33763, 37318, null], [37318, 43733, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 43733, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 43733, null]], "pdf_page_numbers": [[0, 4694, 1], [4694, 7282, 2], [7282, 13071, 3], [13071, 17069, 4], [17069, 21972, 5], [21972, 24984, 6], [24984, 30042, 7], [30042, 33763, 8], [33763, 37318, 9], [37318, 43733, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 43733, 0.06276]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
f0cf911d13462a11e80bf75f119074f90f24a389
Model-Based Software Design Neil Iscoe, Zheng-Yang Liu, Guohui Feng, Britt Yenne, Larry Van Sickle, Michael Ballantyne EDS Research, Austin Laboratory 1601 Rio Grande, Ste. 500 Austin, Texas 78701 iscoe@austin.eds.com Abstract Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model. Introduction Although empirical field studies (Curtis, et al., 1988) have shown that application domain knowledge is critical to the success of large projects, this knowledge is rarely stored in a form which facilitates its use in creating, maintaining and evolving software systems. Capturing and managing this knowledge is a prerequisite to automating software design. Unfortunately, domain knowledge is implicitly embodied in application code rather than explicitly recorded and maintained in separate documents. Even when documents are maintained separately from the code, the knowledge is stored in voluminous natural language documents in an informal rather than a formal manner. Although problem-specific languages are designed to remedy this situation, domain-specific knowledge is still captured in an ad hoc instead of a systematic manner. Furthermore, these languages are generally not designed in such a way that the results can be generalized or even replicated. We are attempting to capture the domain-specific knowledge about different industry areas as a set of application domain models. Application domain models are representations of relevant aspects of application domains that can be used to achieve specific software engineering operational goals. Operational goals are always implicit in the construction of a domain model and are essential to understanding the form and content of that model. Unlike generalized knowledge representation projects such as Cyc (Lenat, 1990) that attempt to provide a basis for modeling encyclopedic knowledge, domain modeling explicitly acknowledges the commonly held view (Amarel, 1968) that representations are designed for particular purposes. These purposes—the operational goals—inherently bias any particular solution and dictate the final form of the model. Many different operational goals and modeling projects are being pursued within the field of domain modeling (Iscoe, et al., 1991). This paper begins with an overview of the domain modeling research at EDS and our corresponding operational goals. We explain our approach to automating software design as a paradigm which facilitates the creation of multiple-specification models from a domain model. Finally, we discuss a set of issues that we have encountered in achieving our goals. Programming-in-the-Large EDS produces large software systems for a variety of industries such as utilities, finance, health insurance, and so on. Associated with each industry area is a rich body of knowledge which is critical to specifying and implementing the proper software system. This knowledge includes legal, financial, technical, and other expertise which is acquired by personnel over a period of years. 
EDS is organized into strategic business units (SBUs) so that the organization's knowledge about a particular industry can be leveraged through reuse. At the EDS Austin research laboratory, we are building a domain modeling system which is designed to achieve the following operational goals: - Requirements & Specifications—Eliciting, verifying, and formalizing software requirements and specifications, - Program Transformation/Generation—Transforming a specification into efficient executable code, - Reverse Engineering—Identifying the semantics of existing code in terms of a partial specification. The realization of these operational goals is consistent with our long-term plan for creating knowledge-based tools to support programming-in-the-large (Barstow, 1988). The domain modeling approach provides ample opportunities for creating an automated software development paradigm. Figure 1 illustrates the context in which we operate. The industry knowledge for each SBU is instantiated into a domain model, which then serves as a source of knowledge for programs (the ovals) to achieve operational goals, such as reverse engineering source code or eliciting system specifications. The figure actually illustrates two different processes. The left side of figure 1 shows the process of domain model instantiation while the right side illustrates the domain model being used to produce a single specification. The System Specification (rectangle) illustrates a specification for a single specific system within an application domain. However, a multitude of system specifications can be created from a domain model. Figure 2 illustrates the two separate modeling tasks required by our approach. Domain experts interact with a system to represent their knowledge in terms of domain modeling constructs. Specification designers then use the system to build specification models which satisfy constraints in the domain model. In order to create a system specification, the application designer selects a set of relevant policies and constraints from the domain model that must be included and enforced in the specification model. The constraints include intra-attribute as well as inter-attribute relationships within and across classes relevant to the task at hand. Because one of our goals is to generate executable code, we require that a particular specification model be consistent. A very large but finite number of specification models can be created which are consistent and correct with respect to a particular domain model. Reverse Engineering We are using reverse engineering to help instantiate both domain and specification models. Figure 1 illustrates how application domain knowledge and programming knowledge are used to extract partial specifications from source code. The box labeled “programming knowledge” currently represents knowledge of COBOL syntax, coding conventions, and program plans and structures (Van Sickle, 1992). This knowledge crosses all of the targeted application domains and is the basis of a separate code browser that operates independently of the operation shown in Figure 1. We are also attempting to mechanically pre-instantiate domain models by using the data gathered from the applications of an EDS entity-relationship-based CASE tool that is used by SBUs for data modeling and code generation. 
By analyzing data models, we have access to tens of thousands of specific entities, relationships, and constraints which have been used to specify programs and are useful for partially instantiating domain models.

Modeling Considerations

Models are inevitably abstractions of reality that capture information to achieve specific goals. A domain model determines the items of interest that exist in the world and sanctions the types of inferences allowed (Liu and Farley, 1990; Davis, 1991). A model is the result of conscious decisions about what to describe and what to ignore. No model is complete or correct in the sense that it is applicable to all tasks. Domain models in our system are structured to represent the type of information that is used within EDS SBUs to achieve our operational goals. Although EDS serves a wide range of industries, we are not attempting to model real-time or other application areas which diverge from standard business transaction processing. A general issue of interest in this research is the extent to which any particular representation/model can be mutated to hold different types of information for different tasks while still effectively achieving the original operational goals. One requirement for our models is that they be consistent. Domain and specification model consistency is maintained by a specialized theorem prover. The theorem prover, STR+VE, is an upgraded version of the prover presented in (Bledsoe, 1980) for proofs of theorems in general inequalities. A TMS is being constructed to interface between the modeling system and the theorem prover.

Dynamic Knowledge Structure

The remainder of this paper presents one aspect of domain model representation and gives a glimpse of the relationship between specification and domain models and the organization of domain models. While most would agree that hierarchical organizational strategies provide a reasonable way to structure knowledge within complex domains, the creation of a hierarchical structure, like any type of representational scheme, imposes a particular view of the world. Unfortunately, there is no particular view that is optimal for every application. Although the programs within a particular application share the same legal, physical, and economic constraints, the construction of any particular specification model depends upon a set of policy decisions that determine how cases are handled. Furthermore, large software systems are continually changing in such a manner that the concept of a static hierarchy is insufficient to capture the process of system evolution. Consider software systems that manage the payment of health insurance claims. Although conceptually simple, these systems handle hundreds of thousands of different cases. One way to represent these cases is to enumerate the leaf nodes of the hierarchies created by the appropriate partitioning of attributes such as gender, age, family status, previous condition, employment, deductibles, copayments, prognosis, and so on. Unfortunately, the tree structure created by case expansion not only obscures relevant and interesting cases, but is also a monolithic structure. A paradox of object-oriented approaches is that well-adapted structures are not adaptable to new situations. Because of the combinatorial explosion of the leaf nodes, it makes sense to handle the cases at as high a level as possible.
Term subsumption systems such as CLASSIC (Borgida, et al., 1989) automate this process by determining the place in a hierarchy in which terms are subsumed. But subsumption systems assume a single structure in which all sub-models can belong. In the case of applications such as health insurance, individual modules may have different hierarchical structures and still maintain the integrity and constraint rules of the domain model. Attribute Definitions Attributes are normally considered as data values or slot fillers within a class or frame. However, the standard treatment of attributes as lists of data values with some underlying machine representation fails both to capture sufficient semantic information from the application domain and to state definitions with sufficient formality to allow semantics-related consistency checks. Attributes are functions which define how a set of objects is mapped within a class. One type of attribute has a value set represented by a nominal scale which consists of a set of values, $\mathcal{A} = \{c_1, ... c_n\}$. One of the ways that the modeling process maps the world into a domain model is by creating categories in such a way that items to be categorized with respect to a particular attribute are as homogeneous as possible within a category and as heterogeneous as possible between categories. Examples of nominal scales abound and map cleanly to the notion of enumerated type as shown below: \[ \text{(Colors)} \\ \quad :type \quad \text{nominal\_scale} \\ \quad :values \quad \{\text{Red, Yellow, Green, Blue}\} \] The next type of attribute is an ordinal scale—a nominal scale in which a total ordering exists among the categories. Interval and ratio scales are the more quantitative scales and add definitions of dimensions, units, and granularity. This brief description of attribute type was included to allow the reader to understand the examples in this paper. Attributes have additional types and a number of other properties which are explained in (Iscoe, et al., 1992). Hierarchical Decomposition Hierarchies are a natural way to view and organize information and, at some level of abstraction, are a part of most object-oriented and knowledge representation languages. Unfortunately, the simplicity of these concepts can sometimes obscure the semantics that a model is attempting to capture. That one's needs dictate one's ontological choice is a fundamental premise of knowledge engineering. The ability to systematically define a new set of attributes by partitioning the value sets of old attributes and then using these new attributes to reclassify the domain in accordance with the new requirements is an important aspect of our attribute characterization. By preserving the "ontological map" as a component of the attribute, the domain modeler can shift between the differing paradigms modeled by various classes of objects. Attribute characterization provides a representation and systematic methodology for the partitioning of attributes that facilitates the way they are organized, subdivided, and built into hierarchies. An attribute restriction is a new attribute whose value set and set of applicable relations are subsets of the original attribute. Creating a new attribute serves the dual purpose of creating a set of views on the old attribute as well as creating a new attribute. Often, new attributes are defined in terms of old attributes by partitioning the original value set and then equating each new attribute value set with an element of the partition. 
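As a hedged illustration of this partitioning mechanism (a Python sketch with invented representation details, not the EDS modeling system itself), an attribute can be recorded together with its scale, and a new attribute can be derived by partitioning the original value set; the accounts receivable example that follows in the text instantiates exactly this pattern.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Attribute:
    """A domain-model attribute: a named function from objects to a value set."""
    name: str
    scale: str                                       # "nominal", "ordinal", "interval" or "ratio"
    values: List[str] = field(default_factory=list)  # used by nominal/ordinal scales

def restrict(base: Attribute, name: str,
             partition: Dict[str, Callable[[float], bool]]) -> Attribute:
    """Derive a new (ordinal) attribute by partitioning the base value set.
    Each new value is identified with one element of the partition."""
    derived = Attribute(name=name, scale="ordinal", values=list(partition))
    derived.mapped_from = base.name      # preserve the "ontological map" back to the base attribute
    derived.classify = lambda v: next(k for k, pred in partition.items() if pred(v))
    return derived

# Illustrative use, mirroring the partitioning described above.
days_to_payment = Attribute("days_to_payment", scale="ratio")
type_of_payer = restrict(days_to_payment, "type_of_payer", {
    "pays_on_time": lambda d: d <= 30,
    "slow_pay":     lambda d: 30 < d < 90,
    "dead_beat":    lambda d: d >= 90,
})
print(type_of_payer.classify(45))   # -> "slow_pay"
```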
As an example, an accounts receivable (AR) system may use the attribute days_to_payment whose value is the average number of days it takes for the client to pay a bill. \[(\text{days_to_payment})\] \[:\text{type} \quad \text{ratio_scale}\] \[:\text{dimension} \quad \text{time}\] \[:\text{unit} \quad \text{days}\] From the standpoint of AR applications, a more useful attribute might be: \[(\text{type_of_payer})\] \[:\text{type} \quad \text{Ordinal_scale}\] \[:\text{Ordered_by} \quad \text{lateness_of_payment}\] \[:\text{values} \quad \text{(pays_on_time slow_pay dead_beat)}\] \[\text{days_to_payment} : \text{Ratio_scale Time in Days (Min 0) (Max 360)}\] Figure 3 — Partitioning days_to_payment This new attribute will be defined by partitioning the value set of days_to_payment by subdividing the range of values, then equating each value with one of the elements of the partition as illustrated in figure 3 and described as follows: \[(\text{type_of_payer})\] \[:\text{mapped_from} \quad \text{days_to_payment}\] \[:(\text{pays_on_time} \quad (<=30))\] \[:(\text{slow_pay} \quad (AND \quad (> 30) \quad (< 90)))\] \[:(\text{dead_beat} \quad (>= 90)))\] Note that the days_to_payment attribute is based on a quantitative attribute while the type_of_payer attribute is based on a qualitative attribute. In general, an attribute mapping represents a loss of information (in this example, the number of days overdue) in return for a more useful and inherently less detailed category. Using Population Parameters Population parameters are used to help automate the process of creating new attributes from old ones. For example, some graduate admissions committees use GRE scores to separate applicants into acceptance categories. Population parameters allow application designers to create new attributes based on restrictions to the original attribute as shown below: \[\text{GRE_Score} : \text{Interval_scale Score in GRE units}\] \[:(\text{min} 400) \quad (\text{max} 1600)\] \[:(\text{dist normal}) \quad (\text{mean 1100}) \quad (\text{stddev 125})\] Figure 4 — Using Population Parameters to Restrict an Attribute Figure 4 shows the GRE score as an attribute which could be attached to a student. Understanding the distribution of values within the value set of GRE scores allows application designers to create partitions in any one of a variety of ways. For example, assume that an application designer wanted to create an initial partition based on the requirement "accept all students who score in the top x% on the GRE, consider those who score between x% and y% and reject those who score in the bottom y%." Given this type of requirement, the domain model contains the appropriate information to use and an algorithm to produce the correct raw score numbers to achieve such a partition. Another way that these requirements are sometimes stated is to build a partition based on an absolute raw score. For example, a requirement like "accept all students who score above 1450 on the GRE" is easily displayed and modeled. Furthermore, this type of specification can be used interactively so that the designer can juggle between raw scores and percentiles until the partitions appropriate for the application domain are produced. Domain and Specification Models In this section we focus on relations between attributes within a single domain model class. 
For the purposes of this discussion we define the following attributes:

(Name :type identifier)
(Gender :type nominal_scale :values (male female))
(Eye_color :type nominal_scale :values (brown blue green))
(Benefits :type nominal_scale :values (Soc_sec RR none))
(Age :type ratio_scale :dimension (time) :unit (year) :granularity (1) :derived (diff_date cur_date birth_date))
(Medicare_payment :type ratio_scale :dimension (money) :unit (dollar) :granularity (.01) :popparms ((min 1) (max 10000) (mean 225)))
(Age_m :type ordinal_scale :values (under65 65_and_over) :mapped_from age :under65 (< 65) :65_and_over (>= 65))

Although many other constraints exist, domain model classes can be regarded as consisting of sets of attributes which are either required or which might be included within a particular domain model. These constraints are expressed as follows:

must_have(c, a) — attribute a must be used in class c in a model.
applicable(c, a) — attribute a can be used in class c in a model, depending on the choice of the specification designer.
cond_must_have(c, a, cond) — attribute a must be used in class c in a model if condition cond evaluates to true.
cond_applicable(c, a, cond) — attribute a can be used in class c in a model if condition cond evaluates to true.

Within any particular specification model, an attribute is simply classified as used within a class:

used(m, c, a) — within model m, attribute a is used in class c.

The most straightforward relationship between a domain model and a specification model is that must_have attributes are used in all specification models and applicable attributes are selected by the specification designer. The following rules formalize the semantics of the four constraints on the use of attributes within classes listed above.

(1) must_have(c, a) → ∀m used(m, c, a)
(2) applicable(c, a) → ∃m used(m, c, a)
(3) (cond_applicable c a ((p1 a1 v1)...(pn an vn))) → ∀m, object [(used m c a) → (used m c a1) ∧ ... ∧ (used m c an)]

For example, in a domain model, name might be required for all specification models, while eye_color could be selected only if it were appropriate for the particular specification model.

(person :must_have ((Name 0)) :applicable ((eye_color 0)) ...)

The application of these constraints when cond is vacuously true is a fairly standard feature in most modeling languages of this type. However, name and eye_color are attributes which are total functions and are not as interesting as the cases that occur when the attributes are partial functions.

Conditions for Function Evaluation

Recalling that an attribute is a function which maps objects to a particular property, cond can be interpreted as the condition which must be satisfied for the attribute to be a total instead of a partial function. In other words, cond defines the subset which is the domain of applicability of the partial function. For example, for the person class, Medicare_payment is only applicable if age is 65 or over and benefits is none.

(4) (cond_must_have c a ((p1 a1 v1)...(pn an vn))) → ∀m, object [(used m c a1) ∧ ... ∧ (used m c an)]

The domain modeling system is designed so that the conditions required to establish the proper domain for an attribute are automatically maintained.
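A rough sketch of how such usage constraints might be evaluated against a specification model is given below (illustrative only; the constraint-table encoding is our own, and the actual system relies on the STR+VE theorem prover and a TMS rather than this direct check). Conditions are (predicate, attribute, value) triples.

```python
# Attribute-usage constraints of a domain-model class, following the four
# constraint kinds above.
CONSTRAINTS = {
    "person": {
        "must_have": ["Name"],
        "applicable": ["Eye_color"],
        "cond_applicable": {
            # Medicare_payment applies only when Age_m = 65_and_over and Benefits = none.
            "Medicare_payment": [("=", "Age_m", "65_and_over"), ("=", "Benefits", "none")],
        },
    },
}

def check_class(spec_used, cls):
    """spec_used: the set of attributes a specification model uses in class cls.
    Returns violations of rules (1) and (3) for that class."""
    rules = CONSTRAINTS[cls]
    violations = []
    for attr in rules.get("must_have", []):
        if attr not in spec_used:                  # rule (1): must be used in every model
            violations.append(f"{cls}: missing required attribute {attr}")
    for attr, cond in rules.get("cond_applicable", {}).items():
        if attr in spec_used:                      # rule (3): using attr pulls in the
            for _, cond_attr, _ in cond:           # attributes its condition mentions
                if cond_attr not in spec_used:
                    violations.append(f"{cls}: {attr} requires {cond_attr} to be used")
    return violations

print(check_class({"Name", "Medicare_payment", "Benefits"}, "person"))
# -> ['person: Medicare_payment requires Age_m to be used']
```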
These conditions are constrained in such a way that tractability is maintained and are of the form ((p1 a1 v1)...(pn an vn)), where p_i is the name of a predicate, a_i is the name of an attribute, and v_i is a value of the attribute. A user can create a specification model with any particular class hierarchy as long as the domain policies and constraints are satisfied. We are currently experimenting with ways to capture and verify domain modeling constraints by presenting redundant information in a variety of ways. We believe that many of the specification problems in large systems are created when value set changes cause a single case to be changed but fail to correct cases that were identified from a previous inference.

For example, if we assume that Medicare_payment is only applicable if age is 65 or over and benefits is none, the system can infer that Medicare_payment cannot apply to a person who is younger than 65. In fact, assume

(cond_applicable person Medicare_payment ((= Age_m 65_and_over) (= Benefits none))),

then

(5) ∀m, object [(used m person Medicare_payment) → (used m person Age_m) ∧ (used m person Benefits) ∧ ((instance m person object) ∧ (in (Medicare_payment object) [1 10000]) → (= (Age_m object) 65_and_over) ∧ (= (Benefits object) none))]

After Medicare_payment is used in a model, if a user tries to assign a Medicare_payment to a person who is younger than 65, rule (5) will lead to a contradiction. A key point is that when people are presented with value sets they automatically and unconsciously perform substitutions such as the ones listed above. This is a reasonable way to build a model until a value set changes. In large systems, value sets are frequently changed. Consequently, conclusions that were drawn by using negation to infer values become invalid. We use the applicability of conditions and the system's knowledge of value sets to attempt to provide the proper cases for the domain modeler to check when conditions change.

Discussion

In this paper, we have presented the concept of modeling application domains in order to achieve the operational goals of program specification, code generation, and reverse engineering. The main concept is that multiple specification models can be created that are consistent and "correct" with respect to a domain model. Domain models represent information about a particular industry area. Specification models represent information about a particular system. The middle oval on the right side of figure 1 represents the process of code generation through program transformation. Given a specification model, executable code can be generated by performing a series of correctness-preserving transformations on the specification. The goal of this part of the project, which is not yet active, is to produce efficient code that satisfies the original specification. Domain and specification models are constructed by using a graphical interface to interactively create a set of rules based on attribute value set partitions and the preceding axioms. The system is being implemented using a Motif GUI on SPARC workstations. Although it is currently operating in single-user mode, it is being designed to be accessed simultaneously by multiple domain modelers.
We are also trying to accelerate the knowledge capture process by reverse engineering data models that have been captured by an existing EDS CASE tool and instantiating them into the appropriate domain models.

Acknowledgments

We wish to thank Betty Milstead and Raman Rajagopalan for their comments on earlier drafts of this paper.

References
{"Source-Url": "https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19930008325.pdf", "len_cl100k_base": 4963, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21406, "total-output-tokens": 6080, "length": "2e12", "weborganizer": {"__label__adult": 0.00029468536376953125, "__label__art_design": 0.00032901763916015625, "__label__crime_law": 0.00028324127197265625, "__label__education_jobs": 0.00084686279296875, "__label__entertainment": 4.357099533081055e-05, "__label__fashion_beauty": 0.00013458728790283203, "__label__finance_business": 0.00022542476654052737, "__label__food_dining": 0.0002961158752441406, "__label__games": 0.0003521442413330078, "__label__hardware": 0.0006170272827148438, "__label__health": 0.000377655029296875, "__label__history": 0.00017273426055908203, "__label__home_hobbies": 8.535385131835938e-05, "__label__industrial": 0.00034236907958984375, "__label__literature": 0.0002143383026123047, "__label__politics": 0.00017583370208740234, "__label__religion": 0.0003209114074707031, "__label__science_tech": 0.01425933837890625, "__label__social_life": 7.43865966796875e-05, "__label__software": 0.00525665283203125, "__label__software_dev": 0.974609375, "__label__sports_fitness": 0.0002428293228149414, "__label__transportation": 0.00042510032653808594, "__label__travel": 0.00015842914581298828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26406, 0.02898]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26406, 0.65699]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26406, 0.89666]], "google_gemma-3-12b-it_contains_pii": [[0, 4417, false], [4417, 6982, null], [6982, 12695, null], [12695, 17493, null], [17493, 21341, null], [21341, 26406, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4417, true], [4417, 6982, null], [6982, 12695, null], [12695, 17493, null], [17493, 21341, null], [21341, 26406, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26406, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26406, null]], "pdf_page_numbers": [[0, 4417, 1], [4417, 6982, 2], [6982, 12695, 3], [12695, 17493, 4], [17493, 21341, 5], [21341, 26406, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26406, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
7b9b968c8cdb1be970fa1e8bdc58b689641711d5
Abstract. BPEL/WSBPEL is the main approach for combining individual web services into integrated business processes. A BPEL/WSBPEL scenario allows for specifying which services will be invoked, their sequence, the control flow and how data will be exchanged between them. BPEL, however, does not include mechanisms for considering the invoked services’ Quality of Service (QoS) parameters, and thus BPEL scenarios cannot customize their execution to the individual user’s needs or adapt to the highly dynamic environment of the web, where new services may be deployed, old ones withdrawn or existing ones change their QoS parameters. Moreover, infrastructure failures in the distributed environment of the web introduce an additional source of failures that must be considered in the context of QoS-aware service execution. This thesis proposes a framework for addressing the issues identified above; the framework allows the users to specify the QoS parameters that they require and undertakes the task of locating and invoking suitable services. In this dissertation, two strategies for selecting the most suitable service are considered: (a) a greedy strategy and (b) a partner link-level strategy. The proposed framework also intercepts and resolves faults occurring during service invocation, respecting the QoS restrictions specified by the consumer. Methods for tackling syntactic differences between functionally equivalent services are considered as well, thus broadening the pool of available services for each adaptation. Finally, performance metrics for the proposed framework are presented, which validate its applicability to operational environments.

1 Introduction

Web services have emerged as a new standard whose main focus is to allow applications over the Internet to communicate with each other, independently of execution platform, programming language and implementation details. The web service paradigm has been adopted by the research community and industry alike; however, a number of challenges still lie ahead for fully covering the needs of both service providers and consumers. [1] identifies a number of open issues in the current SOA state-of-the-art, spanning four major categories, namely service foundations (the service-oriented middleware backbone that realizes the runtime SOA infrastructure), service composition, service management and monitoring, and service design and development. In particular, [1] lists “service governance” as a major research challenge, stating that the potential composition of services into business processes across organizational boundaries can function properly and efficiently only if the services are effectively governed for compliance with QoS and policy requirements. Services must meet the functional and QoS objectives within the context of the business unit and the enterprises within which they operate. In this context, development procedures as well as composition and execution mechanisms need to take into account the QoS dimension of web services in order to formulate successful business processes that will satisfy users’ (either businesses’ or individuals’) expectations.
Regarding service composition into business processes, the predominant approach used nowadays is the formulation of BPEL/WSBPEL scenarios [2], in which the BPEL designer specifies the business process logic; this includes the invocation of selected web services, control flow constructs and data flow arrangements in the form of result gathering and parameter passing, while provisions for exception handling (such as service unavailability or business logic faults) also exist. BPEL scenarios, however, do not include facilities either for specifying QoS parameters for services or for dynamically selecting the web service to be called at runtime; therefore, the BPEL scenario designer must select the concrete service implementation to be invoked in the context of the business process while creating the scenario, by examining the QoS parameters of functionally equivalent services. This alternative, however, is not a viable one since (a) the same BPEL scenario may be used by different users with diverging or even contradictory requirements and (b) even if the “best choice” is made at some point in time, there is no guarantee that this choice will continue to be optimal in the future. Moreover, in the presence of failures, it would be desirable for the system to be able to locate and use “second best” choices automatically, provided that they deliver the required functionality and satisfy the QoS restrictions.

2 Summary

2.1 Motivation and Challenges

The main objective of web service technology and related research [3] is to provide the means for enterprises to do business with each other and provide joint services to their customers under specified Quality of Service (QoS) levels. The collaboration of web services, possibly provided by different companies, in order to create composite and potentially highly complex business processes elevates the need for Business Process Management (BPM) [4], [5]. When trying to model real-world business processes with BPEL scenarios, a series of challenging issues may emerge. Specifically:

1. processes may be long-running, on the order of hours, days or even longer. Such issues commonly arise in cases where human intervention is required for the completion of all or some of the services that comprise the process.
2. BPEL scenarios may try to model stable and established processes that remain relatively unchanged. Examples of such processes are those that represent interactions with Government-based services, spanning the range of G2x and x2G acronyms.
3. as the complexity of the process and the number of cooperating services needed increase, so does the volatility of these services. New services implementing the same process may appear, existing ones may be decommissioned, or the BPEL designer may not be aware of all the services that can be utilized at the time of the designing phase.
4. quality requirements for the process may change during the lifetime of the BPEL scenario. This may be due to different needs of end-users (a real-world counterpart of this case is one person sending a package using courier mail to minimize delivery time, whereas another person may use ordinary surface mail to pay less), or to alterations in organizational policy.

2.2 Problem Identification and Objective

In cases such as the above, the static nature of BPEL scenarios and their handling by BPEL engines fail to accommodate the dynamic nature of real-world processes.
To cope with these situations, the BPEL scenario would have to be redesigned and re-deployed, possibly forcing existing transactions to fail or be re-started. For accommodating the different needs of end-users, the alternative approach of maintaining different versions of the BPEL scenarios could also be taken, with each version being targeted to a specific user category (e.g. “express delivery” vs. “economic delivery”); this arrangement, however, would increase development and maintenance costs and would weaken the overall system manageability. To tackle these issues, this dissertation proposes an approach that relies on a dynamic service selection mechanism based on functional and non-functional (quality) criteria for selecting the most suitable service per scenario invocation. Furthermore, this mechanism provides for replacing unavailable or invalidated services with available and valid ones, choosing the optimal candidate per service invocation based on the current criteria. The criteria can be different on each run and can provide for diverse needs depending on the invoker. The basic features and innovations this dissertation introduced were:

- the concept of a replacement candidate for web services was formalized, considering criteria related to the specific BPEL scenario execution instead of the generic functionality or behavior of the service. Replacement candidates are used for hot-swapping failed services within a BPEL transaction, allowing the BPEL scenario to complete its execution. The formalization introduced allows including more services in the “replacement candidate” pool and therefore formulating execution paths with better qualitative characteristics.
- the notion of service selection affinity was introduced, which allows for maintaining the transactional characteristics of BPEL scenarios in the presence of adaptation.
- an approach to bridging the syntactic differences between functionally equivalent services was proposed, which greatly enhances the maintainability of the equivalent services repository, trading off a degradation in performance that has been quantified to be quite small.
- a method for distinguishing between system faults and business logic faults was proposed; this distinction is important since faults in the former category can be resolved by automatically invoking a replacement candidate for the failed service, while this is not possible for faults in the second category.
- a framework that enables the automatic resolution of system faults and the dynamic adaptation of BPEL scenario execution according to QoS criteria was proposed. The framework is independent of the particular BPEL execution engine used, and methods have been proposed for setting the QoS criteria granularity (for all scenarios executing in the system; for the scenario as a whole; for each individual service within a scenario). The framework includes provisions for maintaining the transactional characteristics of BPEL scenario execution, making use of the service selection affinity notion.
- the feasibility of the above was proved through a complete system implementation and quantification of its performance.
- the issue of BPEL scenario adaptation in the context of secure web services invocation was identified, and an architecture for a system that supports such adaptation was drafted.
2.3 Related Work

In this section some related work is outlined along the following research directions:

QoS management in web services composition: In [6] a framework named AgFlow [7] is presented as a middleware platform that enables the quality-driven composition of Web services. In AgFlow, the QoS of Web services is evaluated by means of an extensible multidimensional QoS model. It offers two selection policies: local optimization of individual tasks and global planning. The first is similar to the one proposed in this thesis and uses the Simple Additive Weighting [8] technique to select the optimal service for a given task. The approach proposed here differs from it in that it deals with an already defined composition scenario and does not propose a re-planning method to change the task execution order or to replace one set of tasks with another. Instead, it uses a proxy-like service that is invoked for each individual task in the business scenario in order to discover the optimal service for each of them, based on a specific consumer's quality policy at execution time. In [9] a web service proxy is introduced in order to perform dynamic binding of related web services under specified user constraints. The selection of equivalent services is not only filtered by these constraints; a quality score is also computed for each equivalent service from a quality vector and a set of quality weights. In [10] the importance of quantifiable QoS aspects in web services composition and monitoring is illustrated. It describes an algorithm capable of capturing and reflecting the state of the web services involved in the integration process.

**Exception management in web services composition:** In [11] a policy-driven approach to exception management is introduced. An exception handling policy language is designed, which defines deviation situations and the associated exception handlers. The approach proposed here complements that solution by discovering an optimal alternate service task to perform the alternative action mentioned. A notable piece of research in this area is the one introduced in [12], which presents a component called BPBot (Business Process roBOT). A business process is executed by a collection of BPBots that are dynamically organized as a hierarchical structure. In contrast, the solution proposed here does not re-plan an execution path; it discovers functionally and qualitatively equivalent services to perform the determined business tasks without changing the task execution sequence. Moreover, in the course of this dissertation the author published related papers ([20], [21], [22], [23], [24]) on BPEL scenario adaptation in the context of exception resolution, and on security issues in exception handling in [25].

**Semantic Web Services:** In the past few years, the issue of exception resolution in composite web services has drawn the researchers' attention. A noteworthy approach to exception handling is the one undertaken by the METEOR-S project [13], [14] in cooperation with WSMX (Web Services Execution Environment) [15]. WSMX contains a discovery component, which undertakes the role of locating the services that fulfill a specific user request. This task is based on the WSMO conceptual framework for discovery [16].
WSMO includes a Selection component that applies different techniques, ranging from a simple "always the first" choice to multi-criteria selection of variants (e.g., based on web services' non-functional properties such as reliability, security, etc.) and interactions with the service requestor. Both in METEOR-S and in other approaches, functional and non-functional properties are represented using shared ontologies, typically expressed in DAML+OIL [17] and, later, OWL-S. Such annotations enable the semantically based discovery of relevant web services and can contribute towards the goal of locating services with "same skills" [18] in order to replace a failed service in the process flow. The main difference between the research presented here and the work referenced above is that the selection of replacements for services that have failed within an execution plan is made dynamically, instead of using pre-determined exception resolution scenarios. Replacement service selection is based on both functional equivalence (performed through semantic matching) and qualitative replaceability (considering non-functional attributes). Furthermore, qualitative replaceability criteria may be defined by the composite service invoker, to more accurately specify which replacement service is the most suitable one in the context of the current execution.

**2.4 Brief Description**

**Service Quality Vectors**

In order to enable the selection of the "most suitable" operation according to some QoS specification, the QoS attributes of the operations should be represented in an unambiguous and system-processable format; additionally, means for expressing QoS-related operation selection criteria should be provided. For brevity, in the following we will consider only the QoS parameters cost, security, performance, response time and availability, adopting the definitions in [19]. Since QoS values may be reported by different sources using different value domains, mappings between the domains employed by each source and a common numeric scale are used (Table 1).

Table 1. Mapping of QoS values

<table>
<thead>
<tr> <th>Parameter</th> <th>QoS provider 1</th> <th>QoS provider 2</th> <th>Value</th> </tr>
</thead>
<tbody>
<tr> <td>Cost</td> <td>10 €</td> <td>11 €</td> <td>1</td> </tr>
<tr> <td>Security</td> <td>6 (out of 10)</td> <td>62 (out of 100)</td> <td>3</td> </tr>
<tr> <td>Performance</td> <td>High throughput</td> <td>99%</td> <td>5</td> </tr>
<tr> <td>Response time</td> <td>0.0001 ms</td> <td>Real-time</td> <td>1</td> </tr>
<tr> <td>Availability</td> <td>High</td> <td>&gt; 95%</td> <td>4</td> </tr>
</tbody>
</table>

In the approach illustrated here, three vectors that define the QoS criteria for a process invocation are considered; in other words, a QoS specification is defined as a triple (MAX, MIN, W), where MAX, MIN and W are quality vectors (defined below). The quality vectors for the QoS attributes considered in this work are defined as:

Table 2. Quality vectors

MAX = (cost_max, sec_max, perf_max, resp_max, avail_max)
MIN = (cost_min, sec_min, perf_min, resp_min, avail_min)
W = (cost_w, sec_w, perf_w, resp_w, avail_w)

**ASOB Framework**

Figure 1 illustrates the overall architecture of our approach to dynamic, policy-driven execution of a business scenario with QoS-aware and policy-adhering exception management techniques. The component undertaking this responsibility is the Alternative Service Operation Bind (ASOB).
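Before describing the ASOB processing flow, the normalization implied by Table 1 can be made concrete: values expressed in heterogeneous domains (euros, ratings out of 10 or 100, textual levels, percentages) are mapped onto a common numeric scale before they can be compared or weighted. The following C++ sketch illustrates one possible mapping for the availability attribute; the scale, thresholds and function names are assumptions made for the example, not the mapping actually used by the framework.

```cpp
#include <iostream>
#include <string>

// Map an availability figure reported as a percentage onto a 1-5 common scale.
int availabilityFromPercent(double percent) {
    if (percent >= 99.0) return 5;
    if (percent >= 95.0) return 4;
    if (percent >= 90.0) return 3;
    if (percent >= 80.0) return 2;
    return 1;
}

// Map an availability figure reported as a textual level onto the same scale.
int availabilityFromLevel(const std::string& level) {
    if (level == "Very high") return 5;
    if (level == "High") return 4;
    if (level == "Medium") return 3;
    if (level == "Low") return 2;
    return 1;
}

int main() {
    // Both providers of Table 1 end up on the same scale and become comparable.
    std::cout << availabilityFromLevel("High") << '\n';  // QoS provider 1: "High"
    std::cout << availabilityFromPercent(96.0) << '\n';  // QoS provider 2: "> 95%"
}
```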
The BPEL scenario (SC) as crafted by the BPEL designer is processed by the ASOB preprocessor, which produces an ASOB-aware BPEL scenario (SC_ASOB) as output. For each service, the ASOB middleware then calculates an overall score that takes into account all the operations of the service listed in the BPEL scenario and the respective QoS weights that the client has specified at the pre-processing phase:

\[ Sc_{WS} = \sum_{op \in Ops_{WS}} \; \sum_{attr \in \{\text{cost}, \text{sec}, \ldots\}} w_{attr,op} \cdot QoS_{attr}(op) \]

where $Ops_{WS}$ is the set of operations of service $WS$ used in the scenario, $w_{attr,op}$ is the client-specified weight of attribute $attr$ for operation $op$, and $QoS_{attr}(op)$ is the operation's value for that attribute on the common scale. In case of a failure, ASOB replaces the failed service with the candidate that owns the highest score $Sc_{WS}$. The interested reader will find the main processing of the ASOB framework described in more depth in the main dissertation text.

3 Results and Discussion

The contribution of the ASOB framework to the field is as follows:

1. It allows the BPEL scenario designer to specify the desired QoS parameters for each service. These parameters are specified through standard BPEL variables; thus the designer may examine scenario input parameters when setting them, thereby tuning the adaptation of the particular BPEL scenario execution to the desires and needs of the scenario consumer.
2. It does not require any modification to the BPEL syntax or semantics.
3. It takes the execution flow specified by the designer as given, and optimizes service selection within this flow, contrary to service composition approaches which define this flow dynamically. This is an important aspect in cases where the execution flow is carefully crafted by the designer to reflect particularities of the business process, specialized exception handlers are used, etc.
4. It incorporates exception handling as an integral part of the adaptation process, allowing for switching to the "next best" solution when the originally selected candidate is unavailable.
5. It does not use pre-determined alternative paths, but selects services dynamically from a suitable registry.
6. It employs XSLT transformations through which the middleware bridges the syntactic differences between the service originally specified in the BPEL scenario and other services that are semantically equivalent but syntactically different. This arrangement offers the middleware a wider range of choices for the stage of deciding which service provider best matches the QoS specifications given in the BPEL scenario.
7. It considers service selection affinity, enabling the conducting of multi-operation transactions with providers.
8. It introduces the notion of the service replacement candidate, which relaxes the requirements for service equivalence. Service replacement candidates are computed for the context of a particular BPEL scenario and take into account only the operations used in the scenario, not all operations offered by the services. This arrangement enables the middleware to avoid cases where some operation that is not used in a scenario breaks the equivalence of two services and thus would disallow the consideration of some alternates.
9. It elaborates on the management of consumer session memory, which supports the maintenance of service selection affinity.
10. It provides full details of the algorithms used by the middleware to process web service invocations.
11. It includes a partner link-level strategy for deciding which service provider best matches the QoS profile specified in the BPEL scenario; the partner link-level strategy can significantly improve the service provider selection when a BPEL scenario uses multiple operations from the same service provider, while it may also avoid cases where the greedy strategy is unable to find any appropriate execution path for servicing the scenario.

Algorithms in pseudo-code can be found in the main text of this dissertation.

3.1 Performance Evaluation and Results

Figure 2 illustrates the ASOB internal process time for single web service operation invocations, against the overall service repository (SR) size and the number of equivalent services present in the repository. The overhead increment when the number of alternate services increases, on the other hand, is considerable, mainly affecting the sorting of the candidate operation list (typically of complexity $O(n \log n)$).

Fig. 2. ASOB internal process time

Table 3. XSLT transformation overhead

<table>
<thead>
<tr> <th>concurrent ASOB invocations</th> <th>20</th> <th>40</th> <th>60</th> <th>80</th> <th>100</th> </tr>
</thead>
<tbody>
<tr> <td>time in msecs (average per transformation)</td> <td>17.8</td> <td>18.5</td> <td>34.5</td> <td>46.2</td> <td>61.7</td> </tr>
</tbody>
</table>

Table 3 shows the overhead incurred by applying XSLT transforms on request and response SOAP messages, to resolve syntactical differences between operations that are *semantically* but not *syntactically* equivalent. Figure 3 illustrates the number of operation invocations that can be served in a unit of time against the number of concurrent invocations when (a) services are directly invoked and (b) invocations are made through the ASOB middleware.

Fig. 3. Invocation throughput

Fig. 4. BPEL scenario execution time

Figure 4 illustrates the execution time of a BPEL scenario containing two web service invocations against the number of concurrent executions. The increment is very small (4%–9% without XSLT transformations, 8%–16% with XSLT transformations). Figure 5 depicts the BPEL scenario execution throughput against the number of concurrent executions. The behavior is consistent with the previous diagrams.

4 Conclusions

Building processes that are able to cope with the dynamics of real-world requirements has always been a challenging endeavor. The adoption of BPEL in the design and execution phases of business processes has already yielded gains in speed and reliability, but has so far not been able to successfully address issues arising from the dynamic nature of the processes themselves, the diversity in user requirements and the inherent instability of distributed environments, which leads to a number of system faults. The framework presented in this dissertation addresses these shortcomings by employing a dynamic service selection mechanism based on QoS criteria for a BPEL process; these criteria are defined by the BPEL scenario designer and can be set to reflect the end-user requirements. Service attributes are stored in a repository that holds the services' functional and non-functional (qualitative) characteristics. Updating the repository suffices to reflect changes in the real world (service introductions or withdrawals, changes to services' QoS aspects, etc.).
An exception resolution mechanism for faults owing to systemic reasons is also included, thus easing the work of the BPEL designer. The strategy employed by the presented framework for binding a partner link to a specific service provider can follow either (a) a greedy strategy, according to which the QoS aspects of only the first operation invoked for a particular partner link are examined to determine the binding, or (b) a partner link-level strategy, which reviews all invocations collectively, avoiding suboptimal bindings and cases where the greedy strategy leads to an inability to successfully conclude the BPEL scenario.

Open issues in this field include a detailed evaluation of the partner link-level strategy regarding (a) its performance, i.e. the time needed to determine the optimal binding for a partner link, and (b) the quality of the execution plans it produces. Execution plan quality is a twofold aspect involving (i) the degree to which the bindings performed by the middleware correspond to the QoS specifications listed in the BPEL scenario and (ii) the number of cases where the partner link-level strategy bindings lead to successful execution of the BPEL scenario, contrary to the bindings of the greedy algorithm. Moreover, the collection and exploitation of statistics regarding the number of invocations of each particular operation in the context of a specific BPEL scenario could be investigated, so as to use a more elaborate weight assignment when calculating the suitability scores of different bindings.

References

17. DAML+OIL. Available at: http://www.daml.org/2001/03/daml+oil-index.html
Contents

Chapter 1  Installing Cincom Smalltalk
  System Requirements
    ObjectStudio
    VisualWorks
  Getting Help

Chapter 2  Installing VisualWorks
  Running the VisualWorks Installer
  Installing Additional VisualWorks Components
  Starting VisualWorks the First Time
    Project Manager
    Launching from the Command Line
    Loading Parcels
  Setting Up a Network Environment
    Set VisualWorks Home Directory
  Uninstalling Products

Chapter 3  Installing ObjectStudio 8
  Running the Installer
  Installation Options
    Components
    Program Group Options

Chapter 4  Thank You...

1 Installing Cincom Smalltalk

This release of Cincom Smalltalk™ contains complete versions of VisualWorks® 7.8 and ObjectStudio® 8.3, including object engines, virtual image, and add-on products. The release contains new features, as well as many fixes. The release is distributed on two discs:

- one CD containing VisualWorks 7.8
- one DVD containing ObjectStudio 8.3 and a collection of Smalltalk Daily podcasts with James Robertson

System Requirements

ObjectStudio

ObjectStudio 8.3 runs on these Microsoft Windows platforms:

- Microsoft Windows 7, XP, Vista, Server 2003

Disk and Memory Requirements

- 512 MB of memory recommended, minimum
- Apx. 435 MB disk space
  - 415 MB in Program Files (cincom/ObjectStudio8.3/)
  - 18 MB in the Home directory (ObjectStudio8.3/)
- DVD-ROM drive

VisualWorks

VisualWorks 7.8 runs on workstations with the following minimum system configurations.

**Disk and Memory Requirements**

- 512 MB of memory recommended, minimum
- Apx. 610 MB disk space for default installation
- Apx. 780 MB disk space for full, single platform installation
- Apx. 1.2 GB disk space for full installation with all platforms
- CD-ROM drive (for installation)

**Microsoft Windows**

- A PC or compatible with an Intel Pentium compatible processor
- Windows 7, XP SP2, Vista, Server 2003

**HP-UX**

- HP 9000 Series 700 workstation
- HP-UX Release 11.x

**Sun Solaris**

- 32-bit requires at least SPARC V7 processor architecture
- 64-bit requires at least SPARC V9 processor architecture
- Solaris 8 (SunOS 5.8) or better

**IBM AIX**

- AIX workstation with PowerPC processor
- AIX release 5.3, 6.x, or 7.x

**Apple Mac OS X - Aqua**

- Mac OS X Leopard (10.5), or Snow Leopard (10.6)

The Mac OS X object engine is now distributed as a universal binary that will run on either PowerPC or Intel Macintosh computers.
**Apple Mac OS X - X11**

- Mac OS X Leopard (10.5), or Snow Leopard (10.6)
- X11 libraries for Mac OS X

The Mac OS X object engine is now distributed as a universal binary that will run on either PowerPC or Intel Macintosh computers.

**Linux x86/x86-64**

- 32-bit requires an Intel Pentium compatible processor
- 64-bit requires an AMD x86-64 compatible processor
- Linux kernel version 2.4 or later
- GNU glibc version 2.2 or later

**Linux PowerPC**

- A PowerPC compatible processor
- Linux kernel version 2.4 or later
- GNU glibc version 2.2 or later

---

**Getting Help**

If, after reading this document, you need additional help:

- Commercial licensees can contact Cincom Technical Support. Cincom provides help on product installation. For other issues, send email to helpna@cincom.com.
- Non-commercial licensees can get help on-line from the resources listed in the VisualWorks *Application Developer's Guide*.

Before contacting Technical Support, please be prepared to provide the following information:

- The release number, which is displayed when you start VisualWorks.
- Any modifications (patch files, auxiliary code, or examples) distributed by Cincom that you have imported into the image.
- The complete error message and stack trace, if an error notifier is the symptom of the problem. To do so, use Copy Stack, or select and copy the text in the error window, and paste the text into a file that you can send to Technical Support.
- The hardware platform, operating system, and other system information you are using.

You can contact Technical Support using any of the following methods:

<table>
<thead>
<tr>
<th>E-mail</th>
<th>Send questions about VisualWorks to: <a href="mailto:helpna@cincom.com">helpna@cincom.com</a>.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Web</td>
<td>Visit: <a href="http://supportweb.cincom.com">http://supportweb.cincom.com</a> and choose the link to Support.</td>
</tr>
<tr>
<td>Telephone</td>
<td>Within North America, call Cincom Technical Support at (800) 727-3525. Outside North America, contact the local authorized reseller of Cincom products.</td>
</tr>
</tbody>
</table>

2 Installing VisualWorks

VisualWorks can be installed either from CD or by download from the Cincom Smalltalk website (non-commercial only). The VisualWorks installer is the recommended option for most users. The installer launches automatically from the distribution CD, or can be downloaded from the Cincom Smalltalk Download site. Experienced VisualWorks users may prefer simply to extract files from the CD or website. Configuration details, such as setting paths and file associations, must then be performed manually. Detailed instructions for this installation style are provided on the download page.

Running the VisualWorks Installer

The VisualWorks Installer can be run from either the Cincom website (non-commercial version only) or a distribution CD (commercial or non-commercial versions).

- To install from the web, visit the Cincom Smalltalk Download site. Select Cincom Smalltalk, and then select to install the Net Installer. Once installed, the Installer starts.
- To install from the Cincom Smalltalk CD, insert the CD in a drive. On many systems the Installer starts automatically. If it does not, start it using the method appropriate to your platform:
  - Windows: Double-click on the installWin.bat script file.
  - UNIX/Linux: Execute the installer shell script installUnix.
  - Mac OS X: Double-click on the installMacOSX.command file.

Upon startup, the Installer provides installation options.
Select and follow the instructions for either of the installation options:

- **"Typical" Installation**, which installs the most popular components for the current platform, or
- **"Custom" Installation**, which gives you complete control over the components to install and the installation location.

Select an option and click **Next**. Follow the onscreen instructions to complete the installation. After all components have been installed, the Installer indicates successful completion. Click **Exit** to finish. This completes the installation.

For Mac OS X, Linux, and UNIX installations an informational screen is displayed with instructions for setting your UNIX system variables. This information is also saved in the text file `userActions.txt`, located in the install directory.

**Installing Additional VisualWorks Components**

After the initial VisualWorks installation, you can use the Installer application again to install additional add-on components.

1. If you installed from the Cincom Smalltalk CD, load it in your computer's CD-ROM drive.
2. Start the installer:
   - Windows: Go to Start > Programs > VisualWorks 7.8 > Install/Uninstall
   - UNIX: Execute the script vw7.8nc/Install_Uninstall
   - MacOS: Double-click on the installation image file vw7.8nc:image:install.im
3. Once the Welcome screen appears, select Custom Install, follow the initial steps as described in the previous section, clicking Next until you reach the Components to Install screen.
4. Select the components you wish to add, and click Next.
5. When the installation is complete, click Close to exit.

---

Starting VisualWorks the First Time

Depending on your operating system, there may be several ways to launch a session. The preferred method is to launch the Project Manager, and create or open a project image from that central point. On Windows and MacOS platforms, a desktop icon is available to launch the Project Manager. On all platforms, command-line execution is an option as well. Refer to the Application Developer's Guide for the full range of these options.

Project Manager

The VisualWorks Project Manager is a simple application (LaunchPad.im) that helps you manage (create, launch or delete) your VisualWorks development projects. Each project is created as a Smalltalk image file in its own directory, which the manager creates in a user-writable location separate from the VisualWorks installation. The VisualWorks installer places a **VisualWorks Projects** launcher on the desktop (a shortcut on Windows, or an applet on Mac OS X), which starts the LaunchPad application.

With the LaunchPad application, you can

- Create and launch a new project (with the `[+]` button)
- Launch an existing project (with its arrow button)
- Remove an existing project (with its `[-]` button)
- Change the VisualWorks Projects root directory (drop-down icon at top-right)

The default VisualWorks Projects root directory is:

- on Windows, a subdirectory of the standard My Documents folder, e.g., `C:\Documents and Settings\<username>\My Documents\VisualWorks Projects`
- on Mac OS X and Linux/Unix platforms, a subdirectory of the standard $HOME location, e.g., `/Users/<username>/VisualWorks Projects`

The VisualWorks Projects root directory is persisted in the environment variable VWPROJECTS. This is managed automatically by the LaunchPad application on Windows (through the Windows registry) and on Mac OS X (in the VM's .plist file).
On Linux and Unix platforms, you manage this environment variable in your shell scripts the same way you currently manage setting the $VISUALWORKS environment variable.

Launching from the Command Line

To start VisualWorks, you run the object engine (also called the virtual machine) with the image file passed as the argument:

```
object_engine image_file
```

On MS-Windows systems, the virtual machine name is visual.exe, and on MacOS and Unix systems it is simply visual. By default, the virtual machine is installed in the bin/<platform> subdirectory of the root VisualWorks installation directory. The initial image file on all platforms is visual.im (visualnc.im for non-commercial) and is installed in the image subdirectory. The image is exactly the same on all platforms. This file should be write-protected, and you should never save over it. Instead, you will want to save one or more "working" images and use those for your development work.

To launch VisualWorks the first time using this command line interface, start by changing to the image subdirectory, and execute the object engine with the image as argument. For example, on Windows:

```
> cd c:\vw7.8nc\image
> ..\bin\win\visual.exe visual.im
```

and on a UNIX or Linux system:

```
$ cd /usr/local/vw7.8nc/image
$ exec ../bin/linux86/visual visual.im
```

Note that the paths may be different on your system. This approach makes the image directory the current directory for execution, so images will be saved there by default. On Mac OS X, you must use the open command:

```
user% open -a visual.app visual.im
```

On some platforms, there are several engines you can use, as described in the Application Developer's Guide. For development work, it is recommended that you use the engines named vw<platform>, such as vwnint.exe for Windows platforms, and vwlinux86 on Linux. Using these engines can make debugging easier in case of an engine crash. When successfully launched, the VisualWorks splash screen is displayed, and then the VisualWorks Launcher and a Workspace are displayed.

(Screenshot: the VisualWorks Launcher and a Workspace window.)

**Loading Parcels**

VisualWorks is divided into separate parcels, which are external Smalltalk binary and source code components (also known as packages). By selectively loading and unloading parcels, you can control the size of the image, adding only the functionality you need. Loading parcels is much faster than loading and compiling Smalltalk source code.

To load a parcel/component that has already been installed by the Cincom Smalltalk Installer:

1. Start VisualWorks, and open the Parcel Manager (click on System > Parcel Manager in the Launcher). (Screenshot: the Parcel Manager window.)
2. Browse the categories (folders) of parcels under the **Suggestions** tab, especially the **Essentials** and **Developer Tools** categories. VisualWorks has default parcel paths for many add-on products, but if the path for the product you are installing is either not set, or is set incorrectly, the parcel will not appear in the parcel list. In this case, an additional path needs to be added. To add or correct the parcel path for the product you are installing, use the **Parcel Path** page in the Settings Tool (System > Settings).
3. To load a parcel in the Parcel Manager, select the desired parcel and then pick **Parcel > Load**. A dialog may open, explaining that additional code may be loaded. Typically you should click the **yes to all** button to continue.
Additional configuration may be required by add-on products. If so, instructions are provided in the configuration or installation instructions for that product.

Each parcel file (.pcl) has an associated source file (.pst) that holds the source for all the code in the parcel. Both files are effectively binary and must not be altered except by the parcel publishing mechanism. If you extract parcels from an archive (zip) format, you should disable any conversion options provided by your archiver. For example, if you use WinZip, turn off TAR file smart CR/LF conversion. Failure to do so will result in errors when trying to browse the source for a parcel within VisualWorks.

## Setting Up a Network Environment

The section Starting VisualWorks the First Time (above) includes instructions for configuring a stand-alone, single-user environment. In a networked environment there are additional considerations. The following recommendations are targeted at this networked style of configuration. Here is a recommended setup:

1. Make all the original installation files and directories read-only. While this is a good idea in a single-user environment as well, it is especially important in multi-user environments. Allowing several developers to write to the same files will cause serious data corruption errors.
2. Each user creates directories for their own images and parcels. Typically, this will be on the users' local drives or in their private working area of a network drive. For example:

   On Windows:
   ```
   c:\vwwork\myimages
   c:\vwwork\myparcels
   ```
   On UNIX/Linux:
   ```
   <yourhome>/myimages
   <yourhome>/myparcels
   ```
3. Set up a launcher mechanism (e.g., shortcuts on Windows, or execution scripts on UNIX) to run the shared virtual machine, but with the programmer's personal image directory as the "current" directory. For example, in a Windows shortcut, specify the user's personal image directory as the Start in: directory. On UNIX systems, a startup command file can be created in the user's bin/ directory which can be executed while the personal image directory is "current" but invokes the shared object engine. (Examples of both of these setups are included by the installer.) Refer to the VisualWorks Application Developer's Guide for more setup details.
4. Start VisualWorks on the original image (visual.im), and open the Settings Tool (System > Settings). On the Parcel Path page, add your parcels directory (created in step 2). This will include the user's personal working parcels in the lists of parcels available for loading. You can drag the new name to the top of the list to have it searched first.
5. Select File > Save Image As... in the Visual Launcher, and save a working image. Enter a name for the image, such as working, including path information to your own image directory (step 2). Because the original image file is read-only (step 1), you will not be able to save over it.
6. When saving a parcel, programmers specify the path to their personal parcels directory. Specifying a relative pathname, especially one relative to the VisualWorks home directory, facilitates moving the image to other platforms. The directory path specified is remembered and proposed as the path in subsequent saves of that parcel.
7. When starting VisualWorks, make the directory containing your image file the current directory before launching VisualWorks.

**Set VisualWorks Home Directory**

In order to correctly find additional files, the VisualWorks Home directory must be properly set.
For client installations, this is typically configured correctly during installation (Windows and Mac OS), or is set in the startup script (Unix/Linux). For network installations, in which VisualWorks is run from a shared server installation, the home directory must be set in the client.

To set the home directory for the current session, select File > Set VisualWorks Home in the Launcher window. The Settings Tool opens on the home directory page. Set the VisualWorks Home Directory to the root VisualWorks installation directory, typically c:\vw7.8nc on Windows systems or /usr/local/vw7.8nc on UNIX or Linux systems. Then click OK.

On Windows systems, the VisualWorks Home is saved in the system registry. On UNIX and Linux systems, it needs to be set in a system variable, as described in an information screen at the end of the installation (and in the file userActions.txt).

Uninstalling Products

The VisualWorks Installer comes with an uninstall option. To use it:

1. Windows: From the Start menu, select Programs > VisualWorks 7.8nc > Install/Uninstall. UNIX: Execute the script ~vw7.8nc:/Install_Uninstall. MacOS: Double-click ~vw7.8nc:image:install.im
2. On the Install or Uninstall page of the Installer, select Uninstall and click Next. The Installer will display all VisualWorks installations in the drop-down menu. Select the product you wish to uninstall and click Next.
3. The Uninstaller will prompt you for the disposition of various aspects of the VisualWorks installation, such as whether you want to delete non-empty directories. Answer these prompts accordingly.
4. When the Uninstaller is finished, you may need to manually remove files and/or directories, such as directories containing files that you created using VisualWorks.

3 Installing ObjectStudio 8.3

The procedures described in this section install ObjectStudio 8.3 from the Cincom Smalltalk DVD. The installation is performed using InstallShield, which sets up the required directory structure on the specified disk drive and copies the ObjectStudio files into that structure. Instructions are displayed by the installer as responses are needed.

Running the Installer

The installer launches when you mount the ObjectStudio 8.3 DVD. Alternatively, you can execute the ObjectStudio installer directly from the DVD by running `\ostudio\disk1\SETUP.EXE`. The installation proceeds through several pages of instructions, collecting installation parameters. Once the information is gathered, the necessary directories are created and the ObjectStudio files are copied into them. Because the installation involves installing a few Windows DLL files, the installer recommends that you close all other applications during the installation.

Installation Options

Components

Several components are optional. By default, all are selected for installation. You can deselect components that you do not expect to use. Brief descriptions of the components are provided in the installer. For full descriptions, refer to the ObjectStudio documentation after completing the installation. You can install any components later by rerunning the installer.

**Program Group Options**

You have the choice of installing ObjectStudio in either:

- Common program group, which makes ObjectStudio available to all users of this computer, or
- Personal program group, which makes ObjectStudio available to the currently logged in user only.

Thank You...

... for installing and trying Cincom Smalltalk. We hope, and expect, that you will find this to be an enjoyable and productive development environment.
There are a variety of resources available to help you become productive with VisualWorks and ObjectStudio. Complete documentation is provided for both products. *The VisualWorks Walk Through* provides a simple overview of building an application in VisualWorks. A variety of web sites also provide information for VisualWorks developers. Visit the Cincom Smalltalk Wiki: http://www.cincomsmalltalk.com for information and additional links.
SIMD TYPES: THE MASK TYPE & WRITE-MASKING

ABSTRACT

This paper describes a template class for portable SIMD Mask types. Most importantly it shows how conditional code can be expressed with SIMD types. Different variants of a syntax for write-masking will be discussed.

CONTENTS

1 About This Document
2 General Introduction to Conditionals
3 Conditionals in SIMD Context
4 The Vc::Mask<T> Class
5 Write-Masking
6 Masked Gather & Scatter
7 Conclusion
A Example: Mandelbrot
B Acknowledgements
C References

1 ABOUT THIS DOCUMENT

This document is derived from a larger document about the Vc library. For the sake of simplicity, I refrained from changing the naming conventions of types/functions in this paper:

- I want to focus on functionality first. We can "correct" the names later.
- It is easier to find the reference to an existing implementation.

Disclaimer: I did not get everything "right" in the Vc implementation yet. Some details of the interface definitions I present here do not fully reflect the current state of the Vc SIMD types.

1.1 shorthands in the document

- $\omega_T$: number of scalar values (width) in a SIMD vector of type $T$ (sometimes also called the number of SIMD lanes)

1.2 relation to n4184

This document builds upon the Vector<T> type described in N4184. Many design decisions are not discussed in this document because they have been covered in N4184 before and can be applied analogously to Mask<T>.

2 GENERAL INTRODUCTION TO CONDITIONALS

Conditional statements are some of the most important language elements in C++. if statements enable programs to follow different code paths depending on arbitrary boolean conditions. In most cases an if statement is translated as a branching instruction. These instructions can be expensive on modern processors if the branch prediction unit chooses the wrong branch. In such a case the pipeline has to be flushed and execution must restart at the other branch. This can incur penalties on the order of 100 cycles. In order to overcome costly pipeline flushes on incorrect branch prediction, conditional move instructions have been introduced. A conditional move instruction typically executes a load or register copy iff one or more specific flag(s) is/are set. Thus, an optimizing compiler might translate the code

```c++
if (condition) {
  x = y;
}
```

into a compare instruction and a subsequent conditional move instruction.

Not every conditional jump results from if statements. Conditional jumps are used for loop exit conditions in while or for statements. Furthermore, switch statements describe jumps into one code section out of several ones, where each one can be identified via one or more integral value(s). Instead of a switch statement, the logic can alternatively be expressed as several if statements. This is functionally equivalent, but often compilers optimize switch statements via jump tables, while if cascades typically are translated as consecutive compares and jumps.

3 CONDITIONALS IN SIMD CONTEXT

The SIMD types, as defined in N4184, do not return booleans from the compare operators. Instead they return Vector<T>::MaskType, which is an alias for Mask<T>. This mask type is the equivalent of a Vector<bool> type, but with additional type information about the associated Vector<T>::EntryType. (The need for this additional type information will be discussed in Section 4.)
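As a minimal illustration of this difference, the following sketch is written against the Vector<T>/Mask<T> interface described here and in N4184 and assumes a conforming implementation is available (for example the Vc library, whose convenience aliases int_v/int_m are used); it is not part of the proposal text. The comparison yields one boolean per SIMD lane, which can only reach scalar control flow through a reduction, while per-lane modification uses write-masking (Section 5).

```c++
#include <Vc/Vc>   // assumed: a Vc-like implementation of Vector<T>/Mask<T>
using Vc::int_v;
using Vc::int_m;

int_v clampToZero(int_v x) {
  int_m negative = x < 0;     // one boolean per SIMD lane, not a single bool
  if (negative.isEmpty()) {   // reduction to a single bool for scalar control flow
    return x;                 // no lane is negative: nothing to do
  }
  x(negative) = 0;            // write-masked assignment (cf. Section 5)
  return x;
}
```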
Thus, operations that return a definitive true or false answer with scalar types return multiple true and/or false values in one return value with SIMD types. Obviously, these mask types cannot work directly with the builtin conditional statements in C++. For SIMD code we have two principal choices for the semantics of if, for, while, and switch.

1. By enhancing the language it is possible to overload the meaning of conditional statements with operands of mask type. This has been implemented in Cilk Plus for the array notation extension [1]. Conditional statements subsequently do not disable a branch unless all entries of the mask are false (though essentially this is an optional optimization). Instead, all code branches are executed, but with some vector lanes implicitly disabled. Consider the example code in Listing 1 on a system with $\omega_{\text{int}} = 4$ and $a = \{1, 2, 3, 4\}$, $b = \{7, 0, 7, 7\}$: The expression $a < b$ then returns a mask with 4 boolean values: \{true, false, true, true\}. The compiler therefore has to translate the if-branch (line 3) into instructions that modify $a$ only at the indexes 0, 2, and 3. Subsequently, $a$ will be $a = \{2, 2, 4, 5\}$. The else-branch (line 5) then may only modify the SIMD vector entry at index 1. Thus $a$ must become $a = \{2, 1, 4, 5\}$, which is the return value of the function $f$.

2. The alternative keeps the semantics of the existing conditional statements unchanged. Then, mask types can only be used for conditional statements if a reduction function from a mask to a single boolean value is used (see Section 4.7). Still, the functionality described above (modifying a subset of a SIMD vector, selected via a mask) can be implemented via write-masking expressions (see Section 5).

3.1 Consequences of Implicit Masking

Consider the implications of if statements that accept SIMD masks. The code example in Listing 2 is a small modification of the example in Listing 1 that would be equivalent for scalar types. But with SIMD vector types both of the two return statements in the code must be taken. It is certainly possible to define that this code blends the SIMD vectors from the two return statements according to the implicit masks in the if and else branches. But already a seemingly small change, such as returning an int instead of int_v (Listing 3), leads to unresolvable ambiguity: Should the function return +1 or −1? Similar ambiguity issues occur with non-complementary masked return statements and function calls inside the branches. Throwing exceptions and locking/unlocking mutexes would even have to be disallowed altogether.

Listing 1: Example code relying on overloaded semantics for if statements with mask arguments.

```c++
int_v f(int_v a, int_v b) {
  if (a < b) {
    a += 1;
  } else {
    a -= 1;
  }
  return a;
}
```

Listing 2: Code example that shows unclear return semantics: both branches must execute, but from where does the function return and what is the return value?

```c++
int_v f(int_v a, int_v b) {
  if (a < b) {
    return a + 1;
  } else {
    return a - 1;
  }
}
```

Listing 3: Code example that shows unresolvable ambiguity: both branches must execute, but there can be only one return value because the return type is a scalar `int`.

```c++
int f(int_v a, int_v b) {
  if (a < b) {
    return +1;
  } else {
    return -1;
  }
}
```

There is a more fundamental uncertainty resulting from implicit masking via if statements on SIMD vector masks: How should different SIMD vector types interact?
An if statement from an `int_v` comparison returns $\omega_{\text{int}}$ boolean answers. If the branch contains code with `short_v` or `double_v`, should it be implicitly write-masked or not? If yes, how? There is no natural and obvious behavior for applying write masks of different $\omega_T$. This shows that if statements with non-boolean arguments limit the language features that are allowed in the if/else branches. This makes the feature much less intuitive. The implicit mask context changes the semantics significantly in different regions of the source code. And the problem is aggravated if a developer requires else if or switch statements.

3.2 DESIGN DECISION FOR VC

For the Vc library I therefore decided that the semantics of if, for, while, and switch must not change for explicit SIMD programming.¹ Everything else would be too surprising and unintuitive to users, especially developers that read existing code without prior knowledge about SIMD programming. This may sound obvious, but consider that many developers will start from a scalar implementation of their algorithm. In the scalar code the conditional statements correctly express the logic of the algorithm. When a developer subsequently vectorizes the code (s)he starts with replacing scalar types with the Vc vector types. At this point it may appear like a logical simplification of the vectorization process to keep the conditional statements unchanged in order to minimize the effort for the user. But, as discussed above, this comes at a considerable cost in consistency of semantics.² Thus, part of the issue is the question whether it is more important to ease initial vectorization of an algorithm or whether maintenance effort is more important. Even then, whether implicit write-masking via conditional statements eases initial vectorization at all certainly depends on the algorithm: The restricted semantics might lead to an even larger effort required for converting a given scalar code to SIMD code.

¹ This is nice, because otherwise a pure library solution would not be possible.

² There is not really a precedent in C++ for such differences in semantics / allowed operations for certain code regions. The transactional memory extensions for C++ [N3999] may introduce comparable local semantics for code inside a transaction.

4 The Vc::Mask<T> Class

Analogous to the Vector<T> class discussed in N4184, there needs to be a type that acts as a SIMD vector of booleans. This is necessary for attaching the SIMD context only to types and never to some implicit context. There are three main approaches:

- Reuse/Specialize the Vector<T> class (Vector<bool>).
- Define a new class (Mask<T>) with a type as template parameter.
- Define a new class (Mask<Size>) with a size as template parameter.

4.1 why Vc::Vector<bool> is not enough

The type bool is part of the integral types in C++. Since values of type bool "participate in integral promotions" [2, §3.9.1] they can be used in any expression where an int can be used.³ Therefore, it appears as if the interface provided by Vector<T> is a good fit for boolean values, too. The additional functionality a SIMD vector of booleans should provide (such as the population count or reductions) could still be defined as non-member functions. But, considering that $\omega_T$ may be different for different $T$ it follows that $\omega_{\text{bool}} = \max\{\omega_T \mid \forall T\}$.
Otherwise Vector<bool> would only be usable for a (target-dependent) subset of Vector<T> types. This definition of Vector<bool> implies that $\omega_{\text{bool}}$ may be greater than $\omega_T$ for some types $T$. Consider an SSE target, where $\omega_{\text{short}} = 8$, $\omega_{\text{float}} = 4$, and $\omega_{\text{double}} = 2$. Thus, $\omega_{\text{bool}}$ would need to be 8 (16 if Vc::Vector<signed char> were supported by Vc) and store 50% or 75% unused data for masks interacting with float_v and double_v, respectively.

³ "A prvalue of type bool can be converted to a prvalue of type int, with false becoming zero and true becoming one." [2, §4.5]

Considering the implementation, this issue turns out to have serious efficiency implications as well: With the SSE instruction set boolean vectors are stored in the 128-bit SSE registers with 64/32/16/8 bits all set to either 0 or 1 for every associated value in the value vector. Thus, the hardware generates and expects booleans in different bit representations, depending on the SIMD vector type (or more accurately: \( \text{sizeof}(T) \)). In addition to the size issue, there is good reason to use a single `bool` return value for the equal and not-equal comparison operators (see Section 4.6). Thus, `Vector<bool>` would need to specialize these functions, which is certainly possible, but, to a certain degree, defeats the purpose of using a generic class.

4.2 Vc::Mask<T> definition

As discussed in Section 4.1, it is beneficial to define several mask types instead of a single boolean vector type. By looking at the SSE instruction set, we have seen that `Mask<Size>` would suffice to define the minimal set of mask types for this target. But, consider that the AVX instruction set uses \( \omega_{\text{float}} = 8 \) and \( \omega_{\text{double}} = 4 \) on top of the SSE vector sizes. Using the SIMD vector size as template parameter for the mask type thus would lead to subtle portability issues (this is the same issue I discussed in N4184 for `Vector<T, Size>`): Consider the return types of the expressions \( \text{int_v()} == \text{int_v()} \) and \( \text{float_v()} == \text{float_v()} \). With the SSE target they would both return the same type `Mask<4>`, whereas with AVX the types would differ: `Mask<4>` and `Mask<8>`, respectively.

The general solution (`Mask<T>`) therefore uses a different mask type for every SIMD vector type `Vector<T>`. That way the types are different for every target and the user will be forced to use explicit type conversions.⁴ Listing 4 shows the definition of the SIMD mask type. Except for the `EntryType` member type, all member types in Listing 4 are implementation-defined. This is analogous to the definition of the `Vector<T>` class in N4184. The different types are used for abstracting the following concepts:

*VectorType* (line 8): This is the type that the implementation uses to store a SIMD vector of booleans. For some implementations this type may be equal to `Vector<T>::VectorType` but there is no such requirement.

⁴ Implicit and explicit conversions between `Mask<T>` and `Mask<U>` can be a no-op whenever \( \text{sizeof}(T) = \text{sizeof}(U) \land \omega_T = \omega_U \).
```cpp
namespace Vc {
namespace target_dependent {
template <typename T>
class Mask {
  implementation_defined data;

public:
  typedef implementation_defined VectorType;
  typedef bool EntryType;
  typedef implementation_defined EntryReference;

  static constexpr size_t MemoryAlignment = implementation_defined;
  static constexpr size_t Size = implementation_defined;
  static constexpr size_t size() { return Size; }
};

template <typename T> constexpr size_t Mask<T>::MemoryAlignment;
template <typename T> constexpr size_t Mask<T>::Size;

typedef Mask<          float>     float_m;
typedef Mask<         double>    double_m;
typedef Mask<  signed long long>  longlong_m;
typedef Mask<unsigned long long> ulonglong_m;
typedef Mask<  signed long>       long_m;
typedef Mask<unsigned long>      ulong_m;
typedef Mask<  signed int>        int_m;
typedef Mask<unsigned int>       uint_m;
typedef Mask<  signed short>      short_m;
typedef Mask<unsigned short>     ushort_m;
typedef Mask<  signed char>       schar_m;
typedef Mask<unsigned char>      uchar_m;

}  // namespace target_dependent
}  // namespace Vc
```

Listing 4: SIMD mask class definition

*EntryType* (line 9) This is an alias for `bool`. The member type is defined for generality/interface compatibility with `Vector<T>`. This type signifies the conceptual entry type; the actual entries in *VectorType* may use a different binary representation than `bool`.

*EntryReference* (line 10) This type is used as the return type of the non-const subscript operator. It is therefore used to reference a single boolean entry in the internal mask representation. Note that the most compact binary representation for a SIMD vector of booleans uses a single bit per boolean value. In this case there cannot be a type that represents the actual bits of the boolean value of a single mask entry.⁵ Thus, *EntryReference* can also be a wrapper type that can access (read and write) individual bits of such a mask via the assignment operators and a cast-to-bool operator.

⁵ The object representation of any type in C++ takes up $N$ bytes, where $N$ is integral. This is also evident from the `sizeof` operator, which returns a `size_t` denoting the number of bytes in the object representation of the type.

The `Mask<T>` type needs a single data member of an implementation-defined type (line 5). This member defines the size and alignment of the `Mask<T>` type. The number of entries in the SIMD vector, in general, is different from `sizeof(Mask<T>)`, which is why the `Size` constant (line 13) defines this value. For compatibility with STL containers, `Mask<T>` contains the `size()` member function, which also returns the number of scalar entries in the SIMD vector.

The `Mask<T>` type also defines a `MemoryAlignment` static data member, just as `Vector<T>` does. Analogously, its value is the alignment requirement of pointers to `EntryType` (i.e. `bool`) in aligned load and store calls (Section 4.4). Implementation experience shows that in most cases the alignment of `Mask<T>` will not be equal to `Mask<T>::MemoryAlignment`. This is due to the SIMD mask register using either several bytes or only a single bit per boolean entry.

Finally, analogous to the type aliases for `Vector<T>`, the mask types that the Vc library implements are aliased to the type names `float_m`, `double_m`, ... (lines 20–31).

### 4.3 Constructors

The constructors for the `Mask<T>` class need to replicate the semantics of the `bool` type as much as possible. The necessary declarations are shown in Listing 5. The default constructor of `Mask<T>` initializes the value of all entries in the mask to `false`. This is required for compatibility with the expression `bool()`, which constructs a `bool` with the value `false`. The copy and move constructors and operators are omitted for the same reason as for `Vector<T>` [N4184].
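As a rough, hedged sketch of these constructors (with the `enable_if` adjustment that the following paragraphs derive already applied, and with a deliberately simplified stand-in for the `differs_only_in_signedness` trait rather than Vc's actual implementation), the declarations can be pictured as follows:

```cpp
#include <type_traits>

// Simplified stand-in for the trait mentioned in the text: here it only accepts
// integral types of equal size, which suffices for the sketch but is narrower
// than the actual guarantee (W_U == W_T on all possible targets).
template <typename U, typename T>
struct differs_only_in_signedness
    : std::integral_constant<bool, std::is_integral<U>::value &&
                                   std::is_integral<T>::value &&
                                   sizeof(U) == sizeof(T)> {};

template <typename T> class Mask {
public:
  Mask();                     // default constructor: all entries false, mirroring bool()
  explicit Mask(bool value);  // broadcast of a single scalar bool to every entry

  // Implicit conversion: participates in overload resolution only if W_U == W_T
  // holds on all possible targets.
  template <typename U, typename std::enable_if<
                            differs_only_in_signedness<U, T>::value, int>::type = 0>
  Mask(Mask<U> other);

  // Explicit (static_cast) conversion for all remaining mask types.
  template <typename U, typename std::enable_if<
                            !differs_only_in_signedness<U, T>::value, int>::type = 0>
  explicit Mask(Mask<U> other);
};
```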
The constructor on line 2 initializes a mask object with all values set to the boolean value passed in the argument. Therefore, this constructor implements a broadcast from one scalar value to all entries in a SIMD vector. Note that, in contrast to the broadcast constructor of `Vector<T>`, the broadcast constructor of `Mask<T>` is declared as `explicit`. This is a deviation from the behavior of the scalar `bool` type. But for boolean vectors the usefulness of a broadcast is mostly limited to the initialization of mask objects. If a developer really needs a mask with all entries set to either `true` or `false`, then it is very likely that a scalar control-flow statement (such as `if`) is much better suited for the task. On the other hand, if implicit conversions from scalar `bool` to `Mask<T>` were possible, a user might fail to notice that an expression produces a `bool` instead of the intended mask object.

Finally, the two constructor functions on lines 3 and 4 implement implicit and explicit (`static_cast`) conversions between mask objects. The two functions, as declared in Listing 5, are ambiguous. They need to be adjusted such that the implicit constructor only participates in overload resolution if $\mathcal{W}_U = \mathcal{W}_T$ for all possible targets. According to the discussion of implicit conversions in N4184 this can be decided via the following `enable_if` expression:

```cpp
enable_if<differs_only_in_signedness<U, T>::value>
```

The `explicit` constructor then simply requires the inverse condition in its `enable_if`.

### 4.4 Loads and Stores

Mask types can implement load and store functions, reading from / writing to arrays of `EntryType` (which is `bool`). These functions can be useful to write code that is independent of the SIMD register width and to interface with non-SIMD code (or I/O in general). Listing 6 shows the declaration of the necessary functions. The `Flags` argument is analogous to the one for the `Vector<T>` load/store functions. The default uses unaligned loads and stores and can be set to aligned loads and stores via the second argument.

```cpp
enum Flags {
  none,  // Disable all operations.
         // Other flags might be added in the future.
};

explicit Mask(const bool *mem);
template <typename Flags> explicit Mask(const bool *mem, Flags f);

void load(const bool *mem);
template <typename Flags> void load(const bool *mem, Flags);

void store(bool *) const;
template <typename Flags> void store(bool *mem, Flags) const;
```

Listing 6: Declaration of the Mask<T> load and store functions.

### 4.5 Logical and Bitwise Operators

Listing 7: Declaration of logical and bitwise operators for Mask<T>.

Listing 7 shows the declaration of the operators for logical and bitwise operations. Each operator simply applies the operation component-wise. There is no need for non-member overloads, as was required for `Vector<T>`, because the conversion rules are much simpler for different vectors of booleans.
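To give a concrete picture of these component-wise operators, the following is a hedged sketch of how such declarations and a typical use might look; it is an approximation rather than a verbatim copy of Listing 7:

```cpp
template <typename T> class Mask {
public:
  // ... member types and constructors as in Listings 4 and 5 ...

  Mask operator!() const;           // component-wise logical NOT
  Mask operator&&(Mask rhs) const;  // component-wise logical AND
  Mask operator||(Mask rhs) const;  // component-wise logical OR

  Mask operator&(Mask rhs) const;   // component-wise bitwise AND
  Mask operator|(Mask rhs) const;   // component-wise bitwise OR
  Mask operator^(Mask rhs) const;   // component-wise bitwise XOR

  Mask &operator&=(Mask rhs);       // compound assignment variants
  Mask &operator|=(Mask rhs);
  Mask &operator^=(Mask rhs);
};

// Typical use: combine masks obtained from comparisons of float_v values, e.g.
//   float_m in_range = (x >= float_v(0.f)) && (x < float_v(1.f));
```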
For the conversions themselves, the implicit and explicit conversion constructors described above fully suffice.

### 4.6 Comparison Operators

Listing 8 shows the declaration of the comparison operators that I implemented for Vc::Mask. Note that the return type is a scalar `bool` and not a SIMD type. Returning another mask type would make the compare operator basically an alias for the xor operator. Typically, it is more interesting to determine whether two given masks are equal (or not), and this requires a single boolean.

```cpp
bool operator==(Mask rhs) const;
bool operator!=(Mask rhs) const;
```

Listing 8: Declaration of the Mask<T> comparison operators.

It is certainly possible to define a meaning for relational compare operators (less/greater). The most obvious definition would be an interpretation of the boolean entries as bits of an integer and then comparing the integers. Up to now I did not come across a use case for such operators, though. I am looking for input from the community on this question.

### 4.7 Reduction Functions

In order to use a mask object in an if statement or loop condition there needs to be a reduction function from the multiple boolean values in the mask to a single `bool`. There are four useful reduction functions:

- **all_of**: Returns true iff all entries in the mask are true.
- **any_of**: Returns true iff at least one entry in the mask is true.
- **none_of**: Returns true iff all entries in the mask are false.
- **some_of**: Returns true iff there is at least one entry that is true and at least one entry that is false (note that this is always false for Vc::Scalar::Mask<T>).

The usefulness of the first three functions should be obvious. The **some_of** reduction, on the other hand, is not used that often. It is a useful check for knowing whether the conditions in the SIMD lanes have diverged, though. For example, it could signify that a program still needs to continue iterating, but at least one vector lane is idle and a reorganization of the data vectors might increase the throughput.

The template functions that reduce a mask object need to be declared in such a way that they do not participate in overload resolution unless the template argument actually is a Mask<T> type (from any internal namespace). In addition to the declarations for the Vc::Mask types, the reduction functions are also declared for `bool` arguments. That way the functions can be used in generic code where scalar types and Vc::Vector types can be used at the same time.

Listing 9: Declaration of the Mask<T> reduction functions.

## 5 Write-Masking

The term write-masking is used to denote an expression that disables an arbitrary set of vector lanes for writes to the destination register (or memory location). This is equivalent to the conditional move operation for scalars, applied to several values in parallel. Hardware support for write-masking requires a rather simple operation: instead of writing all bits from some temporary buffer to the destination register, some lanes are disabled, thus keeping the old value in the destination register unchanged. But, from the language side, this operation has only been implemented via implicit masking (such as the masked if statements in Cilk Plus [1]) or blend functions, which essentially implement the SIMD equivalent of the C++ ternary operator (conditional operator).

### 5.1 Conditional Operator

For SIMD blend operations, the conditional operator (`a < b ? 1 : -1`) would be a very natural solution. It is straightforward to translate this conditional expression from a scalar context into a SIMD context.
The operator expresses that, for a given condition, its result should be the value of either the first or the second expression after the question mark. In the SIMD case, where a boolean is replaced by a vector of booleans, the conditional operator states that the results of the first expression must be blended with the results of the second expression according to the mask in the conditional expression before the question mark.

But with the current C++ standard, overloading the conditional operator is not allowed [2, §13.5]. According to Stroustrup [5], “there is no fundamental reason to disallow overloading of ?:”. Therefore, until C++ gains this ability, conditional operators have to be replaced by a function call for supporting SIMD types. For the Vc library, I defined the function

```cpp
Vector<T> iif(Mask<T>, Vector<T>, Vector<T>)
```

The name `iif` is an abbreviation for *inline-if*. To allow generic use of this function, Vc provides the overload

```cpp
T iif(bool, T, T)
```

Thus `iif` can be used in template functions where both `bool`s and Vc mask types may be passed as the first argument to `iif`. Listing 10 shows how `iif` is used inside the Kalman filter. The `float_t` type can be defined as anything that returns either a boolean or a Vc mask from `operator<`. Thus the implementation of the algorithm is generically usable for SIMD and scalar types.

### 5.2 Write-Masked Assignment Operators

The `iif` function would suffice to translate any scalar conditional code to vectorized code. But it is neither a good general interface nor does it properly express the intent of the code, which ends up hidden behind unnecessarily complex expressions. Therefore, I created a new syntax for the `Vector<T>` types to express conditional assignment with any assignment operator:

```cpp
x(x < 0) *= -1;
```

This line of code reads as: multiply `x` by -1 where `x` is less than 0. The general syntax is `vector-object (mask-object) assignment-operator initializer-clause`. The `Vector<T>` class template therefore declares the function call operator as shown in Listing 11. This operator returns a temporary object which stores a (non-const) lvalue reference to the `Vector<T>` object and a copy of the mask object. The `WriteMaskedVector` class template overloads all assignment operators, which implement the write-masked assignment to the `Vector<T>` object. In addition to assignment operators, `WriteMaskedVector` can also implement the increment and decrement operators.

#### 5.2.1 Alternative: `Vc::where`

The function call operator syntax has a significant downside: it is impossible to write generic functions with conditional assignment that work with both SIMD vector types and fundamental types. It would require an operator overload for fundamental types, or rather a change to the language specification. Therefore, I worked on alternative solutions:

```cpp
Vc::where(x < 0, x) *= -1;   // variant (1)
Vc::where(x < 0) | x *= -1;  // variant (2)
Vc::where(x < 0) (x) *= -1;  // variant (3)
```

The goal was to have a function/expression that can return a WriteMaskedVector object for vector types and fundamental types.

- The first variant uses less “magic” but does not have such an obvious connection between the modified variable `x` and the assignment operator.
- The second variant states more clearly that an assignment to `x` is executed. But it requires an operator between the `where` function and the assignee that has lower precedence than the assignment operators.
In any case, this operator would be deprived of its normal semantics, which makes it a potentially confusing solution.

- The third variant is a compromise between the first two. It uses the function call operator on the return type of the `where` function to make it clearer that the assignment is applied to the variable `x`.

All three variants of the `where` function can be overloaded for fundamental types. All four solutions for write-masking (`where` and `Vector<T>::operator()`) can be translated to optimal SIMD code and thus only differ in syntax and semantics. I am looking for feedback from the community on the preferred solution for a write-masking interface.

#### 5.2.2 Return Type of Masked Assignment Operators

The assignment operators that are declared in the `WriteMaskedVector` type can return either:

- A reference to the `Vector<T>` object that was modified.
- A temporary `Vector<T>` object that only contains the entries where the mask is `true`.
- The `WriteMaskedVector` object.
- Nothing (`void`).

The most sensible choice seems to be a reference to the modified `Vector<T>` object. But then the statement `(x(x < 0) *= -1) += 2` may be surprising: it adds 2 to all vector entries, independent of the mask. Likewise, `y += (x(x < 0) *= -1)` has no obvious interpretation anymore because of the mask in the middle of the expression. If we consider that a write-masked assignment is used as a replacement for an if-statement, using `void` as the return type is a more fitting choice: an if-statement has no return value. By declaring the return type as `void`, the above expressions become ill-formed, which seems to be the best solution for guiding users to write maintainable code and express intent clearly.

## 6 Masked Gather & Scatter

Finally, let us look at masked gather and scatter operations. (Gather/scatter was introduced in N4184.) A gather expression creates a temporary `Vector<T>` object that can be assigned to an lvalue. If the user wants to assign only a masked subset of the gathered values, the write-masked assignment as described in Section 5 suffices. But write-masked gather is special in that there are memory reads which are unnecessary (and thus should be omitted for performance reasons) and potentially even invalid, out-of-bounds accesses. Therefore, we rather want write-masked assignment from a gather operation to propagate to the gather function itself. Then the gather function can use the mask to omit loads for the SIMD lanes that will not be used on assignment.

The scatter function, called from a scatter expression, must use the mask information for the same reasons: it should avoid unnecessary stores and must omit out-of-bounds stores. But for scatters the scatter expression is on the left-hand side of the assignment operator and thus basically follows the same logic as normal write-masking.

To support masked gathers, the `WriteMaskedVector` class declares an assignment operator for an rvalue reference to `SubscriptOperation`:

```cpp
template <typename T, typename I, typename S>
void operator=(SubscriptOperation<T, I, S> &&);
```

The operator will call `gatherArguments` on the `SubscriptOperation` object and use that information to execute a masked gather and assign the result to the referenced `Vector<T>` object. Note that this only allows direct assignment from the gather expression.
The user cannot apply further operations to the gather expression before the assignment (though this could be supported via expression templates).

## Conclusion

I have presented the Mask<T> class and the associated functions and operators that can be used to vectorize conditional statements with little effort and in an understandable and intuitive syntax. There are still a few open questions on how to create the best write-masking syntax. There are also some useful functions that I have implemented in Vc, such as the population count, the index of the first `true` value, a subscript operator for reading and setting individual mask entries, and a few more that are not described here. This document is a work in progress on the mask type, as I am looking for guidance on how to proceed.

```cpp
typedef SimdArray<int, float_v::Size> IV;  // integer vector with as many lanes as float_v

for (int y = 0; y < imageHeight; ++y) {
  const float_v c_imag = y0 + y * scale;
  // Process float_v::Size pixels of the current row per iteration of the x loop.
  for (IV x = IV::IndexesFromZero(); any_of(x < imageWidth); x += float_v::Size) {
    const std::complex<float_v> c(x0 + x * scale, c_imag);
    std::complex<float_v> z = c;
    IV n = 0;
    auto inside = norm(z) < 4.f;
    // Iterate until every lane has either diverged or reached the iteration limit.
    while (any_of(inside && n < 255)) {
      z = z * z + c;
      where(inside) | n += 1;  // count iterations only for lanes that are still inside
      inside = norm(z) < 4.f;
    }
    IV colorValue = 255 - n;
    colorizeNextPixels(colorValue);
  }
}
```

Listing 12: A Vc implementation of the Mandelbrot algorithm.

Figure 1: Runtime of the Vc implementation of the Mandelbrot algorithm normalized to an optimized implementation using float and int.

## B Acknowledgements

- This work was supported by GSI Helmholtzzentrum für Schwerionenforschung and the Hessian LOEWE initiative through the Helmholtz International Center for FAIR (HIC for FAIR).
- Thanks for all the useful and encouraging feedback from Vc users in the community.

## C References
{"Source-Url": "http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4185.pdf", "len_cl100k_base": 7504, "olmocr-version": "0.1.49", "pdf-total-pages": 19, "total-fallback-pages": 0, "total-input-tokens": 46369, "total-output-tokens": 8782, "length": "2e12", "weborganizer": {"__label__adult": 0.0003559589385986328, "__label__art_design": 0.0003020763397216797, "__label__crime_law": 0.00026535987854003906, "__label__education_jobs": 0.00021755695343017575, "__label__entertainment": 5.143880844116211e-05, "__label__fashion_beauty": 0.00012958049774169922, "__label__finance_business": 0.00012755393981933594, "__label__food_dining": 0.0003600120544433594, "__label__games": 0.0004935264587402344, "__label__hardware": 0.0011224746704101562, "__label__health": 0.00030040740966796875, "__label__history": 0.0001895427703857422, "__label__home_hobbies": 7.814168930053711e-05, "__label__industrial": 0.00035119056701660156, "__label__literature": 0.0001690387725830078, "__label__politics": 0.0002498626708984375, "__label__religion": 0.0004458427429199219, "__label__science_tech": 0.00496673583984375, "__label__social_life": 5.513429641723633e-05, "__label__software": 0.0030002593994140625, "__label__software_dev": 0.98583984375, "__label__sports_fitness": 0.0002694129943847656, "__label__transportation": 0.0004949569702148438, "__label__travel": 0.00020563602447509768}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34377, 0.0171]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34377, 0.44142]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34377, 0.85079]], "google_gemma-3-12b-it_contains_pii": [[0, 532, false], [532, 2448, null], [2448, 4794, null], [4794, 6557, null], [6557, 8995, null], [8995, 11154, null], [11154, 13759, null], [13759, 15004, null], [15004, 16935, null], [16935, 19224, null], [19224, 21062, null], [21062, 23124, null], [23124, 24543, null], [24543, 26428, null], [26428, 28344, null], [28344, 30404, null], [30404, 32405, null], [32405, 33475, null], [33475, 34377, null]], "google_gemma-3-12b-it_is_public_document": [[0, 532, true], [532, 2448, null], [2448, 4794, null], [4794, 6557, null], [6557, 8995, null], [8995, 11154, null], [11154, 13759, null], [13759, 15004, null], [15004, 16935, null], [16935, 19224, null], [19224, 21062, null], [21062, 23124, null], [23124, 24543, null], [24543, 26428, null], [26428, 28344, null], [28344, 30404, null], [30404, 32405, null], [32405, 33475, null], [33475, 34377, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34377, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34377, null]], "pdf_page_numbers": [[0, 532, 1], [532, 2448, 2], [2448, 
4794, 3], [4794, 6557, 4], [6557, 8995, 5], [8995, 11154, 6], [11154, 13759, 7], [13759, 15004, 8], [15004, 16935, 9], [16935, 19224, 10], [19224, 21062, 11], [21062, 23124, 12], [23124, 24543, 13], [24543, 26428, 14], [26428, 28344, 15], [28344, 30404, 16], [30404, 32405, 17], [32405, 33475, 18], [33475, 34377, 19]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34377, 0.0]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
3333ffd14c31a559855136fed50888cbbbab253d
[REMOVED]
{"Source-Url": "https://hal.science/hal-00199198v1/file/main.pdf", "len_cl100k_base": 7033, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 39624, "total-output-tokens": 9200, "length": "2e12", "weborganizer": {"__label__adult": 0.0004041194915771485, "__label__art_design": 0.0006513595581054688, "__label__crime_law": 0.0005602836608886719, "__label__education_jobs": 0.0016317367553710938, "__label__entertainment": 0.00011527538299560548, "__label__fashion_beauty": 0.00023603439331054688, "__label__finance_business": 0.00039076805114746094, "__label__food_dining": 0.0005054473876953125, "__label__games": 0.0008759498596191406, "__label__hardware": 0.0015163421630859375, "__label__health": 0.0013065338134765625, "__label__history": 0.0005483627319335938, "__label__home_hobbies": 0.0002727508544921875, "__label__industrial": 0.001064300537109375, "__label__literature": 0.00043845176696777344, "__label__politics": 0.0004038810729980469, "__label__religion": 0.0007920265197753906, "__label__science_tech": 0.412841796875, "__label__social_life": 0.00020623207092285156, "__label__software": 0.00928497314453125, "__label__software_dev": 0.564453125, "__label__sports_fitness": 0.00042819976806640625, "__label__transportation": 0.0010242462158203125, "__label__travel": 0.0002751350402832031}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27870, 0.03844]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27870, 0.41087]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27870, 0.82614]], "google_gemma-3-12b-it_contains_pii": [[0, 1011, false], [1011, 3552, null], [3552, 6542, null], [6542, 8740, null], [8740, 11995, null], [11995, 14565, null], [14565, 16399, null], [16399, 18834, null], [18834, 21392, null], [21392, 23625, null], [23625, 26652, null], [26652, 27870, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1011, true], [1011, 3552, null], [3552, 6542, null], [6542, 8740, null], [8740, 11995, null], [11995, 14565, null], [14565, 16399, null], [16399, 18834, null], [18834, 21392, null], [21392, 23625, null], [23625, 26652, null], [26652, 27870, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27870, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27870, null]], "pdf_page_numbers": [[0, 1011, 1], [1011, 3552, 2], [3552, 6542, 3], [6542, 8740, 4], [8740, 11995, 5], [11995, 14565, 6], [14565, 16399, 7], [16399, 18834, 8], [18834, 21392, 9], [21392, 23625, 10], [23625, 26652, 11], [26652, 27870, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27870, 0.04505]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
970ff35649fc6b761e876d593a83968b85afef43
TOWARD SOFTWARE ENGINEERING PRINCIPLES BASED ON ISLAMIC ETHICAL VALUES

SHIHAB A. HAMEED

Electrical and Computer Engineering Department, Faculty of Engineering, International Islamic University Malaysia, P.O. Box 10, 50728 Kuala Lumpur, Malaysia. E-mail: shihab@iiu.edu.my

ABSTRACT: Software is the core of computer-based applications, which have become an essential part of critical control systems, health and life-safety systems, financial and banking systems, educational systems and other systems. It requires software engineers who are qualified both professionally and ethically. Literature review (L.R.) and survey results show that software engineering professionals face several ethics-related problems which are costly, harmful and affect a high proportion of people. Professional organizations like ACM, IEEE, ABET and CSAC have established codes of ethics to help software engineering professionals understand and manage their ethical responsibilities. Islam considers ethics an essential factor in building individuals, communities and society. Islamic ethics are a set of moral principles and guidance that distinguishes right behavior from wrong; they are comprehensive, stable and fair, and have historically proven successful in building an ethically great society. The estimated 1.3 billion Muslims, with tens of thousands of software engineers among them, should have an effective role in software development and life, which requires them to understand and implement ethics, especially Islamic ethics, in their work. This paper is a framework for modeling software engineering principles. It focuses mainly on adopting a new version of software engineering principles based on Islamic ethical values.

KEYWORDS: Ethics, Software engineering ethics, Islamic ethics, Computer crime.

1. INTRODUCTION

In this computer and information era, computer-based applications have become an essential part of human life. They are the core of critical control systems, health and life-safety systems, financial and banking systems, scientific and educational systems, entertainment and games, and other systems related to different aspects of human life. Software is the core of such computer-based systems, and software development requires professionally and ethically qualified software engineers.

Computer ethics is one of the essential branches of ethics, and it is growing and changing rapidly as computer technology grows and develops. In the Stanford Encyclopedia of Philosophy [1], computer ethics might be understood narrowly as the efforts of professional philosophers to apply traditional ethical theories or virtue ethics to issues regarding the use of computer technology; or it might be understood broadly to include standards of professional practice, codes of conduct, aspects of computer law, public policy and corporate ethics, besides certain topics in the sociology and psychology of computing. Since information technology has begun to affect community life, family life, human relationships, education, freedom, etc., computer ethics can be understood as the branch of applied ethics which studies and analyzes such social and ethical impacts of information technology. Walter Maner [2] in the mid 1970s defined computer ethics as the field which examines "ethical problems aggravated, transformed or created by computer technology". Deborah Johnson [3] thought that computers gave a "new twist" to old ethical issues which were already well known.
Moor's [4] defined computer ethics as a field concerned with "policy vacuums" and "conceptual muddles" regarding the social and ethical use of information technology. Moor's way is very powerful, suggestive, broad enough to be compatible with a wide range of philosophical theories and methodologies, and it is rooted in a perceptive understanding of how technological revolutions proceed. This very broad view of computer ethics employs concepts, theories and methodologies from applied ethics, sociology of computing, technology assessment, computer law, and other relevant disciplines [5]. This way of understanding computer ethics is reflected in recent developments such as Brey's "disclosive computer ethics" methodology [6] and the emerging research field of "value-sensitive computer design" [7, 8, 9]. In Gotterbarn's view, computer ethics should be viewed as a branch of professional ethics, which is concerned primarily with standards of practice and codes of conduct of computing professionals [10]. Gotterbarn has been involved in a number of activities, such as co-authoring the third version of the ACM Code of Ethics and Professional Conduct and working to establish licensing standards for software engineers [11, 12] based on this view. Sommerville [13] in his software engineering book denoted that: computer science is concerned with theory and fundamentals; system engineering is concerned with all aspects of computer-based systems development including hardware, software and process engineering; software engineering is concerned with the practicalities of developing and delivering useful software. Software engineers must accept that their job involves wider responsibilities than simply the application of technical skills. They must also behave in an ethical and moral responsible way if they are to be respect as professionals. To understand the computer and software engineering ethics, we have to understand the concept of ethics, ethical problems and its role in human life. One of the recognizable and effective concept of ethics is the Islamic concept, which is still not studied broadly by computer and software engineering professionals. 2. ETHICS AND ETHICAL PROBLEMS The results of joining several conferences, having several discussions with experts and professionals, and reading different articles related to ethics can be concluded as: ethics have several definitions, which reflect the philosophers or authors viewpoints and their culture, but there is a common area between all these viewpoints. So ethics can be defined as “Set of principles of right conduct”, “Theory or system of moral values”, or “motivation based on ideas of right and wrong”. Wikipedia encyclopedia [14] shows that: Socrates was one of the first Greek philosophers to encourage both scholars and the common citizen to turn their attention from the outside world to the condition of man. Aristotle posited an ethical system that may be termed "self-realizationism"; when a person acts in accordance with their nature and realizes their full potential, they will do good and be content. People are daily facing ethical issues at their life; but how many of us know how to deal with them? Several surveys were done which shows a whole array of issues being faced by employees such as stealing, lying, fraud and deceit [15]. Internationally, the ethical values are also deficient. 
In a survey of 300 companies across the world, over 85% of senior executives indicated that the following issues were among their top ethical concerns: employee conflicts of interest, inappropriate gifts, sexual harassment, and unauthorized payments [16]. A survey of 2,000 major US corporations revealed that the following ethical problems concerned managers: drug and alcohol abuse, employee theft, conflicts of interest, quality control issues, discrimination in hiring and promotion, misuse of proprietary information, abuse of company expense accounts, plant closings and layoffs, misuse of company assets, and environmental pollution [17]. In computer, and software development; there are several problems related to ethical issues. These issues include professional responsibilities, social responsibility, quality as moral issue, software ownership and intellectual property rights, privacy, computer crimes, confidentiality, responsibility and liability, professional competence, impact on society and work place, security and reliability, and safety [18]. Ethical related problems in computer and software are very costly, harmful and affected high ratio of people. The Federal Bureau of Investigation (FBI) study shows that in 2006 the estimated computer crimes’ cost was USD 67.2 billion yearly. On the other hand, software engineers participate in developing advanced software as a core for all intelligent and mass-destruction weapons systems. The unethical usage of such weapons causes hundreds of thousands of innocent victims as well as the huge destructions for wealth and environment, which means that software engineer participates indirectly in such crimes and destructions. A survey study done on internet’s usage and ethical problems in education; the sample is selected from higher education institutions in Malaysia. Internets’ users (students, academic staffs and non-academic staffs) represent different nations, races, cultures, genders, experiences, ages and qualifications; this makes the selected sample of users more representing the population that we need to study. Survey results in Table 1 show that: - 73% of total users consider sexual related data is harmful data factor. - 51% of total users consider Anti-religious related data is harmful data factor. - 42% of total users consider Advertisements and commercial announcement related data is harmful data factor. - 19% of total users consider Anti-Culture related data is harmful data factor. - 19% of total users consider political related data is harmful data factor. - 14% of total users consider security related data is harmful data factor. It also shows that 75% of the users get such harmful data by e-mails. <table> <thead> <tr> <th>Harmful Data Factors</th> <th>Female %</th> <th>Male %</th> <th>Total %</th> </tr> </thead> <tbody> <tr> <td>Anti Religious</td> <td>0.37</td> <td>0.58</td> <td>0.51</td> </tr> <tr> <td>Anti-Culture</td> <td>0.16</td> <td>0.2</td> <td>0.19</td> </tr> <tr> <td>Sexual</td> <td>0.76</td> <td>0.71</td> <td>0.73</td> </tr> <tr> <td>Political</td> <td>0.24</td> <td>0.16</td> <td>0.19</td> </tr> <tr> <td>Security</td> <td>0.16</td> <td>0.13</td> <td>0.14</td> </tr> <tr> <td>Advertisement</td> <td>0.5</td> <td>0.38</td> <td>0.42</td> </tr> </tbody> </table> 3. ETHICS IN ISLAM Islam is the last religion revealed by the God (Allah): “This day, I have perfected your religion for you, completed my favour upon you, and have chosen for you Islam as religion” [Qur’an 5:3]. 
Islam is based on two sources: the Qur'an [19] and the Sunnah of Prophet Muhammad (peace be upon him), which is mainly defined by Muslim scholars as "all that Prophet Muhammad said, did, or agreed on". The Sunnah is documented in six authenticated resources (Sahih al-Bukhari, Sahih Muslim, Sunan Abi-Daud, Jamea al-Termethi, Sunan Ibn-Maja, and Sunan al-Nisaae). The general understanding of ethics in Islam can be expressed as a "set of moral principles and guidance that distinguishes what is right behavior from what is wrong, or what one should do or not". The Qur'an and Sunnah show that all of a Muslim's life should be guided by Islamic ethics [20-23]. Allah said, "Verily this Qur'an doth guide to that which is most right (or stable)" [Qur'an 17:9]. Allah uses the term akhlaq or khuluq in the Qur'an to refer to ethics. The importance of ethics in Islam is shown when Allah describes Prophet Muhammad as having great ethics: "Prophet of Allah had been raised to a great spiritual dignity" [Qur'an 68:4]. Also, Prophet Muhammad said, "I was sent to complement the best of ethics". The Qur'an represents the main dimension of the concept of ethics in Islam; when Aisha, the wife of Prophet Muhammad, was asked about the ethics of the Prophet, she replied: "His ethics was the Qur'an" [20]. Allah orders Muslims to follow and obey Prophet Muhammad as a model: "You have indeed in the Messenger of Allah an excellent example" [Qur'an 33:21]. Allah describes the people of the best nation as: "You are the best of peoples, evolved for mankind, enjoining what is right (ma'ruf), forbidding what is wrong (munkar), and believing in Allah" [Qur'an 3:110]. The Qur'an and Sunnah use a set of ethical terms to describe the concept of goodness, such as: Sidq (Truth), Khair (Goodness), Birr (Righteousness), Qist (Equity), 'Adl (Equilibrium and Justice), Haqq (Truth and Right), Ma'ruf (Known and approved), Amanah (Honesty), Ikhlas (Sincerity), and Taqwa (Piety). Pious actions are described as salahat and impious actions are described as sayyi'at [24]. Some of these terms are repeated in tens of Qur'anic verses as well as in the Sunnah. Table 2 shows a survey of the frequency of some ethical terms used in the Qur'an, which shows that Islam supports and rewards people for all goodness and warns, prohibits or punishes people for badness. Historically, many Muslim scientists and scholars made great efforts in the field of ethics. They wrote many books and articles to explain the concept of ethics in Islam. They consider ethics the most honorable science, or the crown of sciences, which brings success and happiness to individuals, communities and society. Alfairuzabady [25] and Ibn Mandhor [26] mention that, linguistically, ethics means one's default behavior ("tab'a" or "sajiyyah"), kindness ("moroa'a") or religion, reflecting the natural characteristics of mankind that are straightforward and consistent, besides the acquired characteristics that become like natural characteristics [25-28]. On the other hand, Ibn Miskawah [29] and Abu Hamid Al-Ghazali [23] define ethics as a fixed state of the human soul according to which a person acts or behaves easily and simply, without the need for thinking, as if by default. Abd al-Karim Zaydan [30] views ethics as a set of fixed characteristics and meaningful values in the human soul according to which an act is considered accepted as good or rejected as bad, so that the person will perform or reject it [28-30].

Table 2: Frequency of ethical related terms in the Qur'an.
<table> <thead> <tr> <th>Ethical Related Terms (Good Ethics)</th> <th>Frequency</th> </tr> </thead> <tbody> <tr> <td>Truth (Sidq)</td> <td>&gt;110</td> </tr> <tr> <td>Fair and right (‘Adil, Haq)</td> <td>&gt; 300</td> </tr> <tr> <td>Goodness (Khair, Maaroorf)</td> <td>&gt; 180</td> </tr> <tr> <td>Love &amp; good dealing (Hub, Husn)</td> <td>&gt; 180</td> </tr> <tr> <td>Forgiveness &amp; Kindness (Afo, Ghafoor, Raoof)</td> <td>&gt; 350</td> </tr> <tr> <td>Merciful (Raheem)</td> <td>&gt; 250</td> </tr> <tr> <td>Keep promises &amp; Sincere (Ahd, Ikhlas)</td> <td>&gt; 40</td> </tr> <tr> <td>Wisdom (Hikma)</td> <td>&gt; 160</td> </tr> <tr> <td>Science &amp; Education (Elm, Taaleem)</td> <td>&gt; 800</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Ethical Related Terms (Bad Ethics)</th> <th>Frequency</th> </tr> </thead> <tbody> <tr> <td>Lie (Katheb)</td> <td>&gt; 240</td> </tr> <tr> <td>Unfair (Dhulm)</td> <td>&gt;210</td> </tr> <tr> <td>Hypocrisy (Nefaq, Fitnah)</td> <td>&gt;55</td> </tr> <tr> <td>(Fush, Fusog)</td> <td>&gt; 80</td> </tr> <tr> <td>Plotting (Makr, Iftera)</td> <td>&gt;110</td> </tr> </tbody> </table> Islam considers ethics as an essential factor in developing or rebuilding the society based on understanding of the Qur’an and Sunnah. This ethical rebuilding of human behavior will bring benefit, peace, and prosperity to mankind [31]. The ethical behavior affected by set of factors, which can be classified according to their level of effect into: global, nation, community, family, and individual. Historically, Islamic system is the only system that produces encyclopedic scientists such as Al-Khawarizmi, Ibn-Rushd, Ibn-Hayyan, Ibn-Sena, Ibn-Albetar, and Ibn-Alhaitham. Each one of them was scientists in several fields such as fiqh, hadith, language and art, mathematics, chemistry, physics, medicine, or astronomy. They were models and behave according to the Islamic ethics. 4. WHY ISLAMIC ETHICAL PRINCIPLES ARE NEEDED According to our experience, analysis of the literature and survey results of ethical terms in the Qur’an and the Sunnah, there is lack of knowledge and misunderstanding about Islam and Islamic ethics by many of non-muslims. We can summarize the main characteristics for Islamic ethical principles as: − Historically, Islamic ethical principles was tested in real life and shown that it is the suitable solution to convert society to the best. The clear example shown when Islam converts the Bedouin society in the Arab-land into modern society within two decades, then build a great nation, which leads and develop main part of world (with great ethics such as justice, fairness, honest, truth, goodness) as shown in the Umayyad, Abbasid and Andalusia eras. − Islamic ethics are comprehensive, which organize the relation between mankind and Allah, mankind them self, mankind and other creatures, and mankind and environment. Allah said to the prophet “We have not send thee but as messenger to all mankind, giving them glad tiding, and warning them against sin” [Qur’an 34:28], also “We send thee not, but as mercy for all creatures” [Qur’an 21:107]. - Islamic ethical principles are stable and standard. It deals with people in justice, fairness and equality regardless of their race, relationship, nation, religion, or color. Allah said “Verily this Qur’an doth guide to that which is most right (or stable)” [Qur’an 17:9]. The prophet said “all people are equal; there is no difference between Arabic and non-Arabic except in taqwa (piety). 
- Islamic ethical principles work toward reactivating the purity (fitra) of people as they created by Allah and out of devil’s affect. “So set your face towards the religion of pure Islamic monotheism; Allah’s fitra with which he has created mankind” [Qur’an 30:30]. - Islamic ethics rebuild the society through building individuals; starting before or from day of birth and continue through all his life. - Islamic methodology of life is guided by Islamic ethics. It associated theoretical principles with implementation through set of worshiping and dealing acts. There are more than fifty verses in Qur’an mentioned to “those who believe and do deeds of righteousness”. - Islamic ethical principles associate mankind acts with his intension, which is known by Allah. “Except as Allah wills; for he knoweth what is manifest and what is hidden” [Qur’an 87:7]. The prophet also said “All your acts are associated with your intentions”. - Islam considers human life is a challenge between mankind and the devil. Allah supports the mankind with forgiveness and mercy using (tawba and isteghfar). Also Allah duplicating rewards for good deeds and canceling sins when we make istighfar or tawba. “Verily the devil (satan) is an enemy to you: so treat him as enemy” [Qur’an 35:6]; also “Allah who forgiveth sin, accepteth repentance” [Qur’an 40:3]. The unethical behavior for some Muslims can be consider as one of the essential reasons for their weakness, which also leads to the unfair concept about Islam and Islamic ethics by some of the non-Muslims. Several lectures and articles show that Muslims’ population is approximately 1.3 billions which represent more than 20% of the world population. They are distributed mainly in more than 60 countries. They are dealing with computer and IT related applications directly or indirectly. Yearly, many Muslims graduated from computer and information technology programs within hundreds of universities in the Islamic world and other universities as well as training centers. This offers tens of thousands of Muslims as computer and software engineering professionals. This shows that the Muslims especially the software engineering professionals should have an effective role in computer and software engineering field and its related code of ethics. Conferences, discussions, and literature reviews show that Muslims’ researchers have simple effort in computer and software engineering ethics and they still do not adopt standard code of ethics based on Islamic values. Also, there is a lack of an efficient and effective comprehensive database, e-learning tool, and textbooks related to Islamic ethical values in computer and software engineering. 5. SOFTWARE ENGINEERING PROFESSIONALS ETHICS Software engineering professionals have specialized knowledge and often have positions with authority and respect in the community so, they are able to have a significant impact upon the world, including many of the things that people value [32]. Computer professionals find themselves in a variety of professional relationships with other people [33, 34] that involve a diversity of interests, and sometimes these interests can come into conflict with each other. Professional organizations in the U.S., like ACM (Association for Computing Machinery) and IEEE (Institute of Electrical and Electronic Engineers), have established codes of ethics, curriculum guidelines and accreditation requirements to help computer professionals understand and manage ethical responsibilities. 
In addition, both the ACM and IEEE have adopted Codes of Ethics for their members [13]. ABET (Accreditation Board for Engineering Technologies) has long required an ethics component in the computer engineering curriculum. In 1991, CSAC/CSAB (Computer Sciences Accreditation Commission / Computer Sciences Accreditation Board) also adopted the requirement that a significant component of computer ethics be included in any computer science degree granting program that is U.S. accredited. IEEE and ACM are two of the main professional committees in field of computer and engineering. They work toward define standard principles for software engineer in term of professional and code of ethics. They produce early versions and try to upgrade it from time to time. Major revisions were made between version 3.0 that was widely distributed and version 5.2, the recently approved version [18]. The preamble was significantly revised to include specific standards that can help professionals make ethical decisions. The short version of the code summarizes aspirations at a high level of abstraction. Software engineers shall commit themselves to making the analysis, specification, design, development, testing, and maintenance of software a beneficial and respected profession. In accordance with their commitment to the health, safety, and welfare of the public, software engineers shall adhere to eight principles [35]. We can summarize the results of L.R. and survey for the ethical related problems for software engineering professionals as: - Although there is big effort done by many international organizations but we still have several problems related to ethics in computer and software engineering. - Many of the software engineering professionals are still participating in developing software to support many of computer-based system that cause huge destruction for human, health, wealth and environment. - There is no standard code of ethics or principles for software engineering professionals based on Islamic values. - Lack of knowledge (especially software engineering professionals, students and lecturers) about real Islamic ethical values and capability of implementing it in real life. - Lack of dedicated database and E-learning tool for Islamic ethical values. - Lack of guidelines to enhance curriculum with Islamic ethical values especially for software engineering related courses. - There is no Ethical Evaluation Model for software engineering professionals based on the defined Islamic code of ethics. 6. FRAMEWORK FOR MODELING SOFTWARE ENGINEERING PRINCIPLES BASED ON ISLAMIC ETHICAL VALUES To solve the ethical related problems for software engineering professionals and to help Muslims to understand the Islamic ethics we propose this framework for modeling software engineering principles based on Islamic values as shown in Fig. 1. The main objectives for this framework can be summarized as: − Offering solutions for some of the problems related to ethics in software engineering. − Offering standard code of ethics or principles for software engineering professionals based on Islamic ethical values. − Offering a suitable advising and warning for software engineering professionals to avoid participation directly or indirectly in harming of innocents or destruction of health, wealth and environment. − Offering a valuable knowledge about real Islamic ethics to clarify the current cloudy picture about Islam or Islamic ethics, especially by the non-Muslims. 
− Offering a comprehensive database and web-based E-learning tool for ethics, Islamic ethics, and software engineering professional ethical principles which offer ethical guidelines for a high ratio of people, especially 1.3 billion Muslims with tens of thousands of computer and software engineering professionals to guide them in their work and life based on Islamic values. − Offering guidelines for curriculum developer to enhance computer and software engineering courses with Islamic ethical values. − Offering an effective mathematical / statistical evaluation model for software engineering based on Islamic ethical values, which is a supportive tool for SW quality management. − Offering a path or guidance for people specially the software engineering professionals to reactivate their good ethics and show them how to implement theoretical aspects of good ethics practically. In this research paper we focus on defining a new version of software engineering principles based on Islamic values. The other objectives will appear later in other publications. Fig. 1. Framework for modeling SWE principles based on Islamic ethical values. 7. SOFTWARE ENGINEERING PRINCIPLES BASED ON ISLAMIC ETHICAL VALUES This research paper work toward defining novel version of standard code of ethics or principles for SWE professional based on Islamic ethical values. It represents the integration between the Islamic ethics (according to Qur’an and Sunnah) and current software engineering professionals' ethical principles. Section 3 and 4 in this paper summarize: the concept of ethics in Islam, its importance in enhancing individual’s behavior, and its main characteristics. Software engineering professionals have to commit themselves to follow these principles in all software development phases: communications, data collection, analysis and requirements, design and specifications, construction, testing and maintenance. These following proposed ethical principles are guidance for software engineering professionals especially the Muslims: − Work as vicegerent of Allah: The main objective of creating all mankind by God is to worship him; by developing and reconstructing the earth for the best (as vicegerent or caliph) through their good acting and deeds. Allah said “I have only created Jinn and Men, that they may serve me” [Qur’an 51:56]. “Allah said to the angels: I will create a vicegerent on earth” [Qur’an 2:30]. − Spend your age in performing goodness and collect your wealth in ethically legal ways: the age and wealth for any mankind are predetermined and fixed by Allah SWT before they born; but they are responsible for their acts and decisions as shown in the following verses of Qur’an. “To every people is a term appointed: when their term is reached, not an hour can they cause delay, nor an hour can they advance” [Quran 7:34]. “For them a substance determined” [Qur’an 37:41], “there is no moving creature on the earth but its sustenance dependeth on Allah” [Qur’an 11:6]. ”verily, we showed him the way, weather he be grateful or ungrateful” [Qur’an 29:3]. − No secret act and each act with good intention: God knows all what we declared or keep it secretes as well as all our acts are associated with our intentions (niyyah); so we have to be clear in our work. "Allah he kneweth what is manifest and what is hidden” [Qur’an 87:7]. The prophet said “All your acts are associated with your intention”. 
− Performing duty is worship: Software engineering professionals have to know that performing their duty is worship and that Allah SWT will reward them for goodness and punish them for evil or sinful deeds. Allah SWT said in the Qur'an, "Then shall anyone who has done an atom's weight of good, see it and anyone who has done an atom's weight of evil, shall see it" [Qur'an 99:7-8]. The Prophet said in his Hadith, "work is worship".

− Understand and follow the standard ethics, especially Islamic ethics: Software engineering professionals have to understand the standard Islamic ethics (based on the Qur'an and Sunnah). They have to consider it the highest standard that they should follow in their life and work. "Verily this Qur'an doth guide to that which is most right (or stable)" [Qur'an 17:9]. "The religion before Allah is Islam" [Qur'an 3:19].

− Work consistently with the good of the Ummah's (Nation's) interests: Software engineering professionals have to work consistently with the interests of the Ummah (Nation), which are based on Islamic ethical values, and should not harm it. "Every Muslim is a shepherd (leader) and he is responsible for that which he shepherds."

− Work consistently with the good of community or organization interests: Software engineering professionals have to work in a manner that is in the best interests of their community or organization and consistent with the nation's interest based on Islamic standards. The Prophet said, "those who cheat us are not part of us (our Ummah)".

− Meet the highest professional standards: Software engineering professionals have to ensure that their products and related modifications meet the highest professional standards and do not conflict with ethical values. The Prophet said in a Qudsi Hadith: "Allah loves those who accomplish their job in its best (perfect) manner".

− Fair judgment: Software engineering professionals have to maintain integrity and independence in their professional judgment and have to be fair according to ethical values. Allah SWT said in the Qur'an, "when you judge between others you judge with justice" [Qur'an 4:58].

− Management with honesty: Software engineering professional managers and leaders have to subscribe to and promote an ethical approach to the management of software development and maintenance. They have to show honesty (amanah) and equity in performing their duty.

− Work with the highest professionalism: Software engineering professionals have to advance the integrity and reputation of the profession consistent with the interest of the Ummah (Nation). They have to do their best using their highest professional skills.

− Be cooperative and supportive: Software engineering professionals have to be fair to and supportive of their colleagues and avoid selfishness. Allah mentions in the Qur'an: "Help you one another in virtue, righteousness and piety (bir and taqwa); but do not help one another in sin and transgression" [Qur'an 5:2].

− Lifelong learning: Software engineering professionals have to participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. The Prophet said, "seek knowledge from birth to death".

− Protect confidentiality: Software engineering professionals have to protect the confidentiality and security of the client, employer, community and Nation (Ummah).
− Remember the Day of Judgment: Software engineering professionals have to know that doing goodness and producing useful knowledge will be rewarded by Allah in this life and after death, until the Day of Judgment.

8. CONCLUDING REMARKS

Software is the core of any computer-based system, and such systems affect all aspects of our lives. Software development is a complex, expensive, and ethically sensitive engineering task which requires software engineers who are qualified both professionally and ethically. Although ethical and professional principles for software engineering professionals have been adopted by professional organizations and committees such as the IEEE, ACM, and ABET, the literature review, studies, and survey results show that software engineering professionals still face many ethics-related problems. Islamic sources (the Qur'an and Sunnah) provide a high standard of ethics at the individual, community, and Ummah (nation) levels. Islamic ethics are stable, comprehensive, fair, and standard ethics suitable for all nations and times; when followed, they lead to an ethically great society. Since there has been little effort to consider Islamic ethics in developing software engineering principles, we propose a framework for modeling software engineering principles based on Islamic ethical values. The paper proposes adopting new software engineering principles based on Islamic ethical values. This effort can help in solving many of the current ethics-related software development problems. It offers a good opportunity for software engineers, especially Muslims, to understand and implement such standard and comprehensive ethical values in their lives, as well as to play their proper role in software development.

ACKNOWLEDGEMENT

The Malaysian Ministry of Higher Education (MOHE) and IIUM kindly provided funding for this research through the "IIUM Fundamental Research Grant System" (IFRGS), project no. 0703-76.

REFERENCES

[19] The Noble Qur'an, English Translation of the Meanings and Commentary, King Fahd Complex for the Printing of the Holy Qur'an, KSA, 1417 H.
{"Source-Url": "http://journals.iium.edu.my/ejournal/index.php/iiumej/article/download/99/61", "len_cl100k_base": 6984, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 29203, "total-output-tokens": 8852, "length": "2e12", "weborganizer": {"__label__adult": 0.002399444580078125, "__label__art_design": 0.001312255859375, "__label__crime_law": 0.004039764404296875, "__label__education_jobs": 0.01230621337890625, "__label__entertainment": 0.000186920166015625, "__label__fashion_beauty": 0.0006213188171386719, "__label__finance_business": 0.0020999908447265625, "__label__food_dining": 0.0013217926025390625, "__label__games": 0.003330230712890625, "__label__hardware": 0.003276824951171875, "__label__health": 0.002460479736328125, "__label__history": 0.0008978843688964844, "__label__home_hobbies": 0.0002703666687011719, "__label__industrial": 0.0018205642700195312, "__label__literature": 0.005931854248046875, "__label__politics": 0.003963470458984375, "__label__religion": 0.13134765625, "__label__science_tech": 0.036895751953125, "__label__social_life": 0.0008325576782226562, "__label__software": 0.0175323486328125, "__label__software_dev": 0.7646484375, "__label__sports_fitness": 0.000579833984375, "__label__transportation": 0.0015363693237304688, "__label__travel": 0.0005617141723632812}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37159, 0.03689]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37159, 0.70249]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37159, 0.9276]], "google_gemma-3-12b-it_contains_pii": [[0, 2926, false], [2926, 6531, null], [6531, 9974, null], [9974, 13562, null], [13562, 16648, null], [16648, 19983, null], [19983, 23336, null], [23336, 25372, null], [25372, 25451, null], [25451, 28913, null], [28913, 32216, null], [32216, 34793, null], [34793, 37159, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2926, true], [2926, 6531, null], [6531, 9974, null], [9974, 13562, null], [13562, 16648, null], [16648, 19983, null], [19983, 23336, null], [23336, 25372, null], [25372, 25451, null], [25451, 28913, null], [28913, 32216, null], [32216, 34793, null], [34793, 37159, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37159, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37159, null]], "pdf_page_numbers": [[0, 2926, 1], [2926, 6531, 2], [6531, 9974, 3], [9974, 13562, 4], [13562, 16648, 5], [16648, 19983, 6], [19983, 23336, 7], [23336, 25372, 8], [25372, 25451, 9], [25451, 28913, 10], [28913, 32216, 11], [32216, 34793, 12], [34793, 37159, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37159, 0.16352]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
0cf9ed62786bd4bbc2fa0d949cf6e2e717badae4
Basics

A Prolog program is a set of rules built from atoms of a first-order logic (FOL) theory. Formally, a rule is of the form
\[ h \;\text{:-}\; l_1, \ldots, l_n \]
where \( h \) is an atom and each \( l_i \) is either an atom or a negated atom, written `not a` or `\+ a`, for some atom \( a \). Eclipse Prolog (a Prolog interpreter) will be used during the course of the class. Given a Prolog program, we can compile it and ask questions, which will be answered either with yes/no or with one (or many) variable assignments. The intuition is that each Prolog program represents a knowledge base (KB) and each query posed to the KB represents a theorem which needs to be evaluated against the KB.

Knowledge Representation in Prolog

To represent our knowledge about a domain of interest, we first need to identify its objects, their properties, and the relationships between the objects. For example, to describe family relations, we need the following:
- The individuals, such as tom, marry, ...
- The properties of each individual, such as gender, age, ...
- The relationships between individuals, such as father, mother, son, daughter, ...

For example, let us consider a family of four individuals: John, Marry, Tom, and Celine. John is the father of Tom and Celine, and Marry is the mother of Tom and Celine. Tom is a boy and Celine is a girl. Tom is older than Celine. Given the above information, can we answer the following questions?
- Is John a male?
- What is the gender of Celine?
- Are Celine and Tom siblings?
- Who is a brother of Celine?
- Who is a brother of Tom?
- etc.

To represent the fact that John is the father of Tom and Celine and Marry is the mother of Tom and Celine, in Prolog we write

```
father(john, tom).
father(john, celine).
mother(marry, tom).
mother(marry, celine).
```

Here, father and mother are predicate symbols, and john, marry, tom, and celine denote the four individuals of the family. These are *constants*; in Prolog, a string starting with a lowercase letter denotes a constant. We edit a file called `p1.pl` which consists of the above facts and compile the file using Eclipse Prolog. Now that we have the program, we can ask questions about the individuals and their relationships. To ask whether john is the father of tom, we write in the query entry of Eclipse

```
father(john, tom).
```

Try the above and see what happens. Here is what you will see in the **Results** screen:

```
?- father(john, tom).
Yes (0.00s cpu)
```

To ask who is the father of tom, we write

```
father(X, tom).
```

Here, X is a *variable*. In Prolog, a string beginning with an uppercase letter denotes a variable. Instead of the 'yes/no' answer we get the following:

```
?- father(X, tom).
X = john
Yes (0.00s cpu)
```

Notice the difference between the answers for queries with variables and queries without variables. Queries with variables are answered with an assignment for the variable (X = john, meaning that john is an answer for the query father(X, tom)). For queries without variables, the answer is yes/no.

How do we represent the other facts?
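The notes do not reproduce the listing that was added at this point; a minimal sketch of the kind of facts p2.pl plausibly contains is shown below (the predicate names boy/1 and girl/1 are assumptions on my part; older/2 does appear later in p3.pl):

```
% Plausible extra facts for p2.pl -- a sketch, not the original listing.
boy(tom).             % Tom is a boy
girl(celine).         % Celine is a girl
older(tom, celine).   % Tom is older than Celine
```

Whatever the exact predicates used, the key point of the next paragraph is that facts like these do not, by themselves, let Prolog conclude anything about male, female, sibling, or brother.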
We add the above to the program `p1.pl` and call the new program `p2.pl`. We compile the file and ask questions. You can try your own questions to see how Prolog answers them. Let us figure out what happens if we ask the program the following questions:

- Is John a male? `?- male(john).` NO
- What is the gender of Celine? `?-`
- Are Celine and Tom siblings? `?- sibling(celine, tom).` NO
- Who is a brother of Celine? `?- brother(X, celine).` NO
- Who is a brother of Tom? `?- brother(X, tom).` NO

**What is wrong?**

The problem is that we have in our mind a lot of information, such as: John, being a father, is normally a male; or Celine and Tom, having the same parents, are siblings. However, this information is not represented in our program. For this reason, the answer will be NO. We can, of course, represent the fact that John is a male and Marry is a female using the facts:

```
male(john).
female(marry).
```

Asking now whether John is a male, we will get the correct answer. Observe that to record the gender of someone, denoted by `x`, we need either `male(x)` or `female(x)`. This solution will not be good if we have a large number of individuals.

**Can we do it better than that?**

A reasonable assumption would be that every individual is either a male or a female. So, we can choose to represent the 'female' individuals and write a rule to deduce that someone who is not a female is a male. This can be written as follows:

```
male(X) :- not female(X).
```

The `not` in the above rule is called the *negation-as-failure* operator. The rule is read as: if `female(X)` cannot be proven, then `male(X)` is true. Suppose that we have the program `p3.pl` as follows:

```
father(john, tom).
father(john, celine).
mother(marry, tom).
mother(marry, celine).
female(celine).
female(marry).
male(X) :- not female(X).
older(tom, celine).
```

Now, if we ask our program about the gender of every individual, we will get the correct answer. As you can see, a Prolog program can be extended incrementally by adding new atoms (facts) and rules. Now, suppose that we also wanted to add to p3.pl some information about the addresses of the four individuals. John and Marry are in Las Cruces, whereas their two children are studying in California (San Francisco, to be precise). We could do so by adding the facts:

```
livein(john, las_cruces).
livein(marry, las_cruces).
livein(tom, san_francisco).
livein(celine, san_francisco).
```

As you can see, constants can also contain the underscore symbol. Let p4.pl be the program consisting of p3.pl and the above four atoms. If we compile it and ask queries about the gender of the individuals in the story, we get the same results. We can also ask queries about who is living where, and the program will return the correct answers (a sample session is shown just before the homework statements below). Now, let us ask the query

```
?- male(las_cruces).
```

We get the answer 'YES'. This is certainly not what we wanted, but why is it so? The culprit is the rule that says whoever is not a female is a male! Of course, the error is ours. For Prolog, a string like 'john' is not much different from the string 'las_cruces', except that the former is shorter and they contain different characters. How will you solve this problem? This is the first homework for this class. The second homework asks you to extend the program so that we can answer questions about relationships between members of an extended family.
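For reference, a session asking p4.pl who lives where might look like the following (the exact formatting of the answers depends on the Prolog system; only predicates defined above are used):

```
?- livein(X, san_francisco).
X = tom
Yes (0.00s cpu, solution 1, maybe more)

?- livein(john, Where).
Where = las_cruces
Yes (0.00s cpu)
```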
**Homework 1 (Question 1):** Extend the program p4.pl with the necessary rules and facts so that the following queries will be answered correctly:
- Questions about the gender of people.
- Questions about who is living where.
- Questions about the relationships between Tom and Celine.

**Homework 1 (Question 2):** Let the program resulting from Question 1 be p5.pl. Extend the program with the following facts:
- John has a brother Paul.
- Marry has a brother Ringo and a sister Ono.
- Paul is living in San Francisco.
- Everyone lives in San Francisco or Las Cruces.

Define the relationships *uncle* and *aunt*. Your program should be able to answer queries about the relationships between the new individuals and the previously mentioned ones. It should also give correct answers about the living place of each individual. Furthermore, the answers should not change if we add some facts about tables and chairs in the class.

Negative Information

The negation-as-failure operator \+ is a convenient way for us to write rules under the closed-world assumption, which states that an atom is false if it cannot be proven to be true. In using \+ we have to make sure that no cycle between atoms can occur, as in the following case:

```
male(X) :- person(X), \+ female(X).
female(X) :- person(X), \+ male(X).
```

The intuition of the above rules is clear. However, unlike the earlier program, where we had only the first rule, if we have a person John and do not specify the gender of John, asking `?- male(john).` will lead to an error. (Try it!)

Prolog allows us to define recursive predicates in a very straightforward way. In Prolog, it is very easy to define the relationship ancestor (try to write the definition for this in first-order logic!). Intuitively, we can define ancestor in two steps:
1. If \(X\) is a parent of \(Y\) then \(X\) is an ancestor of \(Y\);
2. If \(X\) is an ancestor of \(Y\) and \(Y\) is an ancestor of \(Z\), then \(X\) is an ancestor of \(Z\).

This can be translated into the following Prolog rules:

```
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- ancestor(X, Y), ancestor(Y, Z).
```

The first rule is the base case and the second rule is the recursive case; together they represent the idea of an inductive definition. Adding the above rules to the information about John's family, we get the program p5.pl as follows:

```
father(john, tom).
father(john, celine).
mother(marry, tom).
mother(marry, celine).
parent(X, Y) :- father(X, Y).
parent(X, Y) :- mother(X, Y).
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- ancestor(X, Y), ancestor(Y, Z).
```

Let us compile the program and run the query `?- ancestor(marry, tom).` in Eclipse. The program responds with 'More', meaning that the answer to the query is 'yes' and there might be another answer; if we click on 'More', we get an "Overflow ..." error. This means that we probably got into an infinite loop! More seriously, if we exchange the places of the last two rules and create the program p6.pl as follows:
```
father(john, tom).
father(john, celine).
mother(marry, tom).
mother(marry, celine).
parent(X, Y) :- father(X, Y).
parent(X, Y) :- mother(X, Y).
ancestor(X, Z) :- ancestor(X, Y), ancestor(Y, Z).
ancestor(X, Y) :- parent(X, Y).
```

the same query will not be answered correctly. Why? This is a well-known problem in Prolog. The main reason is that Prolog uses a fixed order (top-to-bottom) in selecting rules during query answering. As a rule of thumb, the base rule of a recursive definition always needs to be placed before the general rule. To avoid the infinite loop that might appear as in p5.pl, we need to stop the application of the general rule once the answer has been found. There are two ways: (i) use the cut operator, which is denoted by !; (ii) strengthen the recursive rule so that it makes progress through parent before recursing.

```
ancestor(X, Y) :- parent(X, Y), !.
ancestor(X, Z) :- ancestor(X, Y), ancestor(Y, Z).
```

This solution is shown in p7.pl. The cut operator ! causes Prolog to commit to all the choices made since the parent goal was invoked and to discard all the other alternatives. Here, it causes Prolog to commit to the choice made when ancestor(marry, tom) is invoked and to discard other possibilities that might exist if the second rule could be selected. The other solution is to change the second rule as follows:

```
ancestor(X, Y) :- parent(X, Y).
ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
```

The program p8.pl contains this solution.

We will continue in this class with recursive definitions and move on to another important type of data in Prolog called the list. In Prolog, a list is a special type of term. It is a recursive data structure consisting of pairs (whose tails are lists). A list is either the atom [], called nil, or a pair of the form [H|T] whose tail T is a list. The notation [a, b, c] is shorthand for the list [a|[b|[c|[]]]]. There are several useful predicates that operate on lists. For example, we can compute the number of elements in a list using the following rules (a sample query is shown after the exercise list below):

```
list_length([], 0).
list_length([H|T], N1) :- list_length(T, N), N1 is N+1.
```

Notice that we write "N1 is N+1" instead of "N1 = N+1". This is because N1 = N+1 would mean that N1 is a term that can be unified with the term N+1. "N1 is N+1", on the other hand, asks Prolog to evaluate the term N+1 and assign its value to N1. (Try this using the compile scratch-pad.)

Your job in this class is to define the following predicates:
- *list_member*(X, Y), which is true if and only if X is a member of the list Y.
- *list_append*(X, Y, Z), which is true if and only if Z is the list constructed by appending the elements of Y to X.
- *list_delete*(X, Y, Z), which is true if and only if Z is the list consisting of the elements of X which do not appear in Y.
- *list_intersection*(X, Y, Z), which is true if and only if Z is the list consisting of the elements appearing in both X and Y.
- *list_no_duplicate*(X, Y), which is true if and only if Y is the list consisting of the elements of X without duplicates.
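As a quick check of list_length above, a session might look like this (the timing text printed by Eclipse will vary):

```
?- list_length([a, b, c], N).
N = 3
Yes (0.00s cpu)

?- list_length([], N).
N = 0
Yes (0.00s cpu)
```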
In this class, we will practice writing some recursive definitions which, hopefully, will provide us with a better understanding of Prolog. In particular, we will see some examples that explain why our program, although it looks perfectly fine, can get into an infinite loop. We will take some simple problems that are well known to all of us and write programs to solve them.

**Example 1** Defining non-negative integers. Isn't it trivial to define the set of non-negative integers? Well, the set of non-negative integers is \{0, 1, 2, \ldots, n, \ldots\}. We can come up with something like:

```
% num(N-) iff N is a non-negative integer
num(0).
num(N1) :- num(N), N1 is N+1.
```

Notice that a line beginning with % is a comment. The comment here states that we define a predicate num(-), where the minus sign stands for output, which is true iff its output is a non-negative number. Compiling the program and posting the goal `?- num(X).` to Prolog, it will correctly generate all the numbers 0, 1, 2, ... However, when we ask the query `?- num(3).`, the program will correctly answer with 'yes' and then say that there are more solutions. Ask for more and we get into an infinite loop. Why? The reason is that in the second rule, we ask Prolog to 'guess' a number and then check whether that number plus 1 equals the original number. To keep the program from getting into the loop, we could rewrite the second rule as follows:

```
num(0).
num(N1) :- N1 > 0, N is N1-1, num(N).
```

Asking `?- num(3).` now, the system correctly avoids the loop. However, if we ask for all $X$ by posting the goal `?- num(X).`, we will get into trouble. In this case, we have defined a predicate which is correct only when the input is given. The comment should be changed to

```
% num(N+) iff the input N is a non-negative integer
```

The above two programs exhibit a property of Prolog whose root is the answering mechanism implemented in many Prolog systems: the system is sound but incomplete. While the first program is able to generate the set of non-negative integers, the second one provides a "better answer" to the query of whether a given number is non-negative. This suggests that we should use the first program only in situations where we would like to generate all the numbers. The second program is more appropriate when we want to check whether a number is non-negative. It is important to realize when/where/how to write a program that terminates when you need it to.

**Checking for prime numbers**

We will now write a program that determines whether a number $N$ is a prime number or not. The most primitive algorithm can be described by the formula
$$(\forall M,\; 1 < M < N,\; N \text{ is not divisible by } M) \rightarrow N \text{ is a prime number}.$$
So, we can implement this step by step as follows. First, we define the predicate `divisible` as follows.

```
% divisible(X+,Y+) is true iff X is divisible by Y
divisible(0, Y).
divisible(X, Y) :- X > 0, X1 is X-Y, divisible(X1, Y).
```

Try out this program and notice the difference when you run divisible(1,3), divisible(100,4), and divisible(10, X). Next we define the predicate `compoundNumber(N)` that states that $N$ is a compound (composite) number. The obvious way is to define a predicate `less(M, N)` that finds a number less than or equal to $N$ and then check it for divisibility.

```
% less(X+, Y+) is true iff X is smaller than Y
less(0, 1).
less(0, Y) :- Y > 0, Y1 is Y - 1, less(0, Y1).
less(X, Y) :- X > 0, Y > 0, X1 is X-1, Y1 is Y-1, less(X1, Y1).
```

The next rule defines the `compNum(N)` predicate using the above idea:

```
% compNum(X) is true if X is a compound number
compNum(X) :- less(Y, X), Y > 1, Y < X, divisible(X, Y).
```
This rule does not work properly because the variable Y has to be guessed, and less does not work properly when it has to guess one of its arguments. For this reason, we develop a new set of rules. The idea is to put a bound on the possible values that we need to check as divisors of X. The rules are as follows.

```
% compoundNumber(X+) is true if X is a compound number
compoundNumber(X) :- X >= 0, X1 is X - 1, compoundInBound(X, X1).
compoundInBound(X, Y) :- Y > 1, Y < X, divisible(X, Y).
compoundInBound(X, Y) :- Y > 1, Y < X, \+ divisible(X, Y), Y1 is Y - 1, compoundInBound(X, Y1).
```

In the above, compoundInBound(X, Y) defines a predicate that is true if X is divisible by some integer between 2 and Y. The two rules search for a divisor of X in the range 2 ... X-1. Finally, the rule for checking prime numbers is as follows.

```
% primeNumber(X) is true if X is a prime number
primeNumber(X) :- num(X), !, \+ compoundNumber(X).
```

Try to run this program and see what happens!
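For concreteness, here is the behavior one might expect from the rules above for a couple of queries; this is a sketch, assuming the second (input-checking) version of num/1 is the one that is loaded, and the exact answer formatting depends on the Prolog system:

```
?- primeNumber(7).
Yes (0.00s cpu)

?- primeNumber(9).
No (0.00s cpu)
```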
{"Source-Url": "https://www.cs.nmsu.edu/~tson/classes/fall04-579/prolog-note.pdf", "len_cl100k_base": 5033, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 20134, "total-output-tokens": 5619, "length": "2e12", "weborganizer": {"__label__adult": 0.00030994415283203125, "__label__art_design": 0.00032591819763183594, "__label__crime_law": 0.0003933906555175781, "__label__education_jobs": 0.005252838134765625, "__label__entertainment": 9.042024612426758e-05, "__label__fashion_beauty": 0.0001685619354248047, "__label__finance_business": 0.0002092123031616211, "__label__food_dining": 0.0004954338073730469, "__label__games": 0.0012006759643554688, "__label__hardware": 0.0011425018310546875, "__label__health": 0.0004787445068359375, "__label__history": 0.0002803802490234375, "__label__home_hobbies": 0.00020039081573486328, "__label__industrial": 0.0006480216979980469, "__label__literature": 0.00045990943908691406, "__label__politics": 0.00026798248291015625, "__label__religion": 0.0006127357482910156, "__label__science_tech": 0.042572021484375, "__label__social_life": 0.00018227100372314453, "__label__software": 0.01216888427734375, "__label__software_dev": 0.931640625, "__label__sports_fitness": 0.00031256675720214844, "__label__transportation": 0.0005450248718261719, "__label__travel": 0.00018262863159179688}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 18516, 0.00885]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 18516, 0.89883]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 18516, 0.90046]], "google_gemma-3-12b-it_contains_pii": [[0, 1560, false], [1560, 3518, null], [3518, 5381, null], [5381, 7846, null], [7846, 10760, null], [10760, 12859, null], [12859, 15279, null], [15279, 17391, null], [17391, 18516, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1560, true], [1560, 3518, null], [3518, 5381, null], [5381, 7846, null], [7846, 10760, null], [10760, 12859, null], [12859, 15279, null], [15279, 17391, null], [17391, 18516, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 18516, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 18516, null]], "pdf_page_numbers": [[0, 1560, 1], [1560, 3518, 2], [3518, 5381, 3], [5381, 7846, 4], [7846, 10760, 5], [10760, 12859, 6], [12859, 15279, 7], [15279, 17391, 8], [17391, 18516, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 18516, 0.0]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
d0c2a610032333c8d198c1beafa0e1866e98e679
Software Challenges and Keys to Success
Sept 2016
Joe Heil
Lead of the Naval Software Community of Practice (SW COP)
Lead of the Naval Syscom System Engineering Stakeholders Group Software Working Group (SESG SW-WG)
Chief Engineer and Principal Software Engineer for the Strategic and Computing Systems Dept
Naval Surface Warfare Center Dahlgren Division (NSWCDD)
540-653-1937 joe.heil@navy.mil

Presenter Bio & Presentation Sources

Presenter: Joe Heil
- Over 30 years of applied Naval Warfare Systems Software Development and Leadership
- 20 years as a SW engineer for the Tactical Tomahawk Missile Weapon Control System (TTWCS)
- Software Developer, Group Lead, Branch Head, and Software Integrated Product Team (IPT) Lead
- **Current software leadership responsibilities:**
  - Chief & Principal Software Engineer for the NSWCDD* Strategic and Computing Systems Dept
  - Lead: Naval Software Community of Practice (SW COP)
  - Lead: Naval System Engineering Stakeholder Group (SESG) Software Working Group (SW-WG)
- **Primary Goal:** Improving naval software system acquisition and engineering success via increased awareness and application of best software engineering practices

Presentation Information Sources
- **First hand experience leading complex software intensive warfare system development efforts**
- **Over last few years:** Chaired/Panel-Member for 300+ Warfare System Project Reviews
  - Wide range of development approaches (Strategic, Waterfall, Agile, Rapid, Prototyping, etc)
  - Wide range of Warfare Systems (Missiles, Guns, Lasers, Directed Energy, Non-lethal, Simulations, etc)
- **Numerous Studies & Reports**
  - Defense Science Board, Gov't Accounting Office, DASN/RDTE, Crosstalk, etc

* NSWCDD: Naval Surface Warfare Center, Dahlgren Division

Presentation Objectives and Content

❖ OBJECTIVES: Provide awareness of:
− Common software system acquisition and engineering challenges
− Proven techniques for software system acquisition and engineering success
− Naval Software Improvement and Collaboration Efforts

❖ CONTENT
− Context information
− Challenges
− Keys to Success
− Improvement Interactions
− Summary

NSWCDD provides full spectrum system engineering & development for Naval warfare systems. This includes hands-on development of system and software requirements, architecture, design, and code; as well as system integration, test, and operational support responsibilities.

**Context: Naval Surface Warfare Center Dahlgren Division (NSWCDD)**

**Software Intensive Warfare System Development and Success**

<table>
<thead>
<tr>
<th>Rapid Development</th>
<th>Tactical Development</th>
<th>Strategic Development</th>
</tr>
</thead>
<tbody>
<tr>
<td>Detect-Track-Engage Systems (Lasers, Guns, Non-Lethal)</td>
<td><strong>Surface Warfare Mission Module (Littoral Combat Ship)</strong><br>Tomahawk Weapon Control System<br>Anti-Submarine Warfare System<br>Others..</td>
<td><strong>Submarine Ballistic Missile Mission Planning and Fire Control Systems</strong></td>
</tr>
</tbody>
</table>

*Also develop significant Simulations and Models: Ship Motion, Missile Flight, System Interfaces, Modeling/Sim Framework*

**Demonstrated Complex Mission-Critical Software Engineering Success**
- **Operational Success:** Thousands of successful Tomahawk Strikes and Battle Management System precision Strikes
- **Operational Success:** Detect-track-engage systems (Gunslinger, Wolfpack, Battle Management System, ...)
- **Rapid technology transfer:** Deployed Laser Weapon System Quick Reaction Capability (LWC-QRC)
- Decades worth of successful ballistic missile test launches
- Award winning (Al Gore Hammer Award) modeling and simulations
- **High Quality Software:** Defect ratios consistently less than 1 defect per KSLOC (thousand source lines of code)
- **Open Architected Systems:** Common, scalable, reliable, multi-platform capable software architectures

*Not the complete set of NSWCDD software systems and products; just a small sample*

Software Intensive Warfare Systems Challenges

**Software Size, Reliance, Complexity, and Cyber Threats**
**Cost, Schedule, & Technical Challenges**
**Government In-House Applied Software Expertise**

Challenges include ever **increasing:**
- Demand for faster and cheaper development and delivery of systems to meet emergent needs & threats
- Pressure to "cut corners" and not utilize rigorous and disciplined data-driven best practices/processes
- Cyber warfare threats and associated Information and Software Assurance requirements
- System and System-of-Systems (SOS) complexity and inter-dependencies
- Rapid evolution of software technologies and methodologies

**Primary Goal:** Consistently deliver high quality and reliable software systems as efficiently as possible

* Verified in documented performance result reports of DOD software system acquisition

Software Challenges: Project Execution
- Limited use of mature, data-driven, best-practice based software project execution and continuous improvement
- Poor software effort estimation and tracking processes (not metrics based)
- Subjective assessment of cost, schedule, quality, and security versus metrics based
- Program leadership with limited applied SW expertise; software treated as a "black box"
- Poor communication
- Poor requirements management; significant requirements volatility
- Software architects not included in early system engineering phases
- Poor software architecture: not enough time spent on the initial architecture/design; jump-into-coding
- Misuse of software prototyping (failure to throw away or formalize prototype code)
- Lack of formalized risk management processes
- Limited utilization of government in-house applied software expertise; over-reliance on industry

Not a complete set, but includes some of the common major contributors to cost, schedule, and technical failures. Based on numerous DOD/DON studies/reports and first hand experience (300+ project reviews over the last few years).

Software Challenges: Cyber Threats

Reactive cyber threat mitigation process
- Focus on Information Assurance (IA) versus **Software Assurance (Quality, Security, and Resiliency)**
- Numerous disjointed and non-coordinated "Cyber/IA/IT" organizations; proliferation of policy
- Numerous cyber tools; challenges to acquire/fund/train/integrate tools into development processes
- Software security and resiliency were not designed into legacy systems / cannot afford to re-architect
- RDTE systems treated the same as business IT systems; cumbersome certification requirements limit flexibility and responsiveness

**The current "find-fix" cyber mitigation approach is reactive, slow, and costly.** Cyber vulnerabilities are increasing. Resiliency is not designed in. It is very expensive to fix vulnerabilities after deployment.

System Vulnerability Window: Months to Years

<table>
<thead>
<tr>
<th>Threat Created</th>
<th>Threat Deployed</th>
<th>Threat Disclosed</th>
<th>Correction Available</th>
<th>Correction Deployed</th>
</tr>
</thead>
</table>

Unclassified Distribution A:
Approved for public release; distribution is unlimited.

Keys to Software Engineering Success*
Mature Documented Data-Driven Processes
Work-Unit Based Software Effort Estimation and Tracking
Open Architecture
Agile/Rapid Development (Build-a-Little Test-a-Little)
Simulations and Data Extraction
Government and Industry Software-Development Teams
Engineer-In Software Assurance (Security, Quality and Resiliency)
Communication & Collaboration

* Not the complete set of best practices; just a selected subset of proven techniques

Keys to Software Success: Mature, Documented, and Measured Processes
Capability Maturity Model Integration (CMMI) (a framework for continuous improvement)
1. Define / Refine Process & Metrics
2. Estimate Cost, Schedule, Quality
3. Track Actual Cost, Schedule, Quality
4. Analyze Metrics
5. Optimizing (Metrics Driven)

Technical expertise coupled with data-driven processes facilitates consistent delivery of high quality software systems on schedule and within budget.

Keys to Project Execution Success: Best-Practice-Based System Engineering Processes
System Engineering includes Cost, Schedule, & Technical Performance
Best Processes:
- Project Planning
- Project Monitoring and Control
- Risk Management
- Integrated Product Teams (IPT)
- Configuration Management
- Requirements Development and Management
- Product Architecture
- Trade Studies
- Product Integration
- Verification
- Validation

Of projects deploying the most SE best practices, over 50% delivered higher project performance. System Engineering best practices improve project execution. System Engineering best practices include Project Planning & Control.

# Metrics Driven Project Execution

**GOAL:** Deliver High Quality Products on Schedule and Within Budget

<table>
<thead>
<tr>
<th>Question</th>
<th>Metric Examples (Not the complete set)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Is the scope of the effort understood?</td>
<td>Requirements Impacted (New, Modified, Deleted)</td>
</tr>
<tr>
<td></td>
<td>Estimated Hours/Dollars per Activity (SE, SW, Test, etc)</td>
</tr>
<tr>
<td>Are the requirements understood, documented, allocated, and stable?</td>
<td>Requirements Volatility over time</td>
</tr>
<tr>
<td></td>
<td>Requirements Allocation to Org's and Eng/Test Activities</td>
</tr>
<tr>
<td>Is the project adequately staffed?</td>
<td>Staffing profiles (by discipline and org)</td>
</tr>
<tr>
<td></td>
<td>Skill sets required versus on-board and available</td>
</tr>
<tr>
<td>Is the project on schedule?</td>
<td>Integrated Master Schedule and Detailed Schedules</td>
</tr>
<tr>
<td></td>
<td>Planned vs. Actual Progress w/ variance explanations</td>
</tr>
<tr>
<td>Is the project on budget?</td>
<td>Planned vs. Actual Cost w/ variance explanations</td>
</tr>
<tr>
<td>Is the project meeting quality goals?</td>
<td>Defect Detection and Closure Trends, Defect Ratios</td>
</tr>
<tr>
<td></td>
<td>Requirements and Tests Passed vs.
failed</td> </tr> <tr> <td>Is the project FORMALLY managing risks?</td> <td>Open versus Closed Risk Trends</td> </tr> <tr> <td>Is the project continuously improving?</td> <td>Cost, Schedule, and Quality variance reductions</td> </tr> </tbody> </table> **Metrics should be:** - Utilized in all activities, and by all organizations - Proactively and regularly utilized (not just at milestone reviews) - Documented, well defined, value added, and easily understood - Supported by a Software Level Work Breakdown System (WBS) - Capable of being rolled up to higher levels of abstraction - Continuously assessed and improved SLOC based estimation approach relies on several _unrealistic_ assumptions: - Software engineers can accurately estimate hundred-of-thousands to millions of SLOC level efforts - SLOC based productivity factors (SLOC per Hour) are based on accurate & relevant historical data - SLOC productivity is indicative of other engineering activity productivity (Req's, Design, Test, etc) - Constant effort relationship between SW activities and other engineering activities' Software Success Work-Unit Based Effort Estimation and Tracking For Each Software Component and each SW development Activity (Requirements, Design, Code, Test): **DEFINE WORK-UNITs (WU)** Define Productivity (P) (Hours per Work Unit) **ESTIMATE WORK UNITS** - Utilize historical data from previous similar efforts **CALCULATE EFFORT (Hours)** Development Activity Hours = Estimated Work-Units * Productivity Factor SW Component Hours = Sum of ALL Dev Activity Hours Incremental Build Hours = Sum of ALL SW Component Hours **IMPROVE ESTIMATION** Compare Estimates to Actuals - Revise Work Units - Revise Productivity Factors **TRACK PROGRESS** Track Work Units Produced Track Actual Productivity Revise as required **Work Unit Based Estimation & Tracking** <table> <thead> <tr> <th>Activity</th> <th>Estimation Method</th> </tr> </thead> <tbody> <tr> <td>REQ's</td> <td>Hours per Requirement, Hours per Interface, etc.</td> </tr> <tr> <td>DESIGN</td> <td>Hours per Object, Hours per DODAF view, etc.</td> </tr> <tr> <td>CODE</td> <td>Hours per Object, SLOC per Hour</td> </tr> <tr> <td>TEST</td> <td>Hours per Test Development, Hours per Execution</td> </tr> </tbody> </table> Define a “Work Unit(s)” per SE activity Define associated productivity factor Per each Work Unit **System Eng “V” Chart** Government Develops and owns the core Architecture, Interfaces, and software Industry develops the “Plug and Play” Sensors, Launchers, Weapons Components OA: A system composed primarily of common software that can be utilized across a wide range of platforms with minimal changes Aligns with ASN RD&A Memo to the SYSCOMs/Chief of Naval Research “Use of In-House Engineering and Technical Requirements” 23 Feb 2012 Software Development Best Practices ACTIVITIES: - Mission Level Req’s - Mission Level Architecture - Mission Level Interfaces - System Level Req’s - System Level Architecture - System Level Interfaces - Early System Level Testing - Prototypes, Models, Simulations, Frameworks, ... 
- Software Level Req’s - Software Level Architecture - Software Level Interfaces - Early Software Component Level Testing - Representative (if possible) HW and SW Components; - Utilize Simulations and I/F drivers (GO AND FAULT PATHS) - Cyber security and System Resiliency Testing - Software Components - Software Components - Early Software Component Level Testing - Representative (if possible) HW and SW Components; - Utilize Simulations and I/F drivers (GO AND FAULT PATHS) - Cyber security and System Resiliency Testing - Integrated System Build - Integrated System Test Builds - Software / System Integration Testing - Actual and Representative HW and SW Components; - Augmented by Simulations and I/F drivers - Cyber Security, Penetration Testing, and System Resiliency Testing - Platform based System Testing - Actual Hardware and Software - Operational Support - Well defined SW Support Activity - Quick recovery from exploited threats Program Plan must include post-delivery support approach: Organizational responsibilities, funding, problem tracking User-Centered and Risk Focused System/Software Engineering - Complete Requirements Traceability and Configuration Management - Formal Risk Management - Adherence to Data-Driven Best Practices - Early & Often Verification and Validation - Prototyping - Models and Simulations - Automated testing - Data Extraction & Analysis Tools Design-In - Software Security and Resiliency - Automated Testing in early activities - Utilize Cyber Security Tools Defense-In-Depth: - Protect-Detect-Isolate-Endure-Recover - Limit interfaces to external systems - Identify and harden (firewall) critical control points (interfaces) - Add processing to Detect cyber intrusions - Isolate and limit consequences (protect mission critical components)- - Endure through the Cyber intrusion to successfully complete the mission - Return the system to a trusted state The Diagram represents a flowchart of the various stages and activities involved in software development, including mission level, system level, and software level requirements. It highlights the importance of early testing, including prototypes, models, and simulations, as well as the need for integrated system builds and testing. The diagram also emphasizes the importance of adhering to data-driven best practices and ensuring effective risk management throughout the development process. Additionally, it underscores the necessity of protective measures such as limiting interfaces to external systems, identifying and hardening critical control points, and adding processing to detect cyber intrusions. The final phase includes operational support and well-defined support activities to ensure a quick recovery from exploited threats. **Rapid Software Development Goals** - **Get Capability to Warfighter More Quickly** - Short-term releasibility - Providing an early version of the software with a subset of its ultimate functionality - Requirements flexibility - Ability to quickly change requirements while the software development is in progress. **NOTE:** Agile development may not be the appropriate approach for all projects. Program Leaders must assess if Agile is the best approach for their specific program needs and structure. 
Rapid Software Development GOAL: Each sprint is working software, subset of capabilities, & capable of being delivered Requires frequent regular communication between user, sponsor, & developers - Rapid /Agile development is NOT an excuse to not have: - Documented data-driven processes - Cost, schedule, and technical performance measures - Requirement, Arch, Design and Test documentation **EACH SPRINT** - **Daily Scrum Meeting** - **Product Backlog** - **Sprint Backlog** - **24 Hours** - **2-4 Weeks** - **Potentially Shippable Product Increment** - **REQ** - **Test** - **Design** - **Code** Image available at www.mountaingoatsoftware.com/scrum Copyright © 2005, Mountain Goat Software **Software Success Simulations and Data Extraction** **Key** - Tactical Computer Software Configuration Items (CSCIs) - Simulations - Data Extraction **Simulations** Must support both “go path” and “fault’ path scenarios - Fault Paths: Send data out of sequence, out of bounds, at high rates, etc **Data Extraction** All interface data, critical internal states and data **Data Reduction Program** - Must include effort to develop/modify simulations in cost and schedule planning - Developed using disciplined processes; and ideally, by independent team Many DOD/DON Program Managers have limited applied software expertise. Must have some gov’t engineers responsible for actual development (not just contractor oversight) - Provides Program Manager with **business and technical advantages** - Facilitates controlling cost: government is **not solely reliant on industry expertise** - Provides Industry with a **true technical peer** to help negotiate cost, schedule and technical approach **Government hands-on software development is required to:** - Perform as a smart buyer and successfully team with industry - Maintain expertise with the latest technologies **Gov’t In-House Applied Expertise Pipeline** **System Development Responsibility & Complexity** - **Mission and Systems-Of-Systems Level** - **System Level** - **System Component Level** - **System Sub-Components Level** **Time / Experience** **Hands-On Applied Expertise at all levels of complexity** **Software Development Responsibility** - **DEFINE System Req’s** - **ARCHITECT System & Software** - **DEVELOP System & Software** - **INTEGRATE & TEST System** **Gov’t Software Experts team with Industry SW Experts to Define, Design, Develop, and Deliver Software Systems** **Teaming With Industry** **Hands-On Development** - Perform as a smart buyer and successfully team with industry - Maintain expertise with the latest technologies Government and Industry Software Development Teaming Win - Win – Win - Win - **Warfighter** - Faster receipt of capabilities - Increased capabilities - Higher quality and more reliable systems - **Government Program Offices** - Improved Technology, Cost, and Schedule Estimates and Assessments - Increased and maintained corporate knowledge - Increased acquisition leverage and flexibility (more competition) - **Industry** - Improved proposal assessments (smarter partner, not just lowest bid wins) - Reduced risk (smarter partner, improved requirements, government accountability) - *More profit* (less dollars on rework; increased system production and upgrades) - **Taxpayer** - Better utilization of tax dollars - High quality, reliable, secure systems = Better protection of serving family members *This software development teaming approach has been consistently successfully utilized for some of the Navy’s most critical warfare systems* 
ACTIVITIES: - Mission Level Req’s - Mission Level Architecture - Mission Level Interfaces - System Level Req’s - System Level Architecture - System Level Interfaces - Software Level Req’s - Software Level Architecture - Software Level Interfaces - Software Components - Software Components - Integrated System - Integrated System - Integrated System - System Testing - Operational Support Include senior level software architects during initial SE efforts Throughout the system engineering process: Utilize integrated multi-discipline product teams: Include: User & Operator Rep’s, System Engineering, Software Engineering, Test, Logistics, Safety, etc. Communicate, Communicate, Communicate! Identify & mitigate cost, schedule and technical risks Daily “stand-up” meetings Weekly: discipline specific (SE, SW) Focused cost, schedule and technical Performance and Risk meetings Structured & Metrics Based Communication Software Assurance: Best Practices “Engineer-In” Quality, System Survivability, Security, and Resiliency **GOAL:** Define, develop and deliver **high quality, secure and resilient systems** Programs should designate a Chief SOFTWARE Architect in addition to Chief System Engineer Define Assurance and Resiliency Requirements **Architect/Design resilient “defense-in-depth” systems** Protect-Detect-Isolate-Endure-Recover Utilize tools to remove vulnerabilities during development Define and utilize metrics and tools to quantify sw vulnerability, survivability, resiliency, and risk Utilize data-driven best software engineering practices Include independent Audits to ensure best practices --- **Train the workforce on both sw best practices and Cyber vulnerabilities, threats, tools, secure coding, resilient design, implementation** Software Assurance Software reliance, complexity and cyber threats are increasing Software Assurance focus must be more than just cyber security Information Assurance (IA) compliance does NOT equal cyber security There is no single “silver bullet” to ensure software quality and security SW Assurance must be addressed throughout the system engineering life-cycle Software Assurance requires application of software best practices, tools, and measures Cannot “test-in” SW assurance, must “design-in” quality, security, and resiliency Key to Success: Increased communication and collaboration SW Assurance Definition (DOD): The level of confidence that software functions as intended and is free of vulnerabilities, either intentionally or unintentionally designed or inserted as part of the software throughout the lifecycle. (per DoDI 5200.44) SW Assurance Definition (Software Engineering Institute) "Application of technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner, are free from accidental or intentional vulnerabilities, provide security capabilities appropriate to the threat environment, and recover from intrusions and failures." 
Software System Characteristics Goals (more than just “secure”) - Secure, Safe, Reliable, Modular, Maintainable, Scalable, Portable, Defect Free - Resilient- Meets war fighter mission critical needs despite cyber intrusion Facilitate Awareness and Application of Best Software Engineering Practices via Increased Communication and Collaboration Naval Software Community of Practice (SW COP) Executive Summary - **Background:** In 2009, NSWCDD initiated the establishment of a Naval Software Community of Practice (SW COP) - **Goal:** Improve Naval warfare software system’s cost, schedule, and technical performance. - Share best practices, processes, tools, techniques and artifacts - Provide resources to help solve complex technical software problems - Provide access to software engineers with specialized expertise - Provide awareness of software laws, policies, guides and requirements - Promote maintaining government in-house applied software expertise - **Approach:** Increase communication and collaboration between government sw experts - **Participation** 280+ participating software experts from 17 different organizations - **Results:** 800+ artifacts posted to the knowledge sharing site - **Results:** 850+ hours saved via collaboration **PRODUCTS:** SW COP provided inputs to DoD and DoN policies and guides: - OSD Program Protection Policy (PPP) SW Assurance (IN-WORK) - DOD SEI SW Sustainment Study (In-Work) - Office of the Secretary of Defense (OSD) Defense Acquisition Guide (DAG) - Naval Open Architecture (OA) Contracts Guide - Naval Open Architecture (OA) Metrics Guide - Naval System Engineering Guide: Software Sections - Naval Guidebook for Acquisition of Software Intensive Systems (SW Guidebook) Keys to Software System Acquisition Success Summary Gov’t SW engineers Hands-On Full Spectrum Engineering Technical Expertise coupled with Data-Driven Continuous Improvement Applied Technical Expertise 1. Define & Refine Process & Metrics 2. Estimate: Cost, Schedule, Quality 3. Track Actuals: Cost, Schedule, Quality 4. Analyze Metrics Management Processes METRICS DRIVEN High quality products Delivered on time & within budget Continuous data-driven improvement Technical Processes Project Reviews Quarterly Execution Reviews I. SYSTEM ANALYSIS & CONTROL 1. Define & Refine Process & Metrics 2. Estimate: Cost, Schedule, Quality 3. Track Actuals: Cost, Schedule, Quality 4. Analyze Metrics Applied Technical Expertise Gov’t Experts Teaming With Industry Software Development Responsibility HANDS-ON DEVELOPMENT DEFINE System Req’s ARCHITECT System & Software DEVELOP System & Software INT/TEST System RESULT: Government Owned & Controlled System Arch & Software Best Practices Maintaining and utilizing gov’t in-house applied software engineering expertise Disciplined data-driven project execution and continuous improvement Disciplined requirements management Formal Risk Management Open architected and defense-in-depth architected systems Agile / Rapid / Build-a-little Test-a-little development methodologies Design-in Software quality, Security and Resiliency Increased communication and collaboration; sharing of best-practices Utilize Data-Driven Best Practices and Maintain In-House Applied Software Expertise Data Driven Success “In God we trust, all others must bring data”. W.E. Demming “For every opinion, there is an equal and opposite opinion; but, for every fact there is not an equal and opposite fact” L. Albuquerque “Without data, you are just another person with an opinion”. 
Unknown “You cannot expect what you do not inspect” MARCOR proverb “Trust but verify” Ronald Reagan Software Challenges Increasing Software Size and Complexity Many Warfare System Program Managers do not have applied expertise with software engineering. Software is often treated as a “black box”; software size and complexity is not understood. Software Challenges Dept Of Defense (DOD) Software System Acquisition Results DOD SOFTWARE SYSTEM ACQUISITION PERFORMANCE - 100% Capabilities Delivered - 96% Development $ Spent - 50% Operational Testing - 50% Exceeded Nun-McCurdy - 84% On Schedule And Budget - 39% Not Delivered - 40% Spent on Rework Due to SW Problems - 50% Failed Initial Op-Tests - 50% Exceeded Cost Threshold Failures: Cost, Schedule or Technical Performance Success: Cost, Schedule or Technical Performance References: Secretary of Defense (SECDEF), SECDEF Memo: "Department of the Navy Acquisition", December 2008. Senator Carl Levin, U.S. Senate Committee of Armed Services Press Release, March 2009 Software intensive warfare system engineering efforts are not consistently successful with regards to cost, schedule, and technical performance **Assertion:** Cannot assume that all cyber vulnerabilities and threat vectors are known. Cannot assume that systems are 100% secure and can be protected from penetration. **Utilize Defense-In-Depth and Design-In System Resiliency** **Protect-Detect-Isolate-Endure-Recover** - Limit interfaces to external systems - Identify and harden (firewall) critical control points (interfaces) - Add processing to Detect cyber intrusions - Isolate and limit consequences (protect mission critical components) - Endure through the Cyber intrusion to successfully complete the mission - Return the system to a trusted state Software Success Open Architecture (OA) Characteristics Open Architected Software System: A system composed primarily of common software that can be utilized across a wide range of platforms with minimal changes * Reference: OA Architectural Principles and Guidelines v 1.5.6, 2008, IBM, Eric M. 
Nelson, Acquisition Community Website (AC) DAU Navy OA Website Composability The System Provides Recombinant Components that can be Selected and Assembled in Various Combinations to Satisfy Specific Requirements Interoperability Ability of Two or More Subsystem to Exchange Information and Utilize that Information Open Standards Standards that are Widely Used, Consensus Based, Published and Maintained by Recognized Industry Standards Organizations Maintainability The Ease With Which Maintenance of a Functional Unit can be Performed in Accordance With Prescribed Requirements Extensibility Ability to add new Capabilities to System Components, or to add Components and Subsystems to a System Modularity Partitioning into Discrete, Scalable, and Self-Contained Units of Functionality, With Well Defined Interfaces Diagram Key - is Enabled by - is Facilitated by Open Systems Facilitate Achieving the Following: - Reduce life cycle costs, Reduce acquisition time - Increase system reliability, maintainability, and quality - Increase competition (small business develops components) Best Practices: Communication Well defined and *documented* Statement of Work (SOW): - Clearly Defined Tasking, Roles, Responsibilities and *Deliverables* - Protect and Ensure government ownership rights of software deliverables Establish and maintain frequent periodic structured communication - Structured (standardized) agenda and data/metric based reporting (not subjective) - Maintain planned vs. actual cost, schedule, and technical performance metrics Document and track Risks: Utilize Risk Management Tools (e.g. Risk Exchange) Ensure Cross Organizational and Engineering Discipline Communication - Utilize Integrated Product Teams (IPTs): Customer, Users, System Eng’s, Software, Test, etc Daily “stand-up” meetings with Program and Technical Leads: Short, Concise, Risk focused Weekly specific engineering discipline (e.g. SE, SW) meetings - Cost, Schedule, Technical Performance and Risk meetings Event Driven Project Reviews: Technical Reviews, Delivery Readiness, Project Completion *Communicate, Communicate, Communicate!* *Identify, communicate, and mitigate cost, schedule, and technical risks*
{"Source-Url": "https://dauaa.org/wp-content/uploads/2016/11/HTFPrezSept2016.pdf", "len_cl100k_base": 6467, "olmocr-version": "0.1.50", "pdf-total-pages": 34, "total-fallback-pages": 0, "total-input-tokens": 64415, "total-output-tokens": 7871, "length": "2e12", "weborganizer": {"__label__adult": 0.0003170967102050781, "__label__art_design": 0.00023365020751953125, "__label__crime_law": 0.000507354736328125, "__label__education_jobs": 0.0015363693237304688, "__label__entertainment": 5.632638931274414e-05, "__label__fashion_beauty": 0.00013077259063720703, "__label__finance_business": 0.0008840560913085938, "__label__food_dining": 0.0002899169921875, "__label__games": 0.0007991790771484375, "__label__hardware": 0.0005578994750976562, "__label__health": 0.0001932382583618164, "__label__history": 0.00013911724090576172, "__label__home_hobbies": 6.502866744995117e-05, "__label__industrial": 0.0003659725189208984, "__label__literature": 0.00011271238327026369, "__label__politics": 0.0002846717834472656, "__label__religion": 0.00017690658569335938, "__label__science_tech": 0.0031185150146484375, "__label__social_life": 9.334087371826172e-05, "__label__software": 0.00829315185546875, "__label__software_dev": 0.98095703125, "__label__sports_fitness": 0.000247955322265625, "__label__transportation": 0.0004477500915527344, "__label__travel": 0.00014543533325195312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32512, 0.00637]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32512, 0.04485]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32512, 0.86584]], "google_gemma-3-12b-it_contains_pii": [[0, 399, false], [399, 1784, null], [1784, 2153, null], [2153, 2426, null], [2426, 3914, null], [3914, 4780, null], [4780, 5881, null], [5881, 7058, null], [7058, 7539, null], [7539, 8003, null], [8003, 8663, null], [8663, 11843, null], [11843, 12309, null], [12309, 13505, null], [13505, 13922, null], [13922, 17059, null], [17059, 17577, null], [17577, 18282, null], [18282, 18841, null], [18841, 20204, null], [20204, 21178, null], [21178, 22104, null], [22104, 22951, null], [22951, 24409, null], [24409, 24531, null], [24531, 25930, null], [25930, 27475, null], [27475, 27857, null], [27857, 27857, null], [27857, 28104, null], [28104, 29389, null], [29389, 30003, null], [30003, 31394, null], [31394, 32512, null]], "google_gemma-3-12b-it_is_public_document": [[0, 399, true], [399, 1784, null], [1784, 2153, null], [2153, 2426, null], [2426, 3914, null], [3914, 4780, null], [4780, 5881, null], [5881, 7058, null], [7058, 7539, null], [7539, 8003, null], [8003, 8663, null], [8663, 11843, null], [11843, 12309, null], [12309, 13505, null], [13505, 13922, null], [13922, 17059, null], [17059, 17577, null], [17577, 18282, null], [18282, 18841, null], [18841, 20204, null], [20204, 21178, null], [21178, 22104, null], [22104, 22951, null], [22951, 24409, null], [24409, 24531, null], [24531, 25930, null], [25930, 27475, null], [27475, 27857, null], [27857, 27857, null], [27857, 28104, null], [28104, 29389, null], [29389, 30003, null], [30003, 31394, null], [31394, 32512, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32512, null]], 
"google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32512, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32512, null]], "pdf_page_numbers": [[0, 399, 1], [399, 1784, 2], [1784, 2153, 3], [2153, 2426, 4], [2426, 3914, 5], [3914, 4780, 6], [4780, 5881, 7], [5881, 7058, 8], [7058, 7539, 9], [7539, 8003, 10], [8003, 8663, 11], [8663, 11843, 12], [11843, 12309, 13], [12309, 13505, 14], [13505, 13922, 15], [13922, 17059, 16], [17059, 17577, 17], [17577, 18282, 18], [18282, 18841, 19], [18841, 20204, 20], [20204, 21178, 21], [21178, 22104, 22], [22104, 22951, 23], [22951, 24409, 24], [24409, 24531, 25], [24531, 25930, 26], [25930, 27475, 27], [27475, 27857, 28], [27857, 27857, 29], [27857, 28104, 30], [28104, 29389, 31], [29389, 30003, 32], [30003, 31394, 33], [31394, 32512, 34]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32512, 0.04771]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
7ee2321224bb1c4439f388b9531e68f180053d2a
Survey on Application Security Programs and Practices
A SANS Analyst Survey
Written by Jim Bird and Frank Kim; Advisor: Barbara Filkins
February 2014
Sponsored by Hewlett-Packard, Qualys and Veracode
©2014 SANS™ Institute

This is the SANS Institute’s second survey on application security programs and practices. In this year’s survey, we wanted to uncover answers to the following questions:
- How widespread are application security programs, and how mature are the programs that are in place today?
- How effective are these programs?
- What practices and tools are organizations relying on the most today, and what are they finding the most useful?
- How is secure coding training for developers being done, and how effective is this training?
- How are people justifying spending on Appsec, and where are they spending most of their efforts? Does this spending align with organizational risk?
- What will the future of Appsec look like? Are organizations planning to invest more in Appsec? And what programs or technologies are on their future roadmaps?

We asked some of the same questions in our first survey on application security practices, just in a different way. Some of the trends we identified include the following:
- There was a significant improvement in the number of organizations implementing application security programs and practices. The percentage of organizations that have an active Appsec program increased from 66% last year to 83% this year—and many of the organizations that do not have a program in place yet are at least following some kind of ad hoc security practices.
- Organizations continue to rely heavily on dynamic testing, vulnerability scanning and penetration testing to find security vulnerabilities.
- Organizations are testing more frequently. In this year’s survey, more than one-third are doing continuous, ongoing security testing of their applications, whereas only 23% indicated doing so in our 2012 survey.
- The primary focus of most Appsec programs continues to be web applications, because this is where organizations see the highest security risks.
- Organizations continue to face the same kinds of challenges in getting management buy-in for application security programs. But the leading inhibitor for putting effective Appsec programs in place is now a shortage of application security skills, whereas in last year’s survey, the leading inhibitor was management buy-in and funding. In this year’s survey, organizations also ranked technical resources to maintain security in production as their fourth most difficult problem.

1 www.sans.org/reading-room/analysts-program/sans-survey-appsec

The 488 respondents to this survey represented a broad range of industries. In this year’s survey, financial services (17%), government (15%), “other” (13%) and high-tech firms (9%) led the way; similarly, in last year’s survey, financial services and government were tied at 17% each and high-tech followed the “other” category. Although not next in terms of representation, it is noteworthy that 6% of respondents came from application development houses.
Figure 1 illustrates the diversity of the industries represented in this survey. Application security is a consideration for every organization, regardless of size. Small and mid-size organizations and large enterprises were all included in the survey, as illustrated in Figure 2. One-quarter of the respondents worked in very large enterprises of more than 15,000 people, and almost 39% were from organizations with 1,000 or fewer people, lending a representative sampling of organizational size to the survey results. We also asked participants to identify the principal role they play in their organization (whether as a consultant or an employee). Most respondents were from the security community, as shown in Figure 3. Security analysts or security managers made up 44% of the sample. Software developers (developers, engineers, architects and testers) accounted for 12%, and IT managers and executives also accounted for 12%. IT operations was also well represented, with 14% of the respondents in system admin or network engineering. Approximately 28% of the respondents are in a management or executive role. To further refine our understanding of survey responses, we wanted to know how big the application development teams were in our responding organizations. Figure 4 shows the number of developers employed by responding organizations. Although 10% of respondents had software development organizations with more than 2,000 developers, 30% of development teams were small, with fewer than 25 developers—and 6% of respondents had no developers at all, relying completely on third parties for software development. There seems to be a distinction between the practices for designing and developing applications. Although most organizations design their systems internally, either using their own employees (75%) or consultants (38%), fewer use internal employees (52%) or consultants (33%) to develop the applications after design, as shown in Figure 5. Only 18% of respondents hire third-party firms to complete their application design work, and 22% hire third parties to do their development work. A full 41% of respondents also rely on commercial off the shelf (COTS) applications, and just under 24% of firms rely on open source software. Application Development Priorities Where are organizations spending most of their development dollars? Web applications and business-critical apps, which are often the same, (both at 67%) stand well above the others as recipients of development dollars, as shown in Figure 6. *Figure 6. Software Development* Mobile applications (28%) are becoming a major focus for organizations, ahead of spending on legacy apps (25%). Application Security Risks In last year’s survey, we asked what kinds of applications posed the biggest security risks to an organization. In order, the results were: - Customer-facing web apps (by far the highest risk) - Internal web apps - Mobile apps - Legacy apps and CRM/databases (usually accessed through Web and mobile channels) This year’s survey didn’t distinguish between types of web apps, but it’s clear that the highest security risk continues to come from web applications, with 38% selecting this as their biggest application risk area, and business-critical applications (19%), as shown in Figure 7. ![Pie chart showing application security risks] Mobile risk has slipped in the ranking, with only 6% feeling that to be their biggest risk; only 7% see cloud-based services as a major security risk. 
Organizations also continue to downplay the risks of working with third parties, whether COTS providers (8%) or outsourced development organizations (3%). Application Security Programs We wanted to know how many Appsec programs are in place, how long they have been in place, how administrators justify their programs, and what practices and tools people rely on the most. Maturity of Appsec Programs Almost 74% have programs that have been in place for at least one year, and more than one-third (37%) have programs that have been running for more than five years (see Figure 8). Even in organizations that don’t have a formal program today, most (79% of those without a formal program) are following ad hoc Appsec practices. The number of organizations with an active Appsec program has increased significantly over the past year. Table 1 shows how the maturity of programs has changed since our 2012 survey. Table 1. Growth in Appsec Programs <table> <thead> <tr> <th>How Long Has Your Appsec Program Been in Place?</th> <th>2012</th> <th>2014</th> </tr> </thead> <tbody> <tr> <td>No formal program</td> <td>34.3%</td> <td>16.9%</td> </tr> <tr> <td>Less than 1 year</td> <td>9.8%</td> <td>9.0%</td> </tr> <tr> <td>1 to 5 years</td> <td>32.9%</td> <td>36.7%</td> </tr> <tr> <td>More than 5 years</td> <td>22.9%</td> <td>37.3%</td> </tr> </tbody> </table> Justification of Appsec Program Support Earlier this year, John Pescatore at the SANS Institute analyzed the different approaches and tools that organizations can use to secure management support and funding for an application security program. He reported the following options: - Using a publicized incident to illustrate risk/benefit - Managing regulatory pull—meeting regulatory requirements such as PCI, NIST, HIPAA, FDA and NERC - Taking advantage of industry governance standards (ITIL, COBIT and ISO 27034) - Capability Maturity Models (Cigital’s BSIMM or OWASP’s OpenSAMM) - Industry benchmarking Our survey results quantify the use of these options. Organizations are taking proactive and reactive approaches to justifying application security spending, as illustrated in Figure 9. ![Figure 9. Justifications for Appsec Spending](image-url) Risk analysis based on industry benchmarks is used to justify spending by 43% of organizations, and 21% benchmark spending to justify their programs. Reactive approaches include justifying spending in response to audit findings (39%), a security incident (26%) and customer demands (25%). Responding to customer demands is a driver that we identified in last year’s survey: Organizations, especially large enterprises, are being pushed more by their customers, and are, in turn, pushing their software and software-as-a-service (SaaS) suppliers to implement responsible Appsec programs. Costs for Appsec programs are being included in general IT security programs 33% of the time, in regulatory compliance programs 31% of the time, and in specific IT programs or project budgets 27% of the time. Only 17% of Appsec costs are included in software quality spending. Most of the justifications for Appsec spending are focused on security, compliance and risk management—not on enabling the business or supporting the business strategy. Spending on application security programs will continue to lag until the information security team can make an explicit connection not just to incidents and hacking or staying up-to-date on compliance requirements, but also to enabling business strategy and meeting customer demands. 
**Support of Appsec Programs** On the whole, Appsec initiatives seem to be aligned with where organizations are spending development and IT dollars—and to where organizations see the greatest risk. The priorities for Appsec security spending are highlighted in Figure 10 and align closely with development spending (Figure 6) and perceived risks (Figure 7). Most organizations are focusing their Appsec programs where it makes the most sense today, on where they are spending most of their development dollars: web apps (80%) and business-critical apps (72%), which are often the same. But they are also trying to keep pace with emerging threats. While only 27% of development/IT resources are being spent on developing mobile apps, 35% of organizations are focusing their Appsec attention on mobile security issues; and application security focus on cloud implementations (23%) matches the amount of development and other IT resources spent in this area (19%). However, even though 23% of respondents rely heavily on third-party software products and services (COTS, cloud-based services and open source software), they are not taking enough responsibility for ensuring the security of third-party solutions. Only 23% of security programs include COTS. The same is true for cloud services, and only 14% focus on open source software. This situation should improve as the security industry continues to highlight the risks of relying on outsourced and third-party providers and the open source community to police themselves. For example, a recent study conducted by Sonatype and Aspect Security on the use of open source software found that more than 50% of Global 500 organizations are using open source code with known security vulnerabilities. In 2013, OWASP added the use of insecure third-party software components to the OWASP Top 10 risk list, a widely used application security risk management tool. The Financial Services Information Sharing and Analysis Center (FS-ISAC) published a set of guidelines that banks and other organizations can use to assess the application security programs of their software and software service providers, and SAFECode and the Cloud Security Alliance released a new set of guidelines for securing cloud applications. 4 www.owasp.org/index.php/Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities 6 https://cloudsecurityalliance.org/media/news/safecode-csa-secure-development-cloud Organizations are using multiple technologies and services in the attempt to protect their applications. In last year’s survey, we found that the technologies or practices most used by organizations in their security programs were (in order of use): static analysis testing, dynamic analysis testing, pen testing, third-party assessments, application firewalls and virtual patching. This year we asked organizations to rate which Appsec tools and practices they found the most useful. The tools and practices that ranked the highest include application penetration testing, testing with dynamic analysis (DAST) or vulnerability scanning tools, and using application firewalls to detect or block attacks, as shown in Figure 11. But organizations are not getting as much value as they should out of other practices, especially virtual patching, secure DevOps, static analysis (SAST) and threat modeling. **Virtual Patching** Virtual patching builds on the effective use of application firewalls, as well as application security testing, and requires the close coordination of Infosec and Operations.
It involves setting up an application firewall in blocking mode, testing and finding vulnerabilities in an online application, taking the testing results and creating signatures or rules for the firewall to block attacks against these vulnerabilities, and implementing these rules in production. Virtual patching is intended to be a temporary solution until the development team can fix the code—or for use when the organization doesn’t have access to the code (for example, patching a security vulnerability in commercial third-party software). But it’s time-consuming and difficult to scale, even when using dynamic testing tools and firewalls that are designed to work together. Secure System Operations/Devops With continued adoption of Agile development and the demand for faster time-to-delivery, we expect more organizations to take up Devops practices such as “infrastructure as code” and Continuous Delivery or Continuous Deployment, which build on standardized configuration management for infrastructure and applications, automated deployment and fast feedback loops between operations and development. Security checks and balances can—and should—be built into all of the steps involved, from automated security testing in Continuous Integration through to deployment checks and run-time security self-tests (following the example of Netflix’s Simian Army). Static Analysis While Infosec can run a dynamic scan or pen test on the system and pass the results back to development to be fixed, SAST (scanning source code or binaries for common security vulnerabilities and bug patterns) requires more hands-on involvement from developers because it works directly on the code. Developers have to assist with setup, take the time to review and understand what the tools find and then weed through all of the false positives before they can begin triage, fix bugs and roll out patches. Although suppliers continue to improve the speed and accuracy of SAST tools and make them easier to use, developers need security training or expert help to understand what the tools are telling them, which vulnerabilities are important, why they need to be fixed and how to fix them. Developers—and managers—need to be convinced that all of this is worth their time. Although bridging the gap between Infosec and development teams and getting developers to use static analysis testing effectively can take time and effort, it can also pay dividends by providing a much faster feedback loop. By running static analysis checks frequently, developers can find out quickly when they have made a mistake—and they can fix the problem while they are still working on the code, rather than waiting days or weeks or months for the results of a penetration test. Finally, the cost of static analysis tools is an issue for many organizations. Good commercial tools are expensive and are generally out of the reach of all but large enterprises, which account for only 25% of the respondents to this survey. Threat Modeling Static analysis testing is one way that organizations can solve security problems early in development. Threat modeling is another. More than 75% of the organizations surveyed design applications in-house. However, only a small percentage of them do threat modeling or find it useful. Threat modeling—understanding and managing security threats in application architecture and design through a structured process that involves developers and security experts working together—demands a significant commitment from the development organization. 
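Returning to the virtual-patching workflow described above (scan the running application, turn a confirmed finding into a blocking rule, deploy the rule to the firewall), a minimal sketch of what such a rule can look like, assuming a ModSecurity-compatible web application firewall; the rule id, file path, URL, and parameter name are hypothetical and would come from a specific scanner finding.

```
# Temporary virtual patch: block non-numeric "id" values sent to /app/report
# until the development team fixes the underlying code.
cat >> /etc/modsecurity/virtual-patches.conf <<'EOF'
SecRule REQUEST_FILENAME "@beginsWith /app/report" \
    "id:900101,phase:2,deny,status:403,log,msg:'Virtual patch: report id must be numeric',chain"
SecRule ARGS:id "!@rx ^[0-9]+$"
EOF
apachectl graceful   # reload so the firewall picks up the new rule
```

The survey’s caveat applies: each such rule is hand-built from one finding, which is why virtual patching is hard to scale and is a stopgap rather than a fix.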
The shortage of application security skills noted earlier is also a major limiting factor here. It is difficult to find security engineers who understand application design and architecture and application architects who understand security risks in application design. Organizations need less-expensive alternatives to threat modeling in order to identify and manage application security risks up front. Most enterprises whose main business is not selling software or SaaS cloud services should at least focus on higher-level strategic threat modeling to understand what threat actors will likely target the organization and which applications are likely to be the targets of attack. They can then use this information to prioritize Appsec initiatives across the application portfolio and to build a business case for funding them. Smaller software development organizations, especially Agile development teams, should adopt lighter weight, incremental approaches to add security risk and threat analysis into architecture and design. Threat modeling, as it is commonly described, is a formal, document-heavy security walkthrough of system design artifacts and does not work well for teams following Agile development practices, where design details are worked out iteratively and incrementally and the design is always in flux. Dr. Gary McGraw, for one, has recently outlined a simpler, more scalable method for application risk assessment called a “Security Architecture Survey.” As he points out, although this kind of analysis is less comprehensive and less robust than more formal techniques, organizations are more likely to do it because this analysis is much less expensive and more scalable. --- We asked our respondents how frequently they assess the security of their business-critical applications that were in production. Figure 12 shows the frequency of testing reported in this survey. The frequency at which organizations are doing security testing has increased significantly over the past year, as illustrated in Table 2, which shows our 2012 survey results compared to the 2014 results. Table 2. Comparison of Testing Results <table> <thead> <tr> <th>Frequency of Security Testing for Applications in Production</th> <th>2012</th> <th>2014</th> </tr> </thead> <tbody> <tr> <td>No security testing done</td> <td>13.5%</td> <td>2.7%</td> </tr> <tr> <td>Only when applications are updated, patched or changed</td> <td>21.3%</td> <td>10.1%</td> </tr> <tr> <td>Every year</td> <td>14.3%</td> <td>19.5%</td> </tr> <tr> <td>Every three months</td> <td>18.0%</td> <td>12.1%</td> </tr> <tr> <td>Once a month</td> <td>9.5%</td> <td>8.1%</td> </tr> <tr> <td>Ongoing, continuous testing</td> <td>23.3%</td> <td>35.6%</td> </tr> </tbody> </table> Only a small percentage of the organizations surveyed are not doing application security testing today (2.7%). More organizations are taking advantage of automated testing tools and practices and SaaS testing services to do ongoing, continuous testing. This is especially important where development teams are adopting Agile development methods to make continuous incremental changes to software. Training in secure software development ranked low in the list of practices that organizations find useful. Figure 13 shows the distribution of secure code training programs. Slightly fewer than 26% of organizations had ongoing secure coding training programs that were working well or were mandated for all development. 
But almost half of organizations (41%) have programs that are not consistently implemented or are not consistently being followed, and another 27% did not train developers in secure coding at all. In rating the effectiveness of their organization’s Appsec programs, approximately 28% felt that their programs were exceptional (3%) or above average (25%). The majority of respondents felt that their programs needed improvement (54%) or even complete rework (10%), as shown in Figure 14. Breaches Caused by Application Vulnerabilities The lack of effective Appsec programs is highlighted by the number of organizations that experienced security breaches as a result of application vulnerabilities in the last 18 months. As shown in Figure 15, 29% of responding organizations experienced at least one security breach as a result of application vulnerabilities in the last 18 months, with 14% experiencing 3–5 breaches and 3% experiencing at least 10 breaches. Most of these breaches were reported by larger organizations. Because of their size, they offer a much larger attack surface, they are generally more interesting targets to nonopportunistic hackers, and they have the resources to detect breaches and to determine the root cause. More small organizations may have been breached because of a software vulnerability without being aware of it, as shown in Figure 16. ![Figure 15: Security Breaches as a Result of Application Vulnerabilities](image) ![Figure 16: Number of Breaches Suffered by Size of Organization](image) Challenges to Implementing an Effective Appsec Program Many large enterprises (38%) do not have sufficient control over their application portfolios and cannot identify all of the applications that they need to secure. And organizations continue to struggle with creating an effective bridge between security and in-house, outsourced and third-party development (34%). Figure 17 illustrates the results. Testing makes up the backbone of many application security programs today. The good news is that testing—getting access to the tools and resources to do security testing for new applications and for legacy applications—is not holding organizations back. But the number one challenge facing most organizations this year, edging out lack of funding and management buy-in, is a lack of Appsec security skills to develop organizational programs and secure production systems (46%). Plans for Spending on Appsec Although lack of funding or management buy-in is the second largest challenge facing organizations, the picture may be improving. Respondents indicated that their organizations, in general, expect to spend more on Appsec in the coming year (see Figure 18). How do you expect your application security spending to change in the next year? More than half (58%) of responding organizations expect to spend more money on their Appsec programs over the next year: almost 38% expect to spend a bit more; almost 21% expect to spend a lot more. Only a very small percentage (3%) will spend less, and 29% expect no change in funding. Future Ideas and Roadmap Finally we asked respondents to list future plans, ideas and technologies that they are looking at to improve their Appsec programs. Most organizations have no clear next steps on the future roadmap. Some are looking at application security practice maturity models like Cigital’s Build Security in Maturity Model (BSIMM),\textsuperscript{10} OWASP’s OpenSAMM\textsuperscript{11} or Application Security Verification Standard (ASVS)\textsuperscript{12} as guidelines. 
A few are evaluating advanced intrusion prevention systems and cloud-based security offerings. Others are investigating how to use Big Data analytics to support their application security initiatives. But most organizations are not looking beyond their current set of ideas and tools. They still have a lot of work ahead of them. \textsuperscript{10} \url{http://bsimm.com} \textsuperscript{11} \url{www.opensamm.org} \textsuperscript{12} \url{www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project} Organizations are continuing to invest more in application security. Last year more than one-third of those surveyed did not have an Appsec program in place. Now more than 80% have formal programs in place, and most of these organizations are doing something about Appsec now or are planning to implement a program in the coming year. More organizations will spend more on application security next year (more than 58% plan to increase spending in the next 12 months). So far, however, most of these programs are not proving to be effective. Almost two-thirds of respondents said that their programs needed to be improved, including 10% who said their programs needed a complete overhaul. Almost 29% of the organizations surveyed had experienced one or more security breaches due to an application security vulnerability in the last 18 months, and some (4%) experienced 10 or more breaches. Organizations continue to rely heavily on looking for security vulnerabilities after the fact (using black box dynamic testing and vulnerability scanning tools and services, as well as pen testing) and blocking these vulnerabilities with application firewalls and intrusion prevention systems. The good news is that organizations are taking advantage of better tools and online services to test their applications for security vulnerabilities much more frequently, even testing continuously, which could dramatically shorten vulnerability windows—if developers can fix the bugs when they are found. The bad news is that organizations are not attacking the root cause of application security problems—stopping developers from writing insecure software in the first place. Developers continue to create security holes because they don’t understand enough about secure design, threat modeling and secure coding practices. Developers aren’t taking enough advantage of static analysis tools to catch security bugs early (when they are less costly to repair), while they are still working on the code, because they don’t understand what the tools are telling them. They aren’t leveraging security libraries and the security features of their frameworks to reduce risks and costs because they—and their managers—don’t know that it is important. A lack of knowledge and skills is holding back Appsec programs today, and it is preventing organizations from making real progress in Appsec in the future. The number one obstacle to success reported in this year’s survey is a shortage of skilled people, part of a bigger problem facing the IT security industry in general, as recent studies by Forrester Research\textsuperscript{13} and (ISC)\textsuperscript{2} show. Training and education are needed to address this skills shortage—not just training more Infosec and Appsec specialists, but training developers and managers, too. Fewer than one-quarter of respondents have training programs that are ongoing and working well, and secure coding training ranks low in the list of practices that organizations depend on in their Appsec programs today. This needs to change.
There aren’t any next generation tools or other silver bullets on the horizon that will solve the problem of secure software. Writing secure software is about fundamentals: thoughtful design, careful coding, disciplined testing and informed and responsible management. The sooner that organizations understand this—and start doing it—the sooner they will solve their security problems. \textsuperscript{13} \url{www.informationweek.com/traffic-management/security-skills-shortage-or-training-failure/d/d-id/1105895} Jim Bird is an application development manager and CTO with more than 25 years of experience in software engineering, with a special focus on high-integrity and high-reliability systems. Jim is currently the co-founder and CTO of a major US-based institutional trading service, where he is responsible for managing the company’s technology organization and information security programs. Jim has worked as a consultant to IBM and to major stock exchanges and banks globally. He was also the CTO of a technology firm (now part of NASDAQ OMX) that built custom IT solutions for stock exchanges and national clearinghouses in more than 30 countries. Jim is an active contributor to OWASP, helps out with the SANS Appsec blog and blogs on Agile software development, project management and application security topics at “Building Real Software.” Frank Kim is a security leader with more than 16 years of experience in information security, risk management and enterprise IT. He has a passion for developing security strategies and building teams focused on practical solutions to business risks. He currently serves as the curriculum lead for application security at the SANS Institute and is the author of the “Secure Coding in Java” course. Frank is a popular public speaker and has presented at security, software development and leadership events around the world. SANS would like to thank this paper’s sponsors: Hewlett-Packard, Qualys and Veracode.
{"Source-Url": "https://software-security.sans.org/resources/paper/reading-room/survey-application-security-programs-practices", "len_cl100k_base": 5979, "olmocr-version": "0.1.53", "pdf-total-pages": 26, "total-fallback-pages": 0, "total-input-tokens": 44851, "total-output-tokens": 7125, "length": "2e12", "weborganizer": {"__label__adult": 0.00038909912109375, "__label__art_design": 0.00027179718017578125, "__label__crime_law": 0.0017108917236328125, "__label__education_jobs": 0.0011453628540039062, "__label__entertainment": 7.146596908569336e-05, "__label__fashion_beauty": 0.0001609325408935547, "__label__finance_business": 0.0018024444580078125, "__label__food_dining": 0.0002944469451904297, "__label__games": 0.0006480216979980469, "__label__hardware": 0.0009179115295410156, "__label__health": 0.0005526542663574219, "__label__history": 0.00013387203216552734, "__label__home_hobbies": 9.834766387939452e-05, "__label__industrial": 0.0004074573516845703, "__label__literature": 0.00015842914581298828, "__label__politics": 0.0003390312194824219, "__label__religion": 0.00024437904357910156, "__label__science_tech": 0.0217132568359375, "__label__social_life": 0.00011473894119262697, "__label__software": 0.0207672119140625, "__label__software_dev": 0.947265625, "__label__sports_fitness": 0.00024068355560302737, "__label__transportation": 0.0003323554992675781, "__label__travel": 0.00016069412231445312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30695, 0.02157]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30695, 0.1155]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30695, 0.95506]], "google_gemma-3-12b-it_contains_pii": [[0, 386, false], [386, 610, null], [610, 3043, null], [3043, 3787, null], [3787, 4626, null], [4626, 5477, null], [5477, 5767, null], [5767, 6531, null], [6531, 7167, null], [7167, 8358, null], [8358, 9213, null], [9213, 10892, null], [10892, 11870, null], [11870, 13142, null], [13142, 14046, null], [14046, 17297, null], [17297, 19738, null], [19738, 21224, null], [21224, 21743, null], [21743, 22506, null], [22506, 23076, null], [23076, 23961, null], [23961, 25650, null], [25650, 29249, null], [29249, 30665, null], [30665, 30695, null]], "google_gemma-3-12b-it_is_public_document": [[0, 386, false], [386, 610, null], [610, 3043, null], [3043, 3787, null], [3787, 4626, null], [4626, 5477, null], [5477, 5767, null], [5767, 6531, null], [6531, 7167, null], [7167, 8358, null], [8358, 9213, null], [9213, 10892, null], [10892, 11870, null], [11870, 13142, null], [13142, 14046, null], [14046, 17297, null], [17297, 19738, null], [19738, 21224, null], [21224, 21743, null], [21743, 22506, null], [22506, 23076, null], [23076, 23961, null], [23961, 25650, null], [25650, 29249, null], [29249, 30665, null], [30665, 30695, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30695, 
null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30695, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30695, null]], "pdf_page_numbers": [[0, 386, 1], [386, 610, 2], [610, 3043, 3], [3043, 3787, 4], [3787, 4626, 5], [4626, 5477, 6], [5477, 5767, 7], [5767, 6531, 8], [6531, 7167, 9], [7167, 8358, 10], [8358, 9213, 11], [9213, 10892, 12], [10892, 11870, 13], [11870, 13142, 14], [13142, 14046, 15], [14046, 17297, 16], [17297, 19738, 17], [19738, 21224, 18], [21224, 21743, 19], [21743, 22506, 20], [22506, 23076, 21], [23076, 23961, 22], [23961, 25650, 23], [25650, 29249, 24], [29249, 30665, 25], [30665, 30695, 26]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30695, 0.0915]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
d30034a7fad87a8afa251f36f9cf12a951ed98a8
{"Source-Url": "http://dl.ifip.org/db/conf/cardis/cardis2006/SirettMMM06.pdf", "len_cl100k_base": 7695, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 39061, "total-output-tokens": 9195, "length": "2e12", "weborganizer": {"__label__adult": 0.0008406639099121094, "__label__art_design": 0.0005507469177246094, "__label__crime_law": 0.0024356842041015625, "__label__education_jobs": 0.0007443428039550781, "__label__entertainment": 0.000133514404296875, "__label__fashion_beauty": 0.0002949237823486328, "__label__finance_business": 0.0009293556213378906, "__label__food_dining": 0.0004525184631347656, "__label__games": 0.0014505386352539062, "__label__hardware": 0.0254364013671875, "__label__health": 0.0009179115295410156, "__label__history": 0.00049591064453125, "__label__home_hobbies": 0.00024271011352539065, "__label__industrial": 0.0016803741455078125, "__label__literature": 0.00025916099548339844, "__label__politics": 0.0004520416259765625, "__label__religion": 0.0005698204040527344, "__label__science_tech": 0.311767578125, "__label__social_life": 0.00010353326797485352, "__label__software": 0.0192413330078125, "__label__software_dev": 0.62744140625, "__label__sports_fitness": 0.0005192756652832031, "__label__transportation": 0.0030193328857421875, "__label__travel": 0.0002741813659667969}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36462, 0.03544]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36462, 0.21267]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36462, 0.87076]], "google_gemma-3-12b-it_contains_pii": [[0, 2175, false], [2175, 4959, null], [4959, 7801, null], [7801, 10179, null], [10179, 11681, null], [11681, 14767, null], [14767, 16901, null], [16901, 19320, null], [19320, 22231, null], [22231, 25103, null], [25103, 27454, null], [27454, 30270, null], [30270, 33209, null], [33209, 33986, null], [33986, 36462, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2175, true], [2175, 4959, null], [4959, 7801, null], [7801, 10179, null], [10179, 11681, null], [11681, 14767, null], [14767, 16901, null], [16901, 19320, null], [19320, 22231, null], [22231, 25103, null], [25103, 27454, null], [27454, 30270, null], [30270, 33209, null], [33209, 33986, null], [33986, 36462, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36462, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36462, null]], "pdf_page_numbers": [[0, 2175, 1], [2175, 4959, 2], [4959, 7801, 3], [7801, 10179, 4], [10179, 11681, 5], [11681, 14767, 6], [14767, 16901, 7], [16901, 19320, 8], [19320, 22231, 9], [22231, 25103, 10], [25103, 27454, 11], [27454, 30270, 12], 
[30270, 33209, 13], [33209, 33986, 14], [33986, 36462, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36462, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
72d6f98e13f0fb8484fbe5ba367794011160d826
Regular expressions

- Key to powerful, efficient, and flexible text processing
- Defined as a string, composed of letters, numbers, and special symbols, that defines one or more strings
- You have already used them in selecting files, when you used the asterisk (*) and question mark characters to select filenames
- Used by several Unix utilities such as ed, vi, emacs, grep, sed, and awk to search for and replace strings
- Checking the author, subject, and date of each message in a given mail folder:
    egrep "^(From|Subject|Date): " <folder>
- The quotes above are not a part of the regular expression but are needed by the command shell
- A regular expression is composed of characters, delimiters, simple strings, special characters, and other metacharacters, defined below
- Characters
  - A character is any character on the keyboard except the newline character `\n`
  - Most characters represent themselves within a regular expression
  - All the characters that represent themselves are called literals
  - A special character is one that does not represent itself (such as a metacharacter) and needs to be quoted
    * The metacharacters in the example above (with egrep) are `^`, `(`, `|`, and `)`
  - We can treat regular expressions as a language in which the literal characters are the words and the metacharacters are the grammar
- Delimiters
  - A delimiter is a character that marks the beginning and end of a regular expression
  - The delimiter is always a special character for the regular expression being delimited
  - The delimiter does not represent itself but marks the beginning and end of the regular expression
  - Any character can be used as a delimiter as long as it (the same character) appears at both ends of the regular expression
  - More often than not, people use the forward slash `/` as the delimiter (guess why)
  - If the second delimiter is to be immediately followed by a carriage return, it may be omitted
  - Delimiters are not used with the grep family of utilities
- The metacharacters in basic regular expressions are ^ $ . * [ ] \{ \} \( \) \
- In addition, the following metacharacters have been added for extended regular expressions (such as the ones used by egrep): + ? | ( )
- The dash (-) is considered to be a metacharacter only within square brackets, where it indicates a range; otherwise, it is treated as a literal
  * Even in this case, the dash cannot be the first character and must be enclosed between the beginning-of-range and end-of-range characters
- The regular expression search is not done on a word basis; utilities like `egrep` display the entire line in which the regular expression matches
- Simple strings
  - The most basic regular expression
  - Matches only itself
  - Examples

<table> <thead> <tr> <th>Reg. Exp.</th> <th>Matches</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td><code>/ring/</code></td> <td>ring</td> <td>ring</td> </tr> <tr> <td></td> <td></td> <td>spring</td> </tr> <tr> <td></td> <td></td> <td>ringing</td> </tr> <tr> <td></td> <td></td> <td>stringing</td> </tr> <tr> <td><code>/Thursday/</code></td> <td>Thursday</td> <td>Thursday</td> </tr> <tr> <td></td> <td></td> <td>Thursday's</td> </tr> <tr> <td><code>/or not/</code></td> <td>or not</td> <td>or not</td> </tr> <tr> <td></td> <td></td> <td>poor nothing</td> </tr> </tbody> </table>

- Special characters
  - Cause a regular expression to match more than one string
  - Period
    * Matches any character
    * Examples
<table> <thead> <tr> <th>Reg. Exp.</th> <th>Matches</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td><code>/ .alk/</code></td> <td>All strings that contain a space followed by any character followed by alk</td> <td>will talk</td> </tr> <tr> <td></td> <td></td> <td>may balk</td> </tr> <tr> <td><code>/ .ing/</code></td> <td>All strings with any character preceding ing</td> <td>singing</td> </tr> <tr> <td></td> <td></td> <td>ping</td> </tr> <tr> <td></td> <td></td> <td>before inglenook</td> </tr> <tr> <td><code>/09.17.98/</code></td> <td>Date with any separator</td> <td>09/17/98</td> </tr> <tr> <td></td> <td></td> <td>09-17-98</td> </tr> </tbody> </table>

  - Square brackets
    * Define a class of characters that matches any single character within the brackets
    * If the first character immediately following the left square bracket is a caret `^`, the square brackets define a character class that matches any single character not within the brackets
    * A hyphen can be used to indicate a range of characters
    * Within a character class definition, the special characters (backslash, asterisk, and dollar sign) lose their special meaning
    * A right square bracket that is a member of the character class can appear only as the first character following the left square bracket
    * A caret is special only if it is the first character following the left square bracket
    * A dot within square brackets is not a metacharacter
      - `/07[-]17[-]98/` will not match 07/17/98 but will match 07-17-98
    * Examples

<table> <thead> <tr> <th>Reg. Exp.</th> <th>Matches</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>/[bB]ill/</td> <td>Member of the character class b and B followed by ill</td> <td>bill, Bill, billed</td> </tr> <tr> <td>/t[aeiou].k/</td> <td>t followed by a lowercase vowel, any character, and a k</td> <td>talkative, stink, teak, tanker</td> </tr> <tr> <td>/number [6-9]/</td> <td>number followed by a space and a member of the character class 6 through 9</td> <td>number 60, number 8, get number 9</td> </tr> <tr> <td>/[^a-zA-Z]/</td> <td>any character that is not a letter</td> <td>1, 7, @, ., ) Stop!</td> </tr> </tbody> </table>

  - Asterisk
    * Can follow a regular expression that represents a single character
    * Represents zero or more occurrences of a match of the regular expression
    * An asterisk following a period matches any string of characters
    * A character class definition followed by an asterisk matches any string of characters that are members of the character class
    * A regular expression that includes a special character always matches the longest possible string, starting as far toward the beginning (left) of the line as possible
    * Examples

<table> <thead> <tr> <th>Reg. Exp.</th> <th>Matches</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>/ab*c/</td> <td>a followed by zero or more b’s followed by a c</td> <td>ac, abc, abbc, debbcaabbbc</td> </tr> <tr> <td>/ab.*c/</td> <td>ab followed by zero or more other characters followed by a c</td> <td>abc, abxc, ab45c, xab 756.345 x cat</td> </tr> <tr> <td>/t.*ing/</td> <td>t followed by zero or more characters followed by ing</td> <td>thing, ting, I thought of going</td> </tr> <tr> <td>/([a-zA-Z ]+)/</td> <td>a string composed only of letters and spaces</td> <td>1. any string without numbers or punctuation!</td> </tr> <tr> <td>/(.*)/</td> <td>as long a string as possible between ( and )</td> <td>Get (this) and (that);</td> </tr> <tr> <td>/([^)]*)/</td> <td>the shortest string possible that starts with ( and ends with )</td> <td>(this) Get (this and that)</td> </tr> </tbody> </table>
  - Caret and dollar sign
    * A regular expression beginning with a caret `^` can match a string only at the beginning of a line
    * The regular expression cat finds the string cat anywhere on the line, but `^cat` matches only if the string cat occurs at the beginning of the line
    * `^` is used to anchor the match to the start of the line
    * A dollar sign `$` at the end of a regular expression matches the end of a line
    * The regular expression cat finds the string cat anywhere on the line, but cat$ matches only if the string cat occurs at the end of the line; it cannot be followed by any character but newline (not even a space)
    * Examples

<table> <thead> <tr> <th>Reg. Exp.</th> <th>Matches</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>/^T/</td> <td>a T at the beginning of a line</td> <td>This line ... That time...</td> </tr> <tr> <td>/^\+[0-9]/</td> <td>a plus sign followed by a number at the beginning of a line</td> <td>+5 + 45.72</td> </tr> <tr> <td>/:$/</td> <td>a colon that ends a line</td> <td>...below:</td> </tr> </tbody> </table>

  - Quoting special characters
    * Any special character, except a digit or a parenthesis, can be quoted by preceding it with a backslash
    * Quoting a special character makes it represent itself
    * Examples

<table> <thead> <tr> <th>Reg. Exp.</th> <th>Matches</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>/end\./</td> <td>all strings that contain end followed by a period</td> <td>The end. send. pretend.mail</td> </tr> <tr> <td>/\\/</td> <td>a single backslash</td> <td>\</td> </tr> <tr> <td>/\*/</td> <td>an asterisk</td> <td>*.c an asterisk (*)</td> </tr> <tr> <td>/\[5\]/</td> <td>[5]</td> <td>it was five [5]</td> </tr> <tr> <td>/and\/or/</td> <td>and/or</td> <td>and/or</td> </tr> </tbody> </table>

- Rules
  - Longest match possible
    * A regular expression always matches the longest possible string, starting as far toward the beginning of the line as possible
  - Empty regular expressions
    * An empty regular expression always represents the last regular expression used
    * Let us give the following command to vi
        :s/mike/robert/
    * If you want to make the same substitution again, the following is sufficient
        :s//robert/
    * You can also do the following
        /mike/
        :s//robert/
- Bracketing expressions
  - Regular expressions can be bracketed by quoted parentheses \( and \)
  - The string matching the bracketed regular expression can subsequently be referred to with quoted digits
  - The regular expression does not attempt to match the quoted parentheses themselves
  - A regular expression within quoted parentheses matches exactly what the same regular expression without the quoted parentheses would match
  - The expressions /\(rexp\)/ and /rexp/ match the same patterns
  - Quoted digits
    * Within the regular expression, a quoted digit (\n) takes on the value of the string that the regular expression beginning with the nth \( matched
    * Assume a list of people in the format last-name, first-name initial
    * It can be changed to the format first-name initial last-name by the following vi command
        :%s/\([^,]*\), \(.*\)/\2 \1/
  - Quoted parentheses can be nested
    * There is no ambiguity in identifying the nested quoted parentheses, as they are identified by the opening \(
    * Example
        /\([a-z]\([A-Z]*\)x\)/ matches dMNORx in the string "3 t dMNORx7 1 u" (the inner \( \) matches MNOR)

- Replacement string
  - vi and sed use regular expressions as search strings with the substitute command
  - Ampersands (&) and quoted digits (\n) can be used within the replacement string to refer to the matched strings
  - An ampersand takes on the value of the string that the search string matched
  - Example
      :s/[0-9][0-9]*/Number &/
- Word boundaries
  - Word boundaries in regular expressions are denoted by any whitespace character, period, end of line, or beginning of line
  - They can also be written explicitly:
      \<   beginning of word
      \>   end of word
- Regular expressions cannot be used to match the newline character

sed

- Stream editor
- Derivative of ed
  - Takes a sequence of editor commands
  - Goes over the data line by line and performs the commands on each line
- Basic syntax
    sed 'list of ed commands' filename[s] ...
- The commands are applied from the list, in order, to each line, and the edited form is written to stdout
- Changing a pattern in the file
    sed 's/pat_1/pat_2/g' in_file > out_file
- sed does not alter the contents of the input file
- Quotes around the list of commands are necessary so that the sed metacharacters are not interpreted by the shell
- Selecting a range of lines
  - Command to remove the mail header from a saved mail message
      sed '1,/^$/d' in_file > out_file
  - Removing the information from the output of the finger command to get only the user id and login time
      finger | sed 's/\([a-zA-Z][a-zA-Z]*\) .*\([0-9][0-9]:[0-9][0-9]\).*/\1 \2/'
  - Problem: the first (heading) line should have been removed as well
      finger | sed 's/\([a-zA-Z][a-zA-Z]*\) .*\([0-9][0-9]:[0-9][0-9]\).*/\1 \2/' | sed '1d'
- Indenting a file one tab stop
    sed 's/^/\t/' file
- The above matches all the lines (including empty lines)
- The problem can be solved by
    sed '/./s/^/\t/' file
- Another way to do it
    sed '/^$/!s/^/\t/' file
- Multiple commands in the same invocation of sed
    $ finger | sed 's/\([a-zA-Z][a-zA-Z]*\) .*\([0-9][0-9]:[0-9][0-9]\).*/\1 \2/
    1d'
- The commands must be on separate lines
- sed scripts
  - The sed commands can be put into script files and executed by
      sed -f cmdfile in_file
- Lines containing a pattern can be deleted by
    sed '/regexp/d'
- Automatic printing
  - By default, sed prints each line on stdout
  - This can be inhibited by using the -n option as follows
      sed -n '/pattern/p'
- Matching conditions can be inverted by the !
    sed -n '/pattern/!p'
  - The last achieves the same effect as `grep -v`
- Inserting newlines
  - Converting a document from single spacing to double spacing (append a newline to each line)
    ```
    $ sed 's/$/\
    /' file
    ```
  - Creating a list of words used in the document (one word per line)
    ```
    $ sed 's/[ \t][ \t]*/\
    /g' file
    ```
  - Counting the unique words used in the document
    ```
    $ sed 's/[ \t][ \t]*/\
    /g' file | sort | uniq | wc -l
    ```
- Writing on multiple files
  ```
  $ sed -n '/pat/w file1
  /pat/!w file2' filename
  ```
- Line numbering
  - Line numbers can be used to select a range of lines over which the commands will operate
  - Examples
    ```
    $ sed -n '20,30p'
    $ sed '1,10d'
    $ sed '1,/^$/d'
    $ sed -n '/^$/,$p'
    ```
  - `sed` does not support relative line numbers (a difference with respect to `ed`)

Table 1: Summary of sed commands

<table> <thead> <tr> <th>Command</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>a\</td> <td>append the following lines to the output until one not ending in \</td> </tr> <tr> <td>b label</td> <td>branch to the command : label</td> </tr> <tr> <td>c\</td> <td>change lines to the following text (as in a\)</td> </tr> <tr> <td>d</td> <td>delete lines</td> </tr> <tr> <td>i\</td> <td>insert the following text before the next output</td> </tr> <tr> <td>l</td> <td>list line, making all non-printing characters visible (tabs appear as &gt;, long lines are broken with \)</td> </tr> <tr> <td>p</td> <td>print line</td> </tr> <tr> <td>q</td> <td>quit</td> </tr> <tr> <td>r file</td> <td>read file, copy contents to stdout</td> </tr> <tr> <td>s/pat1/pat2/f</td> <td>substitute pat2 for pat1; the flag f may be g (replace all occurrences), p (print), or w file (write to file)</td> </tr> <tr> <td>t label</td> <td>test: branch to label if a substitution was made to the current line</td> </tr> <tr> <td>w file</td> <td>write line(s) to file</td> </tr> <tr> <td>y/str1/str2/</td> <td>replace each character from str1 with the corresponding character from str2 (no ranges allowed)</td> </tr> <tr> <td>=</td> <td>print current input line number</td> </tr> <tr> <td>!cmd</td> <td>do the sed cmd only if the line is not selected</td> </tr> <tr> <td>: label</td> <td>set label for the b and t commands</td> </tr> <tr> <td>{</td> <td>treat commands up to the matching } as a group</td> </tr> </tbody> </table>

**awk**

- Acronym for the last names of its designers – Aho, Weinberger, Kernighan
- Not as efficient as `sed` for simple edits, but includes arithmetic, variables, built-in functions, and a C-like programming language; on the other hand, it offers a more general processing model than a text editor
- Looks more like a programming language than a text editor
- Mostly used for formatting reports, data entry, and data retrieval to generate reports
- `awk` is easier to use than `sed` but is slower
- Usage is
  ```
  awk 'awk_script' files
  ```
- The `awk_script` looks like
  ```
  pattern { action }
  pattern { action }
  ```
- awk reads one line of the file at a time, compares it with each pattern, and performs the corresponding action if the pattern matches
- Just like sed, awk does not alter its input files
- The patterns in awk can be regular expressions or C-like conditions
- grep can be written in awk as
  ```
  awk '/regular expression/ { print }' filename
  ```
- Either the pattern or the action is optional and can be omitted
  - Omitting the pattern performs the action on every line
    ```
    awk '{ print }' filename
    ```
  - Omitting the action prints matched lines
    ```
    awk '/regular expression/' filename
    ```
- Just like sed, the awk_script can be presented to awk from a file
• Fields
  - A field is a string of non-blank characters
  - awk splits each input line into fields, separated by blanks or tabs
  - The output of who has up to six fields, as follows
        sanjiv   console   Nov 18 13:26
        sanjiv   tty0      Nov 18 13:26   (:0.0)
        sanjiv   ttypc     Nov 19 13:27   (:0.0)
        vlad     tty7      Nov 19 16:46   (arrak13.umsl.edu)
  - The fields are called $1, $2, ..., $NF
    * NF is a variable whose value is set to the number of fields
    * NF and $NF are not the same
      - NF is the number of fields
      - $NF is the contents (string) of the last field
  - The field separator is white space by default but can be changed by a command line option
    * Changing the field separator to a colon (:)
        awk -F: '/regular expression/ { action }' file
    * To print the user names and real names in the passwd file
        awk -F: '{print $1"\n"$5}' /etc/passwd
• Printing
  - The number of the current input line (or record) is tracked by the built-in variable NR
  - The entire input record is contained in the variable $0
  - To add line numbers to each line, you can use the following
        awk '{print NR, $0}' filename
  - Expressions separated by a comma in print are printed separated by the output field separator – a blank space character by default
  - Complete control of the output format can be achieved by using printf instead of print, as follows
        awk '{ printf "%4d %s\n", NR, $0 }' filename
  - printf in awk is almost identical to the corresponding C function
• Patterns
  - Checking for people who do not have a password in the file /etc/passwd
        awk -F: '$2 == ""' /etc/passwd
  - Checking for people who have a locked password entry
        awk -F: '$2 == "*"' /etc/passwd
  - Other ways to check for an empty string
        | $2 == ""        | 2nd field is empty                     |
        | $2 ~ /^$/       | 2nd field matches the empty string     |
        | $2 !~ /./       | 2nd field does not match any character |
        | length($2) == 0 | length of 2nd field is zero            |
  - The symbol ~ indicates a regular expression match while !~ indicates a regular expression non-match
  - length is a built-in function that counts the number of characters in a string (or field)
  - Any pattern can be preceded by ! to negate its match, as follows
        awk -F: '!( $2 == "" )' filename
  - Data validation using the number of fields as the criterion – a line is valid if the number of fields is odd
        echo $LINE | awk 'NF % 2 != 0'
  - Printing excessively long lines (> 72 characters)
        awk 'length($0) > 72' filename
  - The above problem with a more informative solution
        awk '(length($0) > 72) { print "Line", NR, "too long: ", substr($0,1,50)}' filename
  - The function substr( s, m, n ) produces the substring of s beginning at position m and with a length of n characters; if n is omitted, it continues to the end of the string
  - Extracting information with substr
        $ date
        Wed Nov 20 14:27:33 CST 1996
        $ date | awk '{ print substr ( $4, 1, 5 )}'
        14:27
• The BEGIN and END patterns
  - Special patterns used in awk scripts
  - BEGIN actions are performed before the first input line has been read (used to initialize variables, print headings, and the like)
    * Setting the field separator within the script
        $ awk 'BEGIN { FS = ":" } $2 == ""' /etc/passwd
  - END actions are done after the last line has been processed
    * Printing the number of lines in the input
        awk 'END { print NR }' ...
  (a short example combining fields, patterns, and BEGIN/END follows below)
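A small sketch (not from the notes) that ties these pieces together on /etc/passwd; the choice of /bin/bash as the shell to look for is arbitrary:

```
# field separator set in BEGIN, a comparison pattern on field 7 (the login shell),
# and an END action that reports a count
awk 'BEGIN { FS = ":"; n = 0 }
     $7 == "/bin/bash" { print $1; n++ }
     END { print n, "bash users" }' /etc/passwd
```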
• Arithmetic and variables
  - awk allows you to do more sophisticated arithmetic than the shell
  - Adding the numbers in the first column, and printing the sum and average
        { s = s + $1 }
        END { print s, s/NR }
  - Variables can be created by users and are initialized to zero by default
  - awk also allows shorthand arithmetic operators like C
        { s += $1 }
        END { print s, s/NR }
  - Implementing wc in all its generality
        $ awk '{ nc += length($0) + 1    # number of chars, 1 for \n
                 nw += NF }              # number of words
               END { print NR, nw, nc }' filename
  - Variables can also store strings of characters, and the interpretation is based on context
  - awk maintains a number of built-in variables of both types

Developing man pages with [nt]roff

• nroff and troff
  - Native Unix programs to format text
  - Based on requests within the documents that start with a period in the first column
  - Commonly used requests are
        .I    Italicize following line
        .B    Following line in bold
        .R    Following line in Roman
        .br   Break the line
        .ce   Center the following line
        .fi   Fill lines (align right margins)
        .ft   Set font
        .na   No right alignment
        .nf   Do not fill lines (preferable to .na)
        .sp   One vertical line
• The manual page
  - Stored in a subdirectory of the directory /usr/man
  - The subdirectory is called manx, where x is a digit or character indicating the section of the manual
  - The sections are numbered 1 to 8, plus n and l
        1   User commands
        2   System calls
        3   C library functions
        4   Devices and network interfaces
        5   File formats
        6   Games and demos
        7   Environments, tables, and troff macros
        8   Maintenance commands
        l   Misc. reference manual pages (locally developed and installed)
        n   Misc. reference manual pages (new commands)
  - Printed with the man(1) command
    * A shell script that runs nroff -man, but may be compiled on newer machines
    * Locally developed man pages can be tested for printing with the nroff -man command
    * The man pages in a given section can be printed by specifying the section number; for example, the man page for the system call umask can be printed by typing the command
        man 2 umask
      If the section number is not specified, the output will be for the user command from section 1
  - The macros for man are discussed in section 7 of the manual and can be invoked by man 7 man
• Layout of a Unix manual page
  - The manual page is laid out as per the specifications in the man macro package of troff
    * Any text argument may be from zero to six words
    * Quotes can be used to include the space character in a "word"
    * Some native nroff conventions are followed; for example, if the text for a command is empty, the command is applied to the next line
    * A line starting with .I and with no other input italicizes the next line
    * The prevailing indentation distance is remembered between successive paragraphs but not across sections
  - The basic layout of a man page is described by
        .TH COMMAND section-number
        .SH NAME
        command \- brief description of function
        .SH SYNOPSIS
        .B command
        options
        .SH DESCRIPTION
        Detailed explanation of programs and options.
        Paragraphs are introduced by .PP
        .PP
        This is a new paragraph.
.SH FILES Files used by the command, e.g., passwd(1) mentions /etc/passwd .SH "SEE ALSO" References to related documents, including other manual pages .SH DIAGNOSTICS Description of any unusual output (e.g., see cmp(1)) .SH BUGS Surprising features (not always bugs) - If any section is empty, its header is omitted - The .TH line and the NAME, SYNOPSIS, and DESCRIPTION sections are mandatory - The .TH line * Begins a reference page * The full macro is described by .TH command section date_last_changed left_page_footer center_header * Sets prevailing indent and tabs to 0.5" - The .SH lines * Section headers * Identify sections of the manual page * NAME and SYNOPSIS sections are special; other sections contain ordinary prose * NAME section - Names the command (in lower case) - Provides a one-line description of it * SYNOPSIS section - Names the options, but does not describe them - The input is free form - Font changes can be described with the .B, .I, and .R macros - The name and options are bold while the rest of the information is in roman * DESCRIPTION section - Describes the commands and its options - It tells the usage of the command - The man page for cc(1) describes how to invoke the compiler, optimizer, where the output is, but does not provide a reference page for the manual - The reference page can be cited in the SEE ALSO section - However, man(7) is the description of the language of manual macros - Command names and tags for options are printed in italics, using the macros .I (print first argument in italics) and .IR (print first argument in italic, second in roman) * FILES section - Mentions any files implicitly used by the commands * DIAGNOSTICS section - Optional section and generally not present - Reports any unusual output produced by the command - May contain diagnostic messages, exit statuses, or surprising variations of the command's normal behavior * BUGS section - Could be called LIMITATIONS - Reports shortcomings in the program that may need to be fixed in a future release - Other requests and macros for man .IP x Indented paragraph with a tag x .LP Left-aligned paragraph .PP Same as .LP .SS Section subheading
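A complete (hypothetical) example pulling the above together: the source for a minimal manual page greet.1, which could be previewed with nroff -man greet.1. The command and its description are invented for illustration and are not part of the original notes.

```
.TH GREET 1
.SH NAME
greet \- print a one-line greeting
.SH SYNOPSIS
.B greet
.I name
.SH DESCRIPTION
.B greet
writes a short greeting for
.I name
on the standard output.
.PP
With no argument it greets the world.
.SH "SEE ALSO"
echo(1)
.SH BUGS
Only one name may be given.
```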
{"Source-Url": "http://grid.cs.gsu.edu/~nmancuso1/files/csc3320/re.pdf", "len_cl100k_base": 7064, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 58004, "total-output-tokens": 7173, "length": "2e12", "weborganizer": {"__label__adult": 0.00031280517578125, "__label__art_design": 0.0007872581481933594, "__label__crime_law": 0.0003190040588378906, "__label__education_jobs": 0.0016956329345703125, "__label__entertainment": 0.00026679039001464844, "__label__fashion_beauty": 0.00014281272888183594, "__label__finance_business": 0.00021636486053466797, "__label__food_dining": 0.0001766681671142578, "__label__games": 0.0008187294006347656, "__label__hardware": 0.001041412353515625, "__label__health": 0.00017833709716796875, "__label__history": 0.0002416372299194336, "__label__home_hobbies": 0.000148773193359375, "__label__industrial": 0.00022995471954345703, "__label__literature": 0.0005464553833007812, "__label__politics": 0.00020372867584228516, "__label__religion": 0.00037550926208496094, "__label__science_tech": 0.01479339599609375, "__label__social_life": 0.00014829635620117188, "__label__software": 0.1693115234375, "__label__software_dev": 0.8076171875, "__label__sports_fitness": 0.00014078617095947266, "__label__transportation": 0.00012254714965820312, "__label__travel": 0.00016772747039794922}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25407, 0.01174]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25407, 0.43398]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25407, 0.84437]], "google_gemma-3-12b-it_contains_pii": [[0, 2553, false], [2553, 4820, null], [4820, 6975, null], [6975, 10043, null], [10043, 11583, null], [11583, 13121, null], [13121, 14600, null], [14600, 16579, null], [16579, 18837, null], [18837, 20765, null], [20765, 22972, null], [22972, 25006, null], [25006, 25407, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2553, true], [2553, 4820, null], [4820, 6975, null], [6975, 10043, null], [10043, 11583, null], [11583, 13121, null], [13121, 14600, null], [14600, 16579, null], [16579, 18837, null], [18837, 20765, null], [20765, 22972, null], [22972, 25006, null], [25006, 25407, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25407, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25407, null]], "pdf_page_numbers": [[0, 2553, 1], [2553, 4820, 2], [4820, 6975, 3], [6975, 10043, 4], [10043, 11583, 5], [11583, 13121, 6], [13121, 14600, 7], [14600, 16579, 8], [16579, 18837, 9], [18837, 20765, 10], [20765, 22972, 11], [22972, 25006, 12], [25006, 25407, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25407, 0.14916]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
11cc6eac7eaab71f227933ad52c27caccb69f702
Abstract—This paper proposes a framework that helps to close the gap between the mobile learning environment and students’ everyday use of mobile devices. This new framework for Digital Rights Management in systems for mobile learning enhances the ability to define, manage, and share licensed learning multimedia content among different mobile networks. The major strength of the proposed framework is to position the role of the university in the m-learning value chain as a policy setter, not an implementer. The new framework introduces a new approach in which the university may have a Web portal page for mobile learning material but should not host or provide the complete delivery service; instead, the university should let the content providers handle that role. Recognizing that a single licensing authority is not obtainable in the near future, we propose a new object repository for storing the information needed by a specific university – the Digital Mobile Learning Rights Depository (dMLRiD) – consisting of two databases, a Rights Depository and a Data Depository, with appropriate standard interfaces. While this framework allows for the possibility that multiple University Depositories for mobile learning may exist, it is flexible enough to allow different implementations.

Index Terms—Digital rights management, mobile learning, XML, multimedia, proof of concept, content provider.

I. INTRODUCTION

Our research project looks into the technical part of an m-learning system that deals with content rights management. The project does not aim to create a new Digital Rights Management (DRM) system or a new mobile learning system, nor to build a new REL (rights expression language); rather, its goal is to better define the issues that surround the delivery mechanisms for different devices in mobile learning. The two main issues we explore are the context of different copyright needs and delivery across mobile networks, so as to offer the same experience to all students and actors in mobile learning. Once we are able to deliver appropriate content for each device and simultaneously prevent the abuse of copyrighted works, we will be able to establish a fully modern mobile learning environment.

Defining the term framework, Pree (2000) comments that we need to explore the building blocks that will predefine the overall architecture of the system, while producing the final application would mean to “adjust building blocks to specific needs by overriding some methods in subclasses [1].” Our proposed framework relates directly to mobile content delivery, as Markiewicz and de Lucena (2001) argue that frameworks are “application generators that are directly related to a specific domain, i.e., a family of related problems [2].” In looking at our mobile learning content delivery framework that includes DRM handling, we can notice that some elements require more flexibility. The constant development of new multimedia formats and codecs, along with new types of content and devices, creates a changing environment for each element related to content handling, such as data storage, packagers, interfaces, and license generators. In that sense, the proposed framework will indicate its hot spots, which will be the points of its flexibility. As Riehle (2000) explains, “[h]ot spots are abstract classes or methods that must be implemented. Frameworks are not executable.
To generate an executable, one must instantiate the framework by implementing application specific code for each hot spot [3].” A. DRM Architecture In exploring the digital content delivery, we need to consider the ways to secure that the rights of the content authors, owners and other members of the delivery chain are respected. From that perspective, “Digital Rights Management refers to controlling and managing rights to digital intellectual property [4].” To represent the intellectual and usage rights, a DRM solution needs to describe the rights using a defined set of rules - Digital Right Expression Languages (DREL). DRELs deal with the description of the rights and are of utter importance for interoperability activities. For example, Iannella (2001) shows one possible description model [5]. Usage permissions are defined by the set of attributes and related to the content via Constrains, Obligations and Right Holders. By defining the attributes, we allow for creation of a framework that will represent any model of usage, with the idea to offer a flexible solution. Any new set of services or an application should belong to one of those attributes, which makes possible to define an appropriate framework. B. DRM Interoperability Initiatives The list of specifications with applicability to this area is extensive, as there are many activities in the industry and academia. Among others, there are: IEEE Digital Rights Expression Language, XrML, ODRL, Creative Commons, Europe4DRM, and Business associations, groups, like OMA, Marlin, Coral [6]. Open Digital Rights Language (ODRL) represents another initiative, aimed at development and promotion of an open standard for rights expressions [7]. This initiative is working on the ODRL v2.0 [8], and its parts will be included within the new OMA DRM 2.0 – mobile DRM. C. Differentiation of Planned Research from Existing Literature There are two main differences between this research and majority of the initiatives by the industry or academy, described in this literature review. The first difference is a way our research deals with the specific new issue of collaboration in the mobile learning environment that is yet not considered by the other initiatives. Our focus is on the interoperability and ability to share multimedia files horizontally, between the members of a same class or a team within the class. Current social trends of social networking (Facebook, Twitter, YouTube and other) have made the collaboration and sharing a regular process in everyday life. Our research will propose a new framework that will help in closing the gap that exists between the mobile learning environment and students’ everyday use of mobile devices. The second important difference between this research and other mentioned initiatives lies in the fact that our research is looking at the university as the focal point of the content delivery for mobile learning. Students of a university have their devices registered on different mobile networks, just like other people in the same geographical area. There are several reasons why modern mobile learning tools and learning mechanisms that include collaboration do not work for all students. 1) Unless they all register in the same mobile network, they will not have the same access rights to the content. 2) They may have different delivery behavior defined in the networks. 
3) They are not able to share the content or send it to each other, while keeping the content protected from outside usage (which may be important if a student group is working on a project that should not be shared with outsiders). This research is proposing a way to define the rights outside of the networks, which would enable all those sharing mechanisms needed for a modern mobile learning environment. Furthermore, the framework defined in this research will have the flexibility to be open for any implementation of the future service models. II. METHODOLOGY The main goal of the project is to develop a new framework for delivery of the DRM-protected content in an m-learning environment. To achieve this, it was required to perform a detailed analysis of the DRM interoperability, fully define the elements of a framework and needed use cases. In addition, it was required to choose the software tools to identify the solution and to present the new DRM framework that would offer an additional level of interoperability for a mobile learning environment. The following contents give a detailed analysis of the research design that has resulted in building a Proof-of-Concept (PoC) logical demo to support the results of the research. UML sequence diagrams are used to show the communication between the explored elements of the proposed framework. A. Design The research solution is supported by the PoC demo tests done within a simulated environment that demonstrates the logical call flow of the messages exchanged between the hot spots of our proposed framework. The PoC demo environment does not attempt to recreate a fully functional mobile content delivery system, as that is out of scope of this research. Instead, it contains logical units, with the purpose of providing the PoC type of demonstration with the simulated instead of “real world” content. The PoC results allow us to construct the new framework by using the standard software framework definition elements. As our intention is to make the framework very flexible, the architecture will not presume the use of any programming languages. Instead, it is given as a set of block architecture and UML diagrams, with the interfaces and objects defined using the XML schemes as the leads only in helping in the development of a physical solution. A basic diagram for the use of a DRM system in a mobile learning Use Case with multiple devices, assuming they are using different DRM systems is shown on Fig. 1. The Multi-DRM Environment box in Fig. 1 illustrates the focus of this project. ![Fig. 1. Basic mobile learning use case with the DRM architecture](image) The presented environment has to be defined by using the standard mechanisms – architecture, element description, call flows and UML diagrams, with the objects defined using the XML code. The different nature of the devices used in m-learning causes that the DRM environment needs to be more flexible than a typical mobile Content Delivery environment, usually employing only mobile phone communication elements, as the integration with other device types is done further back in an operator’s infrastructure (mainly using the operator’s billing and authentication systems). Once a basic architecture of an environment is given by block diagrams of the elements, to understand better communication between the elements, the UML Sequence diagrams are required. As blocks are considered as single communication points, communication between the blocks depicts an environment from a functional point of view. 
This project is focused on the details of the server-side only, as the intention of the proposed framework is to offer the deeper level of interoperability, assuming that the devices have a multitude of media codecs and associated DRM clients are already present in the handset platform. B. DRM Architecture A Basic UML sequence diagram represents the behavior of all of the relevant elements within a framework. The complexity of a UML sequence diagram allows us to define the communication between the units (or objects) without specifying their internal structure. A UML sequence diagram can represent well the architecture of an application or a framework. For that reason, this project uses the white hot spots to provide as much flexibility as possible for the future development, as well as use the UML sequence diagrams to represent the elements of the framework and relations among them. One of the most important Use Cases for this analysis is the case of Superdistribution of the mobile content, a feature enabled by DRM. This functionality enables collaboration, by allowing the students to exchange copyrighted material among them. In addition, as the content is protected, it cannot be used without a proper license. Students who receive the protected content and have a compliant device would be able to acquire a license, enabling the content use on their own devices. Based on the previous discussion, a generic look at the DRM elements is demonstrated in Fig. 2. As it can be seen, we can identify four main actors in the process: Consumer, Content Distributor, Service Provider and Content Licensors. ![Fig. 2. DRM elements architecture](image) Many current DRM solutions support Superdistribution (including PacketVideo, Microsoft Playready DRM, OMA v1 Separate Delivery and OMA v2 DRM, among others). For the purpose of this research, we look at the case of superdistribution involving multiple mobile networks, which is of key importance for mobile learning environment, as mentioned earlier. The supporting PoC use the case of superdistribution as a test of the proposed multi-network capable DRM framework. C. Mobile Learning Use Cases As today’s students have accounts in different mobile networks, we have to take into account the need to support multiple delivery paths to an end device. In other words, it is important to consider not only the differences among the end devices but also the different infrastructures of different networks. That includes different formats, rights definitions, content handling, DRM rules and license format (for example, while one operator may support a Creative Commons license, others may not). Understanding that the mobile learning deals with the multitude devices in the context where students use different mobile operators, there is a need to define an interoperable solution that would enable students to participate in m-learning environments within a network they already use with their mobile devices, instead of a learning institution forcing that choice on the students. As mentioned earlier, in a situation where a University wants to enable m-learning to include the collaboration and content sharing while maintaining the content copyrights, it has to assure equal treatment of any mobile network in the area. In addition, it needs to assure that the mobile networks allow file sharing with the different networks, while maintaining the copyrights. 
Today, that is not the case, as there is no way to transfer the copyrighted content across the mobile networks anywhere in the world. With that in mind, we explore two m-learning use cases, which represent the main issues within the context of the mobile learning and mobile content delivery of copyrighted materials: 1) A student downloads a learning unit that he/she wants to share with colleagues from the same learning group. In most cases, students will be using different mobile networks, making the content sharing or superdistribution impossible (under the assumption that even the originating network supports the superdistribution, which is not always the case). If an inter-carrier gateway is used, which consists of Rights management and Rights translating elements, we can get the rights properly translated and content delivered onto the other network to end users. 2) When a student wants to move to another mobile network, the question arises about what will happen with the already acquired multimedia content. Again, we have to look into another new element, an online vault or digital locker that would contain the rights information for all the users. If that locker were unified across Canada, for example, the move from one to another network would not affect the ability of students to have access to their previously acquired content. III. PROOF-OF-CONCEPT FRAMEWORK We look into two examples of different possible solutions for the use cases, i.e. inter-carrier gateway and online vault - unified Digital Locker. A. Inter-Carrier Gateway The “inter-carrier gateway” uses the similar approach that is already in use for SMS service across the world. An additional network element deals with the rights and licenses for a specific content and translates information from one to another operator. This inter-carrier gateway will be used every time there is content going from one to another mobile network, without any DRM concerns. The copyrighted content would need to be handled with another additional element, making this use case not ideal for a mobile learning usage. B. Online Vault - Unified Digital Locker The online vault is responsible for managing the users' owned media library for authorized PCs and mobile devices. Each operator is responsible for delivering the media from the online vault to the users’ authorized mobile device and/or a PC in their proprietary formats. This solution integrates with each operator storefront for validation (using the unified academic authentication front-end). As we can see, this use case is better suited for the case of mobile learning which we have defined previously as the research focus. The online vault can be considered as a key interoperability factor that enables universities to assure equal treatment of all students. In addition, by making it more flexible, we can prepare the mobile learning environment to react to new trends, such as previously mentioned collaboration and file sharing among predefined group. C. Proof-of-Concept Framework - dMLRiD Considering the chosen use case of online vault, this project makes certain assumptions. Instead of having a single licensing authority, we propose a Data Depository object (element), storing information needed by a specific university. While that allows for a possibility that the multiple University Data Depositories for mobile learning may exist, our proposed framework is flexible enough to allow different implementations. 
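To make the depository idea more concrete, the sketch below shows what a single rights-locker entry in the dMLRiD might look like when expressed in XML, the notation the paper uses for its object definitions. The element and attribute names here are purely illustrative assumptions; they are not the schema defined by the paper.

```xml
<!-- Hypothetical dMLRiD rights-locker entry; element names are illustrative,
     not the schema used in the paper. -->
<rightsEntry id="entry-0001">
  <student id="student1" homeNetwork="operatorA"/>
  <content id="content1" provider="CP1" type="learning-object"/>
  <permissions>
    <play/>
    <forward maxCount="1"/>   <!-- superdistribution to one classmate -->
  </permissions>
  <constraints>
    <group>class-mobile-learning</group>
    <expires>2025-06-30</expires>
  </constraints>
</rightsEntry>
```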
As a result, we have the desired architecture of our PoC environment, which focuses on the dMLRiD element of the framework, whose functionality properly depicts the innovation of our proposed framework. Fig. 3 illustrates the dMLRiD in the context of communication, making the requirements for the needed environment even more prominent.

D. Test Case

During the PoC, a prototype interoperable DRM environment database is built, with scripts to simulate the interfaces within the needed call flows. The applications are tested to evaluate their functionality in a Linux CMS Server and a Windows environment. Each test case is based on the Use Case analysis, and the results (XML files, SQL commands, DB structure, and the license example) are documented. For the Use Cases, we analyzed the typical mobile learning case, in which students collaborate within a class, exchanging a learning object. The students could be using the same or different mobile networks. It is important to understand that, from the functional perspective, our proposed framework does not differentiate between those two cases (the same or a different network). This is one of the main advantages of our approach: by adding just another layer in the logical structure of the framework, we can focus on the main environment functionality – enabling the transfer of DRM-protected files between students within the mobile learning context. Fig. 4 illustrates the specific architecture of our Use Case.

The scope of this research project limits the ability of the PoC to exercise the full Use Case or to explore the complete functional test case(s); in this paper, we use superdistribution with translation as the test case. Our goal is, then, to show the communication between the objects, and not to deliver the content to any end destination, as the actual enhancements of this newly proposed framework are described by the communication flow. We further assume that there is an external translation service, and we only deal with the process of superdistribution if Content Provider (CP) 2 contains the needed content. In that case, we needed a mechanism to handle the content information and to communicate with CP1 and CP2 in order to make this content superdistribution possible. Fig. 5 shows the possible call flow.

In order to use content superdistribution, both students need to be registered in the RightsDepository (dMLRiD) system. The RightsLocker will contain the profiles for the whole class – in our case, just two students. Student 1 acquires or creates a DRM-protected content item (Content 1) that she/he wants to share with a colleague student from the same class. Student 1 does not need to know which network Student 2 is on. Assuming that the content is not in the DataDepository, the license is translated into rights and then, at the appropriate end, into the proper license that allows Student 2 to use the content. Translation is assumed to be done by an external system (XAL), with the needed approval from a CP, and the test case does not deal with that part. It is assumed that CP1 grants Student 1 the Forward right for the specific Content 1. The Depository acknowledges the content's availability in network 2 (DB lookup) and pushes the rights (translated into a license by the Translators) to CP2 over an external translation element; hence the test is to push the content rights information. IV.
DISCUSSIONS AND FUTURE WORKS This research offers a new framework as a possible solution for the issues of interoperability and collaboration in mobile learning including use of the copyright-protected content. The major strength of the proposed framework is to position the role of the University in the m-learning value chain as a policy setter, not implementer. The new framework introduces a critical new approach that while the University may have a Web portal page for m-learning material, it should not host or provide the complete delivery service. Instead, the University should let the content providers handle that role. The next step would be to prepare the content to be assimilated in the processes of each mobile operator for delivery in their mobile network. One possible process would be to apply for the access to a mobile operator’s service platforms, with appropriate interfaces information exchanged. Upon getting the access, AU would work with a content aggregator in order to register the content with the mobile operator, to enable its delivery across the mobile network. REFERENCES
{"Source-Url": "http://www.ijiee.org/papers/294-S0149.pdf", "len_cl100k_base": 4153, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 15067, "total-output-tokens": 4783, "length": "2e12", "weborganizer": {"__label__adult": 0.0005707740783691406, "__label__art_design": 0.0008869171142578125, "__label__crime_law": 0.0014848709106445312, "__label__education_jobs": 0.108154296875, "__label__entertainment": 0.00017714500427246094, "__label__fashion_beauty": 0.00031876564025878906, "__label__finance_business": 0.001621246337890625, "__label__food_dining": 0.0006432533264160156, "__label__games": 0.0011034011840820312, "__label__hardware": 0.0018167495727539065, "__label__health": 0.0013828277587890625, "__label__history": 0.0008859634399414062, "__label__home_hobbies": 0.00021338462829589844, "__label__industrial": 0.0008382797241210938, "__label__literature": 0.0009026527404785156, "__label__politics": 0.0006585121154785156, "__label__religion": 0.0006952285766601562, "__label__science_tech": 0.13720703125, "__label__social_life": 0.0003552436828613281, "__label__software": 0.059722900390625, "__label__software_dev": 0.67822265625, "__label__sports_fitness": 0.0004091262817382813, "__label__transportation": 0.00113677978515625, "__label__travel": 0.0004987716674804688}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23118, 0.00774]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23118, 0.35964]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23118, 0.91541]], "google_gemma-3-12b-it_contains_pii": [[0, 4856, false], [4856, 10377, null], [10377, 15812, null], [15812, 18875, null], [18875, 23118, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4856, true], [4856, 10377, null], [10377, 15812, null], [15812, 18875, null], [18875, 23118, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23118, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23118, null]], "pdf_page_numbers": [[0, 4856, 1], [4856, 10377, 2], [10377, 15812, 3], [15812, 18875, 4], [18875, 23118, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23118, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
93a8a29dcda1895738c4add5aee2076dd65c5e02
Introduction - The goal here is to illustrate some concepts in computer graphics - The tool we will use is libGDX, a cross-platform game development environment - libGDX library provides six interfaces to abstract away platform details - Application, Files, Input, Net, Audio, Graphics - The graphics library wraps OpenGL ES or WebGL - OpenGL has emerged as a standard library for graphics; ES = Embedded Systems - OpenGL ES available on Android and iOS - WebGL is a Javascript API that conforms to OpenGL ES Cross-platform - libGDX targets Desktop, Android, HTML5, and iOS - Desktop via LWJGL (Lightweight Java Game Library) - Android via Android SDK - HTML5 via GWT (Google Web Toolkit) - Java -> Javascript - iOS via RoboVM - Java -> Objective-C - For an alternate intro to libGDX try “2D Game Development with libGDX” from Udacity interface Application - According to the libGDX API: “An Application is the main entry point of your project. It sets up a window and rendering surface and manages the different aspects of your application, namely Graphics, Audio, Input and Files. Think of an Application being equivalent to Swing’s JFrame or Android’s Activity.” - Application is an interface which is implemented by one of the following: - JglfwApplication (Desktop) - AndroidApplication (Android) - GwtApplication (HTML5) - IOSApplication (iOS) • The Application interface and the corresponding XXXApplication (e.g., AndroidApplication) classes exist and don’t need to be modified • Create your own app by implementing ApplicationListener **App Lifecycle** ![App Lifecycle Diagram] Listeners and Adapters (Java Concept) - Usually a “Listener” in Java responds to events - e.g. in Swing interface MouseListener defines the following methods: - mouseClicked, mouseEntered, mouseExited, mousePressed, mouseReleased - This is really just another flavour of the Observer pattern - But what if you only care about “mouseClicked” events? Your concrete Listener has to define all 5 of the methods above - To avoid this the abstract class MouseAdapter is defined which provides empty methods for all of these - Now your concrete Listener can extend MouseAdapter instead of implementing MouseListener and you define only the methods you want About Starter Classes - For each platform (iOS, Android, Desktop ..) 
a starter class must be written
- Starter classes are platform dependent
- We will focus on
  - Desktop (LWJGL)
  - Android

Starter Classes: Desktop

```java
// This is platform specific: Java SE
public class DesktopStarter {
    public static void main(String[] argv) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.title = "...";
        config.width = 480;
        config.height = 320;
        new LwjglApplication(new MyGame(), config);
    }
}
```

Starter Classes: Android

```java
import android.os.Bundle;

import com.badlogic.gdx.backends.android.AndroidApplication;
import com.badlogic.gdx.backends.android.AndroidApplicationConfiguration;

public class AndroidLauncher extends AndroidApplication {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        AndroidApplicationConfiguration config = new AndroidApplicationConfiguration();
        // hand our ApplicationListener (the game) and the configuration to libGDX
        initialize(new MyGame(), config);
        this.log("test", "success");
    }
}
```

Android Manifest

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.mygdx.game.android">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.VIBRATE" />
    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="MyGame">
        <activity
            android:name="com.mygdx.game.android.AndroidLauncher"
            android:configChanges="keyboard|keyboardHidden|orientation|screenSize">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
```

Android Permissions
- Add permissions if your Android app requires certain functionality
  - `<uses-permission android:name="android.permission.RECORD_AUDIO"/>`
  - `<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>`
  - `<uses-permission android:name="android.permission.VIBRATE"/>`
- Add these to the manifest file (see Project Setup)

Project Setup
- Rather than craft your own Starter Classes, I recommend using the project generator (gdx-setup.jar) referred to here:
  - `https://github.com/libgdx/libgdx/wiki/Project-Setup-Gradle`
- To run on the desktop:
  - Click Run -> "Edit Configurations..."
  - Click "+" in the upper-left corner
  - Select "Gradle" (the build system)
  - Change "Name" to Desktop
  - Set "Tasks" to desktop:run
- With "Desktop" selected, hit the play button
- If everything works, this should appear:

Example: “A Simple Game”
- This example was adapted from https://github.com/libgdx/libgdx/wiki/A-simple-game

A simple zen-like game with no end:
- Catch raindrops with a bucket on the bottom of the screen.
- Raindrops spawn randomly at the top of the screen every second and accelerate downwards.
- Player drags the bucket horizontally via the mouse/touch or by the keyboard using left and right cursor keys.
The following classes are defined in core/src/com.mygdx.game. First, the default MyGdxGame template:

```java
package com.mygdx.game;

import /*NOT SHOWN*/

public class MyGdxGame extends ApplicationAdapter {
    SpriteBatch batch;
    Texture img;

    @Override
    public void create() {
        // entities are drawn in a batch to optimize their processing
        batch = new SpriteBatch();
        // images are loaded from files
        img = new Texture("badlogic.png");
    }

    @Override
    public void render() {
        // Two lines of actual OpenGL: clear the screen
        Gdx.gl.glClearColor(1, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        // draw entities (here an image) in a batch
        batch.begin();
        batch.draw(img, 0, 0);
        batch.end();
    }

    @Override
    public void dispose() {
        // Isn't Java supposed to have garbage collection (GC)?  Unfortunately, GC
        // is unpredictable and costly.  If large resources (e.g. images) were subject
        // to GC it could cause game lag.  Also, objects allocated outside the JVM
        // (e.g. by calling C++ code) are not GC'd.  So large resources are disposed
        // of explicitly.
        batch.dispose();
        img.dispose();
    }
}
```

Notes from the slides:
- Images are to be loaded from files.
- Having a camera enables manipulating the view independently of the world. Two choices:
  - PerspectiveCamera: distant objects will appear smaller. Good for 3D.
  - OrthographicCamera: the scene is projected onto a plane. Good for 2D.

The Drop game itself (render() is shown in three parts below):

```java
package com.mygdx.game;

import /*NOT SHOWN*/

public class Drop extends ApplicationAdapter {
    private Texture dropImage;
    private Texture bucketImage;
    private SpriteBatch batch;
    private OrthographicCamera camera;
    private Sprite bucket;
    private Array<Sprite> raindrops;
    private long lastDropTime;
    // The width and height of the screen -- assumed not to change
    // (otherwise define resize).
    private int width, height;

    @Override
    public void create() {
        // load the images for the droplet and the bucket, 64x64 pixels each
        dropImage = new Texture(Gdx.files.internal("droplet.png"));
        bucketImage = new Texture(Gdx.files.internal("bucket.png"));

        width = Gdx.graphics.getWidth();
        height = Gdx.graphics.getHeight();

        // create the camera and the SpriteBatch
        camera = new OrthographicCamera();
        camera.setToOrtho(false, width, height);
        batch = new SpriteBatch();

        // create a Sprite to logically represent the bucket
        bucket = new Sprite(bucketImage);
        // centre the bucket horizontally
        bucket.setX(width / 2 - bucket.getWidth() / 2);
        // bottom left corner of the bucket is 20 pixels above the bottom screen edge
        bucket.setY(20);

        // create the raindrops array and spawn the first raindrop
        raindrops = new Array<Sprite>();
        spawnRaindrop();
    }

    private void spawnRaindrop() {
        Sprite raindrop = new Sprite(dropImage);
        raindrop.setX(MathUtils.random(0, width - raindrop.getRegionWidth()));
        raindrop.setY(height);
        raindrops.add(raindrop);
        lastDropTime = TimeUtils.nanoTime();
    }
```

render method: part 1 / 3

@Override
public void render() {
    // clear the screen with a dark blue color.
    // The arguments to glClearColor are the red, green, blue, and alpha components,
    // in the range [0,1], of the color to be used to clear the screen.
    Gdx.gl.glClearColor(0, 0.2f, 1, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    // tell the camera to update its matrices.
    camera.update();

    // tell the SpriteBatch to render in the coordinate system specified by the camera.
    batch.setProjectionMatrix(camera.combined);

    // begin a new batch and draw the bucket and all drops
    batch.begin();
    bucket.draw(batch);
    for (Sprite raindrop : raindrops) {
        raindrop.draw(batch);
    }
    batch.end();
    ...
}

render method: part 2 / 3

    // process user input
    if (Gdx.input.isTouched()) {
        Vector3 touchPos = new Vector3();
        touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
        camera.unproject(touchPos);
        bucket.setX(touchPos.x - bucket.getWidth() / 2);
    }
    if (Gdx.input.isKeyPressed(Keys.LEFT)) {
        bucket.translateX(-400 * Gdx.graphics.getDeltaTime());
    }
    if (Gdx.input.isKeyPressed(Keys.RIGHT)) {
        bucket.translateX(400 * Gdx.graphics.getDeltaTime());
    }

    // make sure the bucket stays within the screen bounds
    if (bucket.getX() < 0) { bucket.setX(0); }
    if (bucket.getX() > width - bucket.getWidth()) { bucket.setX(width - bucket.getWidth()); }

    // check if we need to create a new raindrop
    if (TimeUtils.nanoTime() - lastDropTime > 1000000000) { spawnRaindrop(); }
    ...

render method: part 3 / 3

    // move the raindrops, remove any that are beneath the bottom edge of
    // the screen or that hit the bucket.
    Iterator<Sprite> iter = raindrops.iterator();
    while (iter.hasNext()) {
        Sprite raindrop = iter.next();
        raindrop.translateY(-200 * Gdx.graphics.getDeltaTime());
        if (raindrop.getY() + raindrop.getHeight() < 0) iter.remove();
        if (raindrop.getBoundingRectangle().overlaps(bucket.getBoundingRectangle())) {
            iter.remove();
        }
    }
    ...
}

Vectors and transformations

For the case of OpenGL, everything that we want to visualize must be composed of primitives. To display anything interesting we will have to take our basic primitives and transform them to form the object of interest. Therefore, transformations are fundamental to computer graphics. We begin with the most common transformations (translation, rotation, and scaling) in 2-D...

Translation

If we represent the vertices of primitives as vectors, translation is easily accomplished by vector addition. Example: Given a triangle with a set of vertex vectors \( V = \{(2,2), (4,6), (6,2)\} \) and a displacement vector \( T = (1,1) \), the resultant vertex set for the triangle is \( V' = \{(3,3), (5,7), (7,3)\} \), i.e.
\[ V' = V + T \]

Scaling

The vectors can be uniformly scaled by simply multiplying each vector by a scalar constant. Note that the full vector (from the origin to the point) will be scaled, so the image will change in both size and position. A differential scaling is also possible, where \( x \) and \( y \) are multiplied by two different factors \( s_x \) and \( s_y \):
\[ x' = s_x x, \qquad y' = s_y y \]
It will be convenient to write this as a matrix multiplication:
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \]
This can be written as \( V' = S \cdot V \), where \( x \) and \( y \) are the components of \( V \).

Rotation

Assume we have a vertex at \((x, y)\) which is to be rotated counterclockwise about the origin by an angle \( \theta \). In polar coordinates, this vertex is at \((r, \phi)\).
We can express the Cartesian coordinates in these terms:
\[ x = r \cos \phi \\ y = r \sin \phi \]
Now the rotation by \( \theta \) can be understood as an addition of angles:
\[ x' = r \cos(\theta + \phi) \\ y' = r \sin(\theta + \phi) \]
We can now make use of the following trigonometric identities:
\[ \cos(a + b) = \cos a \cos b - \sin a \sin b \\ \sin(a + b) = \sin a \cos b + \cos a \sin b \]
to obtain
\[ x' = x \cos \theta - y \sin \theta \\ y' = x \sin \theta + y \cos \theta \]
In vector form, this is written as:
\[ \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \]
or \( V' = R(\theta) \cdot V \)

The three types of transformations have the form:

Translation \( V' = V + T \)
Scaling \( V' = S \cdot V \)
Rotation \( V' = R \cdot V \)

The results of successive rotations and scalings can be obtained by matrix multiplication, but translation cannot. We may have a long sequence of transforms to apply to a vertex \( V \). For example,
\[ V' = S_0 R_0 S_1 (R_1 S_2 V + T_0) + T_1 \]
The same transforms will be applied to many vertices, so they should be done quickly. Observe that the matrices can be composed:
\[ V' = M_0 (M_1 V + T_0) + T_1 \]
where \( M_0 = S_0 R_0 S_1 \) and \( M_1 = R_1 S_2 \). This improves efficiency somewhat, but the translations prevent us from optimizing any more than this. If translation could be described as a matrix multiplication then we could combine all transformations into a single matrix \( M \),
\[ V' = M V \]

**Homogeneous coordinates**

By using homogeneous coordinates we can use matrix multiplication to implement all three basic transformations. With homogeneous coordinates, a third coordinate is added to a point; point \( (x, y) \) is represented as \( (x, y, W) \). Dividing through by \( W \) (the point becomes \( (x/W, y/W, 1) \)) homogenizes the point; in what follows we take \( W = 1 \).

Translation can now be performed with matrix multiplication. Translation by \( (d_x, d_y) \) is represented as
\[ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & d_x \\ 0 & 1 & d_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]

The transformation for scaling remains much the same:
\[ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
...and for rotation:
\[ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta & 0 \\ \sin \theta & \cos \theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]

Successive translations, scalings, and rotations can now be implemented with matrix multiplication. Remember, though, that if the order of transformations is changed the result may also change. (Matrix multiplication is not commutative: \( AB \neq BA \), in general.)

Shearing is another common transformation.
Shearing distorts a primitive by "pushing" it in one direction, as shown:

[Figure: a unit square sheared in the x direction and sheared in the y direction; under the x-shear the corner (0,1) moves to (0.5,1)]

The matrix for shearing is as follows, where \( a_x \) and \( a_y \) are the factors for shearing in the \( x \) and \( y \) directions, respectively:
\[ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & a_x & 0 \\ a_y & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]

The general form for transformations derived from translation, scaling, rotation, and shearing is:
\[ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & t_x \\ r_{21} & r_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \]
where the \( r_{ij} \) correspond to some combination of rotation, scaling, and shearing, and the \( t \)'s correspond to translation.

Recall again that the order of operations is important in applying transformations. For example, if we have a point (or vector) \( V \) and wish to apply translation, then scaling, then rotation, then translation again, we would perform the operations as \( T(R(S(T(V)))) \)

Three dimensional transformations

All of the transformations we have seen have similar representations in 3 dimensions. In fact, all operations can be combined to give a general transformation of the form
\[ \begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}. \]

Example: “Zen Garden”
• “A Simple Game”
  • Rain falls randomly from the sky
  • User controls a bucket to catch raindrops
• Here we invert this setup
  • User controls a cloud in the sky, from which raindrops fall
  • If rain falls on a tree, the tree grows
• Download the code for this example (see notes page)
• The main files:
  • ZenGardenGame (extends ApplicationAdapter)
  • Tree (interface)
  • SimpleTree (implements Tree)
  • RecursiveTree (implements Tree)
• First, consider SimpleTree’s draw method...

```java
public void draw(SpriteBatch batch) {
    // An affine transform is used to represent translation, rotation, and scaling operations.
    Affine2 transform = new Affine2();

    // Initial translation and rotation, bringing us to the base of the tree, pointed upwards.
    transform.translate(baseX, baseY);
    transform.rotate(90.0f);

    // Store the current transform state for use below.
    Affine2 savedTransform = new Affine2(transform);

    drawBranch(batch, transform);
    // ... continued in part 2 below
```

```java
private void drawBranch(SpriteBatch batch, Affine2 transform) {
    // Draw the current branch. We draw it as two halves because transformations such as
    // rotation are made with respect to the lower left corner of the image.
    batch.draw(stickLeft, stickLeftWidth, stickLeftHeight, transform);
    transform.scale(1, -1);
    batch.draw(stickRight, stickRightWidth, stickRightHeight, transform);
    transform.scale(1, -1);
}
```

draw method: part 2 / 2

    // Translate to the first branching point
    transform.translate(stickLeftWidth * 0.86f, 0);

    // Draw the first branch
    transform.rotate(30.0f);
    transform.scale(FIRST_BRANCH_SCALE, FIRST_BRANCH_SCALE);
    drawBranch(batch, transform);

    // Reposition to second branching point by restoring the saved transform.
transform = savedTransform; transform.translate(stickLeftWidth * 0.55f, 0); // Draw the second branch transform.rotate(-30.0f); transform.scale(SECOND_BRANCH_SCALE, SECOND_BRANCH_SCALE); drawBranch(batch, transform); } • On the previous slide we are using the SimpleTree class which draws the basic trunk of the tree and two branches. • RecursiveTree adds the following: • Tree grows in response to water drops falling on it • Tree grows fractally by recursive branching • When at the maximum growth level, berries emerge! • Please see the attached code for details...
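To tie the transformation math back to the library, here is a small standalone sketch (not from the slides) that composes a translate-rotate-scale sequence with libGDX's Affine2 — the same class SimpleTree uses — and applies it to a point in homogeneous 2-D coordinates. The class name TransformDemo and the particular numbers are arbitrary; the point is that order of composition matters, as discussed above.

```java
import com.badlogic.gdx.math.Affine2;
import com.badlogic.gdx.math.Vector2;

public class TransformDemo {
    public static void main(String[] args) {
        // Build T * R * S as a single affine (homogeneous 2-D) transform.
        Affine2 transform = new Affine2();   // starts as the identity
        transform.translate(10f, 20f);       // translate by (10, 20)
        transform.rotate(90f);               // rotate 90 degrees counterclockwise
        transform.scale(2f, 2f);             // uniform scale by 2

        // Apply the composed transform to a vertex, i.e. V' = M * V.
        Vector2 v = new Vector2(1f, 0f);
        transform.applyTo(v);
        // (1,0) is scaled to (2,0), rotated to (0,2), then translated to (10,22)
        System.out.println(v);
    }
}
```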
{"Source-Url": "http://www.cs.mun.ca/~av/courses/5895-current/manual_uploads/GraphicsWithLibGDX_quad.pdf", "len_cl100k_base": 5534, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 30099, "total-output-tokens": 6623, "length": "2e12", "weborganizer": {"__label__adult": 0.0003714561462402344, "__label__art_design": 0.0005888938903808594, "__label__crime_law": 0.0002290010452270508, "__label__education_jobs": 0.0004100799560546875, "__label__entertainment": 8.177757263183594e-05, "__label__fashion_beauty": 0.0001506805419921875, "__label__finance_business": 8.660554885864258e-05, "__label__food_dining": 0.0003478527069091797, "__label__games": 0.0011930465698242188, "__label__hardware": 0.0013704299926757812, "__label__health": 0.0002930164337158203, "__label__history": 0.0002321004867553711, "__label__home_hobbies": 8.571147918701172e-05, "__label__industrial": 0.0003273487091064453, "__label__literature": 0.00018465518951416016, "__label__politics": 0.00016307830810546875, "__label__religion": 0.0005064010620117188, "__label__science_tech": 0.0072021484375, "__label__social_life": 5.9604644775390625e-05, "__label__software": 0.0041351318359375, "__label__software_dev": 0.98095703125, "__label__sports_fitness": 0.0003218650817871094, "__label__transportation": 0.0003898143768310547, "__label__travel": 0.0002231597900390625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21196, 0.02025]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21196, 0.30678]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21196, 0.66796]], "google_gemma-3-12b-it_contains_pii": [[0, 1403, false], [1403, 1642, null], [1642, 2885, null], [2885, 5299, null], [5299, 9328, null], [9328, 11681, null], [11681, 12668, null], [12668, 14199, null], [14199, 16264, null], [16264, 18342, null], [18342, 20291, null], [20291, 21196, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1403, true], [1403, 1642, null], [1642, 2885, null], [2885, 5299, null], [5299, 9328, null], [9328, 11681, null], [11681, 12668, null], [12668, 14199, null], [14199, 16264, null], [16264, 18342, null], [18342, 20291, null], [20291, 21196, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21196, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21196, null]], "pdf_page_numbers": [[0, 1403, 1], [1403, 1642, 2], [1642, 2885, 3], [2885, 5299, 4], [5299, 9328, 5], [9328, 11681, 6], [11681, 12668, 7], [12668, 14199, 8], [14199, 16264, 9], [16264, 18342, 10], [18342, 20291, 11], [20291, 21196, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21196, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
55fd5a5611b6514a7e574dc5b31f21903ce823f7
[REMOVED]
{"Source-Url": "https://www.bibalex.org/ISIS/UploadedFiles/Publications/DLF36_DAR%20Institutional%20Repository%20Integration%20in%20Action.pdf", "len_cl100k_base": 6241, "olmocr-version": "0.1.53", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 24868, "total-output-tokens": 7279, "length": "2e12", "weborganizer": {"__label__adult": 0.0005002021789550781, "__label__art_design": 0.0029296875, "__label__crime_law": 0.0009927749633789062, "__label__education_jobs": 0.030364990234375, "__label__entertainment": 0.0002741813659667969, "__label__fashion_beauty": 0.000324249267578125, "__label__finance_business": 0.0014238357543945312, "__label__food_dining": 0.0004191398620605469, "__label__games": 0.0007429122924804688, "__label__hardware": 0.001929283142089844, "__label__health": 0.0007205009460449219, "__label__history": 0.0022258758544921875, "__label__home_hobbies": 0.0002034902572631836, "__label__industrial": 0.0007672309875488281, "__label__literature": 0.00179290771484375, "__label__politics": 0.0004870891571044922, "__label__religion": 0.0006480216979980469, "__label__science_tech": 0.2841796875, "__label__social_life": 0.000347137451171875, "__label__software": 0.179931640625, "__label__software_dev": 0.4873046875, "__label__sports_fitness": 0.00022602081298828125, "__label__transportation": 0.0007162094116210938, "__label__travel": 0.00046896934509277344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34205, 0.02497]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34205, 0.53574]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34205, 0.90046]], "google_gemma-3-12b-it_contains_pii": [[0, 2783, false], [2783, 6354, null], [6354, 9840, null], [9840, 11953, null], [11953, 13506, null], [13506, 15760, null], [15760, 19123, null], [19123, 22016, null], [22016, 25419, null], [25419, 28698, null], [28698, 31774, null], [31774, 34205, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2783, true], [2783, 6354, null], [6354, 9840, null], [9840, 11953, null], [11953, 13506, null], [13506, 15760, null], [15760, 19123, null], [19123, 22016, null], [22016, 25419, null], [25419, 28698, null], [28698, 31774, null], [31774, 34205, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34205, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34205, null]], "pdf_page_numbers": [[0, 2783, 1], [2783, 6354, 2], [6354, 9840, 3], [9840, 11953, 4], [11953, 13506, 5], [13506, 15760, 6], [15760, 19123, 7], [19123, 22016, 8], [22016, 25419, 9], [25419, 28698, 10], [28698, 31774, 11], [31774, 34205, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 
34205, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
e775c8c0fc9f3c3e9834dd74072143cdc5a68c8d
[REMOVED]
{"len_cl100k_base": 7304, "olmocr-version": "0.1.53", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 45872, "total-output-tokens": 9548, "length": "2e12", "weborganizer": {"__label__adult": 0.00040650367736816406, "__label__art_design": 0.0008606910705566406, "__label__crime_law": 0.0005946159362792969, "__label__education_jobs": 0.0019664764404296875, "__label__entertainment": 0.00019872188568115232, "__label__fashion_beauty": 0.0002446174621582031, "__label__finance_business": 0.0007543563842773438, "__label__food_dining": 0.0004143714904785156, "__label__games": 0.0017986297607421875, "__label__hardware": 0.001567840576171875, "__label__health": 0.0007891654968261719, "__label__history": 0.0006489753723144531, "__label__home_hobbies": 0.0003132820129394531, "__label__industrial": 0.0010690689086914062, "__label__literature": 0.00035643577575683594, "__label__politics": 0.0004243850708007813, "__label__religion": 0.0005698204040527344, "__label__science_tech": 0.42919921875, "__label__social_life": 0.00016796588897705078, "__label__software": 0.0131378173828125, "__label__software_dev": 0.54248046875, "__label__sports_fitness": 0.0005626678466796875, "__label__transportation": 0.0010595321655273438, "__label__travel": 0.0003018379211425781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30639, 0.04799]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30639, 0.31757]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30639, 0.87436]], "google_gemma-3-12b-it_contains_pii": [[0, 201, false], [201, 584, null], [584, 2682, null], [2682, 4595, null], [4595, 7286, null], [7286, 10057, null], [10057, 11818, null], [11818, 11878, null], [11878, 13891, null], [13891, 16810, null], [16810, 20333, null], [20333, 21053, null], [21053, 22706, null], [22706, 23146, null], [23146, 26322, null], [26322, 27184, null], [27184, 28181, null], [28181, 28837, null], [28837, 30507, null], [30507, 30639, null]], "google_gemma-3-12b-it_is_public_document": [[0, 201, true], [201, 584, null], [584, 2682, null], [2682, 4595, null], [4595, 7286, null], [7286, 10057, null], [10057, 11818, null], [11818, 11878, null], [11878, 13891, null], [13891, 16810, null], [16810, 20333, null], [20333, 21053, null], [21053, 22706, null], [22706, 23146, null], [23146, 26322, null], [26322, 27184, null], [27184, 28181, null], [28181, 28837, null], [28837, 30507, null], [30507, 30639, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30639, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30639, null]], "pdf_page_numbers": [[0, 201, 1], [201, 584, 2], [584, 2682, 3], [2682, 4595, 4], [4595, 7286, 5], [7286, 10057, 
6], [10057, 11818, 7], [11818, 11878, 8], [11878, 13891, 9], [13891, 16810, 10], [16810, 20333, 11], [20333, 21053, 12], [21053, 22706, 13], [22706, 23146, 14], [23146, 26322, 15], [26322, 27184, 16], [27184, 28181, 17], [28181, 28837, 18], [28837, 30507, 19], [30507, 30639, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30639, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
c5739de5bb34d2cd53c2f2e47defde58eb3edf1a
Need to Redefine "Value" and Case for a New "Software Valuation" Technique: An Analytical Study

Mohammad Ramzan, Sajid Anwar, Arshad Ali Shahid*
National University of Computer and Emerging Sciences (FAST), A.K. Brohi Road, H11/4, Islamabad, Pakistan

Abstract. In this era of progress and innovation, every product and service has a certain value attached to it. The same is true of software products and services: they deliver value to the stakeholders for whom the product or service has meaning. Software Engineering (SE) deals with the development of quality software that fulfills stakeholders' requirements, and many valuation techniques are used to establish the value of software products and services. The irony, however, is that current software engineering practice is value neutral in essence. Value Based Software Engineering (VBSE) takes into consideration the value assigned to the software product and to the stakeholders who have an interest in that software. Yet VBSE as a theory has not redefined the term "value" for the discipline of software engineering. In this paper, we analyse current definitions of value as well as current valuation techniques, and we present a case for redefining the term "value" and for modifying current valuation techniques for software engineering.

Keywords: value, software engineering, software, process, value based software engineering, requirement engineering

1. Introduction

Software engineering as a stream of knowledge deals with the development of software that meets certain business and technical quality constraints while at the same time meeting the requirements of stakeholders. In other words, software engineering ensures that a certain amount of value is attached to the software being developed. It is an interesting fact, however, that classical software engineering is "value neutral" in essence [1]: it considers neither the value that the product will deliver to stakeholders nor the value it assigns to the stakeholders who have an interest in the development of the product.

In the past two decades or so, many new theories and changes in classical SE knowledge have emerged, and these new concepts have essentially altered the way we perceive SE and the software development process. Value Based Software Engineering is one such theory, which has been around for more than a decade; many well-known researchers have worked on its design and formulation. VBSE essentially incorporates the element of value into otherwise value-neutral SE practices. It is interesting to note, however, that VBSE incorporates the concept of value without first describing what we should mean by value in the domain of software development. While we have no doubt that VBSE is a major and exciting emerging theory with immense potential for redefining SE principles, we believe there is a need to define value in the context of software engineering; only then can we assess the true worth of a software product and of the stakeholders associated with it. It is equally important to examine existing valuation techniques. Many valuation techniques have been used both for traditional business and for software products and services; major examples include Net Present Value, Internal Rate of Return, and sensitivity analysis.
COCOMO and COCOMO II have been designed with the aim of better estimating software. There is an urgent need to investigate the application of all of these approaches, determine their suitability, and present a case for further improvements where necessary. We must also be mindful that many of the concerns we express in this work are already in the minds of software developers, project managers and other stakeholders when they valuate a software product. What we have tried to emphasize is the need for a formal new definition of value and a new valuation technique so that all of these concerns can be accommodated formally.

* E-mail addresses: muhammad.ramzan@nu.edu.pk, sajid.anwar@nu.edu.pk, arshad.a.shahid@nu.edu.pk

In this paper, we put forward a case describing the reasons for and the significance of redefining the concept of value for software engineering. At the same time, we analyse current valuation techniques with the aim of gaining insight and identifying further improvements. The paper is structured as follows. After the brief introduction in Section 1, Section 2 gives a brief literature review describing certain prevalent definitions of value and their significance, along with a study of valuation techniques. In Section 3, we discuss COCOMO and COCOMO II from the perspective of software valuation. Section 4 presents a critique of these definitions and techniques, analysing their strengths and shortcomings. In Section 5, we describe certain preconditions that any new definition of value should satisfy in order to become a suitable definition of value for SE, and we show how these modifications can help to better establish the value of a product based on stakeholder requirements. Conclusions and future work are presented in Section 6.

2. Literature Review

It is important to understand how value has been interpreted in the literature. This section describes in detail certain definitions of value and the valuation techniques applied in modern-day software development.

2.1. What is Value?

The ultimate aim of any industrial knowledge (and the same is true for software engineering) is to create products or services that add value to the existing worth of the stakeholders for whom that service, product or process is designed. If the element of value is excluded, the creation of these products, services or processes is rendered meaningless. Speaking of SE practices, the creators of Value Based Software Engineering theory conclude that "Our aim is to bring such value considerations to the foreground so that software engineering decisions at all levels can be optimized to meet explicit objectives of the involved stakeholders" [1]. What is important in this statement is the identification of value considerations for the purpose of maximizing the objectives of all stakeholders. It therefore seems appropriate to look at what the literature offers in terms of definitions of value.

- According to the Merriam-Webster online dictionary, value can be: a fair return or equivalent in goods, services, or money for something exchanged; the monetary worth of something; relative worth, utility, or importance; or a numerical quantity that is assigned or is determined by calculation or measurement [2].
- The Dictionary of Canadian Economics defines value as "The quantity of one product or service that will be given or accepted in exchange for another" [3].
This definition caters for the most classic form of value assigned to any product or service; value therefore becomes a matter of purely economic significance. According to this definition, value relates to the "quantity" that any product or service can be exchanged for. Although it explains value quite well for almost all classical products or services, it cannot define the value of software products or services nearly as well. Another definition of value comes from the Oxford Companion to Law, which states that "…value may consist of spiritual or aesthetic qualities or in utility in use, or in the amount of money or other goods which could be obtained in exchange for the thing in question…" [4]. This definition, though quite comprehensive in its own right, still leaves many questions unanswered. Lastly, the Dictionary of Sociology defines value as a "…generalized principle of behavior to which the members of a group feel a strong commitment and which provides a standard for judging specific acts and goals" [5]. This definition generalizes the concept of value to such an extent that it is difficult to apply specifically to software engineering.

What all of these definitions fail to cater for can be summed up as follows:

- They rely too heavily on money as the relative unit for describing value.
- The value in all of these definitions is established on the basis of market forces.
- There is no room for accommodating the value of the process, and its maturity, that goes into the development of a product or service.
- The human resources, and their quality, that go into the development of these products or services have no significant role in establishing value.
- The value of a product is established on its business value in the classical sense, which is not applicable to modern products or services such as software.

2.2. "Value" in Software Engineering

The significance of value to software engineering is quite manifest. As Stefan Biffl et al. have aptly highlighted in their work, the ultimate aim of software engineering is to add value to the existing state of affairs through the creation of products, services and processes [1]. The authors also mention the overall negative impact that this whole process can cause if the value consideration remains implicit. We shall briefly describe, in chronological order, the application and evolution of the concept of value in software engineering.

Traditionally, value has been used to describe cost models in software engineering. The first major work to address the concept of value beyond cost models was Boehm's software engineering economics [6]. Subsequently, in 1986, the spiral model was introduced by Boehm after establishing a relationship between value and the software process. McTaggart's work [7], titled "The Value Imperative", resulted in a new way of thinking which was subsequently named the value-based management movement. A result of this movement was an IEEE Software essay titled "When the Pursuit of Quality Destroys Value" by Favaro [8] in 1996, in which Favaro argued that the pursuit of quality should not be the sole aim, as in many cases this pursuit can destroy the value of the product. Later, in another article, the adjective "value-based" was used by Favaro et al. in the software development context, addressing the economics of software reuse [9]. The WinWin model was another software engineering model by Boehm et al.; proposed in 1998, it basically dealt with the concept of requirements negotiation [10].
The formal agenda of Value Based Software Engineering was proposed by Boehm et al. in 2003. This agenda captured the expanding scope of value-based management approaches as well as agile development methods [11].

2.3. Economic Valuation Techniques

Traditionally, software economics has relied heavily on estimating cost. For example, COCOMO II calculates the overall effort of a software project using the following equation:

\[ \text{Effort} = (\text{Personnel})(\text{Environment})(\text{Quality})(\text{Size}^{\text{Process}}) \]

According to Patrick McKenna, this equation captures certain key factors and delivers an estimate of the effort required for a software project [12].
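As a toy illustration of the multiplicative form quoted above, the sketch below plugs invented factor values into the relationship; the numbers are not calibrated COCOMO II cost drivers and the parameter names are assumptions made for the example only.

```java
/** Toy illustration of Effort = Personnel x Environment x Quality x Size^Process. */
public class EffortSketch {

    static double effort(double personnel, double environment,
                         double quality, double sizeKsloc, double processExponent) {
        // Each multiplier scales the nominal effort up (>1) or down (<1);
        // size enters with an exponent, so effort grows faster than linearly
        // when the process exponent is above 1.
        return personnel * environment * quality * Math.pow(sizeKsloc, processExponent);
    }

    public static void main(String[] args) {
        // Hypothetical project: 50 KSLOC, slightly unfavourable team and tools,
        // nominal quality requirements, diseconomy-of-scale exponent 1.10.
        double personMonths = effort(1.2, 1.1, 1.0, 50.0, 1.10);
        System.out.printf("Estimated effort: %.1f person-months%n", personMonths);
    }
}
```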
Such equations have been the mainstay of many researchers, such as those working with IBM Rational. However, these studies and applications have covered only the cost side of software economics [13]; the value side has been covered with the help of VBSE by Barry Boehm and the researchers who have worked on it. Many valuation techniques can be found in the literature. Though originating from economics, they are equally applicable in the field of software engineering, and many of them have been used quite effectively for valuating software projects. One major valuation technique is return on investment (ROI): value can be directly calculated using the ROI technique, which determines the time to payback [13]. Net Present Value (NPV) is another major valuation technique, which measures the profitability, and hence value, of the product according to certain statistical techniques. Internal Rate of Return (IRR) is used in capital budgeting and has been a favourite technique for valuating software projects for quite some time. Sensitivity analysis is performed to show which kinds of uncertainty can have a serious effect on the value of a software product. The Monte Carlo simulation method for the valuation of a software product uses random numbers and a probabilistic approach to generate multiple possible scenarios and to establish the value of the product according to the probabilities of these scenarios; the final result of the simulation is the product's value, which can be quite impressive provided sufficient data and a realistic model are used to run the simulation [13].

3. COCOMO and COCOMO II as Means for Software Valuation

The COnstructive COst MOdel (COCOMO and COCOMO II) has provided the leading estimation techniques for software products and services, and in this capacity it can be used for the valuation of software as well. COCOMO was first published in 1981 by Barry W. Boehm in his book Software Engineering Economics as a model for estimating effort, cost, and schedule for software projects. The work was based on a study of 63 projects at TRW Aerospace, where Boehm was then Director of Software Research and Technology. The study examined projects ranging in size from 2,000 to 100,000 lines of code, and programming languages ranging from assembly to PL/I. This model is often referred to as COCOMO 81. COCOMO II was developed in 1997 and first appeared in published form in 2001 in the book Software Cost Estimation with COCOMO II. As a successor to COCOMO 81, COCOMO II is better suited to estimating modern software development projects; more importantly, it caters to software developed with process models other than the waterfall model and with newer programming languages. It established its utility in the era of desktop development, code reusability and the use of off-the-shelf software components.

Here, we are more interested in understanding how COCOMO II works and how it can be used for valuation purposes. COCOMO II allows one to estimate the cost, effort, and schedule when planning a new software development activity. The model has three major components, each catering to a different level of abstraction and detail; these sub-models are called the Applications Composition, Early Design, and Post-Architecture models. COCOMO II was designed to meet the following objectives:

- Investment and financial decision making based on software development effort
- Project budgeting and scheduling
- Trade-off negotiation
- Risk management
- Decisions on the level of reusability and on the legacy software inventory
- Setting mixed investment strategies
- Process improvement strategy

COCOMO II offers many advantages for estimating effort or cost. The main advantage is its status as an industry standard: its easy availability and well-understood estimation process make it ideal for these purposes. The presence of tool support is another highlight of this model, and with extensive research on it going on all over the world, it is well positioned to remain the standard estimation approach for a long time to come. At the same time, one major drawback of the COCOMO model is that it is not well suited to small-scale development efforts.

However, when we talk of the valuation process, we are not simply discussing cost or effort estimation; moreover, this estimation is essentially aimed at the development of the product. Our actual aim in valuation should be to reach a model in which these estimates can be used to establish a realistic value for the software. We also need models that discuss the valuation of the product not just from its development perspective but also from other perspectives such as utility, applicability, and requirements fulfillment. Without such mechanisms, we shall always be tempted to value the product based on the total cost that has gone into its development (whether in effort or time), and many other factors which should play a role in valuation will remain missing. Summing up this discussion, we believe that COCOMO II in its present form can be a useful mechanism for valuation provided certain other parameters (essentially non-cost factors) are incorporated into it.

4. Critique of Contemporary Definitions and Valuation Techniques

In this critique, we present an in-depth analysis of current definitions as well as valuation techniques so that, in the later sections, we can present our observations in the form of guidelines which should be accommodated in any new scheme of things.

4.1. Analysis of Current Definitions of Value

As can be seen from Table 1, current definitions of value have many strengths; however, there are many drawbacks associated with them as well. In this section, we highlight some of those drawbacks which have a significant bearing on software engineering in particular.

- **The definition is too abstract**: As we have seen in all the definitions of value described above, the description is very abstract in nature. The effect of this abstraction is the possible absence of many other diverse properties of value. For example, when we describe the value of any entity in the modern world, we usually want to describe its business value.
However, most software products do not have a business value that strictly meets the above description of business value.

- **The definition of value is relative to money**: The way value is currently defined, it strongly resembles many definitions from the physical sciences, such as mass, energy or pressure. The common aspect of all of these definitions is that they (including value) are relative: each requires some unit to describe it. For example, the definition of mass cannot be fully understood unless the definition of its units, i.e. kilograms or pounds, is used with it. Similarly, the current definition of value cannot be fully comprehended until the definition of its associated unit is understood. In the current scenario, the associated unit for value is usually money: we assign value to a product or service in terms of how much money it can fetch. However, a software product or service need not have a lot of money associated with it to be valuable; in fact, many of the most valuable software products and services have no money associated with them at all. So attaching a unit like money to describe the value of software is neither practical nor justified.
- **Market forces are not the sole determinant of value**: Current definitions of value imply that market forces always have a significant role in determining the value of any product or service. However, there is increasing evidence that the effort that goes into the development of a product or service has its own value. We can cite the example of the development process used for software: many times, the value of the final product relies heavily on the maturity of the process that goes into its development. CMM and CMMI certifications actually evaluate the quality, and ultimately the value, of a software product based on the maturity of the process. This aspect has also not been fully catered for in the current definitions of value.
- **Human and time resources are not properly incorporated**: Goods have been evaluated based on the human resources involved, but most of the time this has played a negligible role in the overall valuation process. Similarly, the time that a product or service takes to develop has not played any significant role in this process. We know that many modern products (including software) have human resources as their major raw material; current definitions have failed to place sufficient significance on this aspect. We also need to establish how time plays a role in establishing value for a product or service.
- **The concept of value prioritization is lacking**: Current definitions of value fail to grasp the fact that the value of the same product can be different for different stakeholders; thus it is not possible to assign one single value to a product. This is particularly true in software engineering, where various stakeholders perceive the same product or service with varying degrees of significance.

<table> <thead> <tr> <th></th> <th>Strengths</th> <th>Limitations</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Establishes the concept of return</td> <td>Fails to determine value of many products, e.g.
software</td> </tr> <tr> <td>2</td> <td>Established and applied practices with practical applications</td> <td>Lack of all stakeholders' participation</td> </tr> <tr> <td>3</td> <td>Notion of utility and desirability</td> <td>Too abstract in nature</td> </tr> <tr> <td>4</td> <td>Covers many aspects of daily life</td> <td>No concept of value prioritization</td> </tr> <tr> <td>5</td> <td></td> <td>Customer satisfaction not incorporated</td> </tr> <tr> <td>6</td> <td></td> <td>Value of the various qualities achieved through the process is not addressed</td> </tr> </tbody> </table>

Table 1. Strengths and limitations of current definitions of "Value"

### 4.2 Analysis of Contemporary Valuation Techniques

As we have shown in Section 2, there are many valuation techniques, from which we have selected five that have been used for the valuation of software projects and products: NPV, IRR, ROI, sensitivity analysis and Monte Carlo simulation. An apt comparison of these and a few more valuation techniques was presented by Nancy Burchfield in a diagram in her work [14], where she highlighted the applicability and advantages of these techniques; this comparison is shown in Figure 1. In this section we analyse each of these techniques and assess their applicability to a better valuation process.

ROI/Payback Period: According to Lutz Prechelt [15], "In the dynamic view, ROI describes the periodically recurring profits (returns) from fixed financial capital (investment). In the static view, ROI describes the one-time income or saving (return) realized as a consequence of a one-time expenditure (investment)". Return on investment is assessed by directly calculating the payback time using the following equation:

\[ \text{Payback time} = \frac{\text{Cost of Project}}{\text{Annual Cash Inflows}} \]

According to this technique, the best-valued product is the one with the lowest payback time: the higher the cash inflows, the smaller the payback time and thus the better the value of the product. When we look more closely at this technique, we see it is plagued with several problems:

- From a purely economic perspective, it ignores future trends of money depreciation or appreciation; thus the value calculated is unrealistic in the first place.
- The technique does not accommodate the concept of risk for a software product; thus the results achieved are heavily discounted from this perspective.

As we have seen in the previous discussion regarding value, this technique too establishes the value of a software product purely on the basis of monetary returns; no other important considerations are entertained.

Net Present Value (NPV): In this technique, the present value of cash outflows is subtracted from the present value of cash inflows. The main improvement of this technique over ROI is that it takes the future movement of currency into consideration; however, just like ROI, its accuracy depends on the reliable availability of cash inflows in the future. According to Investopedia [16], NPV can be calculated as:

\[ NPV = \sum_{t=1}^{T} \frac{C_t}{(1+r)^t} - C_0 \]

where \(C_t\) is the net cash inflow during period \(t\), \(C_0\) is the initial investment, and \(r\) is the discount rate. Net Present Value offers a direct advantage over ROI in the sense that the inflationary factor is taken into consideration. One major problem faced by Net Present Value, as discussed by Joan Pasqual et al. [17], is the non-monotonic nature of the NPV function in most cases. This makes it difficult to interpret the results properly. At the same time, the glaring fact remains that, just like ROI, NPV considers value to be only monetary in nature.
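The payback-time and NPV formulas above can be made concrete with a short sketch; the cash-flow figures below are invented purely for illustration.

```java
/** Minimal sketch of the payback-time and NPV calculations discussed above. */
public class ValuationSketch {

    /** Payback time = project cost / annual cash inflow (uniform inflows assumed). */
    static double paybackYears(double projectCost, double annualCashInflow) {
        return projectCost / annualCashInflow;
    }

    /** NPV = sum_{t=1..T} C_t / (1+r)^t  -  C_0 */
    static double npv(double initialInvestment, double[] cashInflows, double discountRate) {
        double value = -initialInvestment;
        for (int t = 1; t <= cashInflows.length; t++) {
            value += cashInflows[t - 1] / Math.pow(1.0 + discountRate, t);
        }
        return value;
    }

    public static void main(String[] args) {
        double cost = 100_000;                        // hypothetical project cost
        double[] inflows = {40_000, 40_000, 40_000};  // hypothetical yearly returns
        System.out.printf("Payback: %.2f years%n", paybackYears(cost, 40_000));
        System.out.printf("NPV at 10%% discount rate: %.0f%n", npv(cost, inflows, 0.10));
    }
}
```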
Internal Rate of Return (IRR): IRR is a mechanism for analysing a major investment in relation to the time value of money. It basically calculates the interest-rate equivalent of the dollar amount returned by the investment. If the prevailing interest rate is known, we can compare it to the IRR of other investments. Speaking specifically of software products or services, it is quite difficult to calculate the amount of return over a certain period of time: the requirements for software products change much more frequently than for many other products, so the predictability of return is quite low. In the absence of a careful mechanism in which experts try to figure out the future trends of the particular domain, relying merely on IRR can be fatally misleading.

Sensitivity Analysis: Whenever we analyse a system, we are interested in knowing how "sensitive" the proposed system is to changes in the values of the various parameters used in the analysis. This is a very fascinating technique for establishing the real value of a product such as software, which shows very dynamic behaviour over its lifespan. According to Lucia Breierova and Mark Choudhari, sensitivity analysis is quite useful for building confidence in a model by studying and applying certain variations or "uncertainties" present in the model's parameters [18]. However, sensitivity analysis poses its own limitations; one major limitation is that it is almost impossible to check all parameters for all possible changes, so a truly exhaustive analysis cannot be performed.

Monte Carlo Simulation: Monte Carlo simulation relies heavily on probabilistic approaches and random numbers to find solutions to our problems. The PMI (Project Management Institute) defines Monte Carlo simulation as "A technique that performs a project simulation many times to calculate a distribution of likely results." In the words of McKenna, "The Monte Carlo Simulation typically uses random number generators to generate multiple scenarios of a model by repeatedly sampling values from the probability distributions for the various input variables" [13]. This approach also shows the possible variance in the value of the product due to the risks involved, which gives more credence to the technique. However, its purely economic sense makes it difficult to establish an all-round value of the product.
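A minimal sketch of the Monte Carlo idea described above: repeatedly sample uncertain inputs from assumed probability distributions and average the resulting product values. The normal distributions and figures used here are invented for illustration only.

```java
import java.util.Random;

/** Sketch of Monte Carlo valuation under invented cost and cash-flow distributions. */
public class MonteCarloValuationSketch {

    public static void main(String[] args) {
        Random rng = new Random(42);
        int runs = 100_000;
        double total = 0;

        for (int i = 0; i < runs; i++) {
            // Sample uncertain inputs (normal distributions assumed for the example).
            double annualInflow = 40_000 + 8_000 * rng.nextGaussian();   // uncertain returns
            double cost         = 100_000 + 15_000 * rng.nextGaussian(); // uncertain cost
            // Value of this scenario: three years of discounted inflows minus cost.
            double value = 0;
            for (int year = 1; year <= 3; year++) {
                value += annualInflow / Math.pow(1.10, year);
            }
            total += value - cost;
        }
        System.out.printf("Mean simulated value: %.0f%n", total / runs);
    }
}
```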
5. Modification Guidelines for a New Definition of Value

Before proposing modifications to the existing definitions of value to make them more suitable for software engineering and its emerging concepts, it is essential to point out the specific aspects which need to be incorporated into the classical definitions. These aspects can work as guidelines towards a new definition. In our opinion, the following aspects need to be considered:

- Abstraction in the definition needs to be reduced so that the definition specifically caters for those aspects which are relevant.
- The relative nature of value with respect to currency or money (as in classical considerations) needs to be replaced with utility and throughput.
- Market forces that affect the value of any service or product need to be redefined.
- The value of a product or service should also cater for the maturity of the effort that goes into its development.
- The quality of the human resources that go into the development of a product or service needs to be used to establish its value.
- A mechanism should be established to assign different values to the same product for different stakeholders.
- It should be established that the degree of accommodation of stakeholders' requirements remains the ultimate mechanism for establishing the value of a product.
- A new definition of value should be a mechanism for determining the stakeholders' satisfaction with respect to the product or service delivered and the process that goes into its development.

<table> <thead> <tr> <th></th> <th>Guideline</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Abstraction must be reduced</td> </tr> <tr> <td>2</td> <td>Relativity with "money" should be replaced with either "utility" or "throughput"</td> </tr> <tr> <td>3</td> <td>Incorporation of the worth of the "maturity" of the process</td> </tr> <tr> <td>4</td> <td>Incorporation of the worth of the "quality" of human resources</td> </tr> <tr> <td>5</td> <td>Value prioritization mechanism</td> </tr> <tr> <td>6</td> <td>User satisfaction</td> </tr> </tbody> </table>

Table 2. Summary guidelines for a new definition of "Value"

We believe that, for the meaningful evolution of new software engineering paradigms like VBSE, it is essential to redefine the concept of value. This process should accommodate all the concerns mentioned above. At the same time, we believe that a new concept of stakeholders' prioritized requirements should be introduced to establish a more realistic value of the software product or service; we have tried to demonstrate this in Figure 2. This concept is also difficult to implement with current definitions of value. Summing all of this up, if we examine the pure definition of value which states "The value of goods is the product of their quantity multiplied by their price", we can easily conclude that this definition does not cater for modern-day software development and engineering.

6. Conclusion and Future Work

The emergence of new paradigms like Value Based Software Engineering is an exciting evolution in the knowledge of software engineering. However, this emergence is not without its challenges; one particular challenge has been the definition of "value" in the context of value based software engineering. Current definitions of value are unable to precisely describe the worth of software products and services. We have presented the case for a new and meaningful definition of value, and we have described the guidelines which should be kept in mind when proposing any modification of the definition of value to better represent modern-day products and services. Work is already in progress to establish a meaningful definition of value. We are also developing a new software valuation schema which accommodates all of these concerns and gives a more realistic value of the software in the early requirements engineering stage of software development. A critical evaluation and appraisal are also needed to further establish the worth of this new definition.

7. Acknowledgments

The authors, Mr. Muhammad Ramzan and Mr. Sajid Anwar, would like to acknowledge the Higher Education Commission (HEC), Govt. of Pakistan, and NU-FAST for providing the funding and resources required to complete this work. It would have been impossible to complete this effort without their continuous support.
8. References
{"Source-Url": "http://ipcsit.com/vol2/75-B258.pdf", "len_cl100k_base": 6232, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 19524, "total-output-tokens": 7193, "length": "2e12", "weborganizer": {"__label__adult": 0.0005965232849121094, "__label__art_design": 0.0004558563232421875, "__label__crime_law": 0.0005321502685546875, "__label__education_jobs": 0.003101348876953125, "__label__entertainment": 8.660554885864258e-05, "__label__fashion_beauty": 0.00024247169494628904, "__label__finance_business": 0.002826690673828125, "__label__food_dining": 0.000568389892578125, "__label__games": 0.0008420944213867188, "__label__hardware": 0.0005598068237304688, "__label__health": 0.0009245872497558594, "__label__history": 0.0002808570861816406, "__label__home_hobbies": 0.0001138448715209961, "__label__industrial": 0.0004346370697021485, "__label__literature": 0.0006594657897949219, "__label__politics": 0.0003981590270996094, "__label__religion": 0.00047397613525390625, "__label__science_tech": 0.01172637939453125, "__label__social_life": 0.00016629695892333984, "__label__software": 0.00504302978515625, "__label__software_dev": 0.96875, "__label__sports_fitness": 0.0003750324249267578, "__label__transportation": 0.0006227493286132812, "__label__travel": 0.0002567768096923828}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33797, 0.02199]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33797, 0.554]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33797, 0.93835]], "google_gemma-3-12b-it_contains_pii": [[0, 4286, false], [4286, 9166, null], [9166, 14779, null], [14779, 18070, null], [18070, 22590, null], [22590, 27399, null], [27399, 30524, null], [30524, 33797, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4286, true], [4286, 9166, null], [9166, 14779, null], [14779, 18070, null], [18070, 22590, null], [22590, 27399, null], [27399, 30524, null], [30524, 33797, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33797, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33797, null]], "pdf_page_numbers": [[0, 4286, 1], [4286, 9166, 2], [9166, 14779, 3], [14779, 18070, 4], [18070, 22590, 5], [22590, 27399, 6], [27399, 30524, 7], [30524, 33797, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33797, 0.09868]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
2041a8c962fff9eeaa9e2057a997e5c6d364d259
END-USER DEVELOPMENT FOR KNOWLEDGE SHARING A Collaborative Web Mapping Application in the First Aid Domain Daniela Fogli and Loredana Parasiliti Provenza Dipartimento di Ingegneria dell’Informazione, Università di Brescia, Via Branze 38, Brescia, Italy Keywords: Collaborative web mapping, End-user development, Participatory design. Abstract: This paper describes FirstAidMap, a collaborative web mapping system for creating, managing and sharing territorial knowledge that can be useful in case of emergencies. The system arises from a design experience we have carried out with representative end users belonging to an association for public assistance and first aid. Volunteers of this association, and specifically ambulance drivers, need to know the characteristics of the territory where they ensure their assistance, in order to reach a given place quickly and in a safe manner. This knowledge is often tacit and usually distributed among the members of the association. Currently, to cope with this problem, paper-based maps are the only means to spread and share knowledge within the association; while training sessions are regularly taken to provide drivers with information about the dangers existing in the territory and possible viability modifications. Representative volunteers participated in the design of FirstAidMap and in the study of end-user development functionalities that could make all volunteers able to contribute their own knowledge and share it with the other volunteers. The resulting system engages and motivates users to participate in map shaping and, at the same time, reinforces the sense of community and individual awareness. 1 INTRODUCTION With the advent of the World Wide Web and particularly of the participatory web, or Web 2.0, (O’Reilly, 2006), maps are increasingly the venue where people with different expertise can meet and share knowledge for a specific purpose. As suggested in (Marcante and Parasiliti Provenza 2008), web maps become social media, where users are not only able to access and modify the information associated with the map, but also to act on the information added by other users, and thus interact directly or indirectly with other people by sharing and exchanging knowledge. Additionally, more and more often web-based maps are collaborative web mapping systems, namely virtual spaces created by end users and totally shaped at their hands. Collaborative web mapping systems, such as Google Maps, allow users to visually define spaces by enabling them to choose what to map according to their own goals, knowledge and practices. Thanks to the contribution of new cartographic content, the resulting map provides a living account of space as a social product of individual embedded knowledge, daily practices, and concerns (Giaccardi and Fogli, 2008). In this sense, collaborative web mapping systems, are intrinsically end-user development (EUD) environments. As effectively summarized by Fischer, EUD “is focused on the challenge of allowing users of software systems who are not primarily interested in software per se to modify, extend, evolve, and create systems that fit their needs” (Fischer, 2010). Indeed, collaborative web mapping systems should encompass socio-technical EUD mechanisms for supporting and encouraging user participation in contributing content and mapping space, especially when the map represents a fundamental knowledge source sustaining users’ daily practices. 
This happens for example in FirstAidMap, a collaborative web mapping system we have designed and developed to satisfy the needs of COSP (Centro Operativo Soccorso Pubblico), an Italian non-profit association for public assistance and first aid. FirstAidMap is a map-based system that supports COSP volunteers in acquiring, creating and sharing knowledge about the territory where COSP ensures its assistance. This knowledge is crucial for ambulance drivers to decide, in case of emergencies, how to reach a given place quickly and in a safe manner. However, knowledge of the territory is often tacit and anyway distributed among COSP volunteers, depending on their interests, attitudes, and experiences. FirstAidMap has thus been conceived as a virtual space that users (COSP volunteers) can directly shape and enrich, thus actively building knowledge on the territory and share it within the community they belong to. While interacting with FirstAidMap to make this virtual space evolve, users behave as end-user developers: indeed, they modify ‘at use time’ the system to satisfy their individual and collaborative needs. Moreover, representative users participated ‘at design time’ in the development of the system. Their contribution was fundamental to create a system not only easy to learn and to use, but also acceptable by the COSP community and trustable by all its members. The paper is structured as follows. Section 2 introduces the first aid domain. Section 3 describes the participatory design activity carried out with representative COSP volunteers; in particular, this section discusses EUD needs emerged during design sessions and the main ideas for satisfying these needs. Section 4 describes the main characteristics and functionalities of the resulting application. Section 5 compares our work with related literature, while Section 6 concludes the paper. 2 THE FIRST AID DOMAIN COSP (Centro Operativo Soccorso Pubblico) is an Italian non-profit association in Mazzano near Brescia for public assistance and first aid (http://www.cospmazzano.it). It includes about two hundred volunteers working together to provide initial care in case of medical emergencies. An ambulance is available 24 hours a day at the COSP’s offices. Volunteers are required to attend a certified course for first aid training. Some of them are trained to drive the ambulance and/or to act as specialized rescuers assisting a nurse or an emergency physician from the local hospital in the provision of first aid. In addition to first aid service, COSP association also offers assistance during sport contexts and demonstrations as well as in transporting patients between places of medical treatment. In this domain, navigator satellite systems, which ambulances are usually equipped with, are not considered sufficient and satisfactory by COSP volunteers to carefully assist ambulance drivers and the whole emergency crews in bringing medical care to serious patients timely. Current navigator systems do not take into account critical issues when suggesting quickest paths to a place, such as roads with humps or uneven road surfaces (really dangerous in case of patients on board), road yards in progress or weekly open-air markets causing detours that can irreparably delay the provision of first aid. Due to these limitations, COSP volunteers do not rely too much on navigator systems, but they rather prefer trusting in their knowledge and expertise of the territory to decide how to reach a given place quickly and in a safe manner. 
Consequently, COSP volunteers keep using traditional paper maps annotated with their comments and notes; however, due to rapid topography updates and the perishable nature of paper, the quick ageing of such traditional maps makes it difficult to access up-to-date information. It is thus evident that a web-based mapping system, customized to the specific needs of the intended user community, may represent an effective solution to the problem at hand.

3 PARTICIPATORY DESIGN OF FIRSTAIDMAP

In a first meeting we had with representative COSP volunteers, they specifically asked for a map-based software system that supports the training of new ambulance drivers, who need to know the characteristics of the region where COSP ensures its assistance. Indeed, a high percentage of interventions are performed in the neighbourhood of Mazzano, which includes about fifteen different villages; as a consequence, drivers often find it difficult to orient themselves in this wide territory, especially where interventions are rarely required or when rural areas must be reached. Regular training sessions thus provide volunteers with information about the dangers existing in the territory, including temporary holdups on the roads, and about the preferred roads leading to different zones. Good, up-to-date a priori knowledge of the territory is crucial for guaranteeing fast interventions. The training activity is usually performed with traditional teaching materials, typically by projecting and describing PowerPoint™ slides with annotated maps of the territory. Therefore, a first goal was to develop a web-based mapping application, called FirstAidMap, to support both instructors during training sessions and drivers in self-training.

The application has been designed following a participatory approach (Schuler and Namioka, 1993). Scenarios and use cases have been used to analyse system requirements with the collaboration of representative users; mock-ups have been prepared and progressively refined to collect feedback and suggestions about the map's look-and-feel and its interaction possibilities. An iterative approach based on the star life cycle (Hix and Hartson, 1993) has been adopted to develop the application. In the following, we first describe the basic requirements identified at the beginning of the project, related to the training activity, and then the needs that emerged during the development of the application, related to more sophisticated activities of knowledge creation and sharing.

3.1 Requirements Analysis

The activities carried out by driver instructors, and more generally by COSP volunteers, are related to their territory and require detailed and up-to-date knowledge about the region where they offer first aid assistance. Therefore, a digital map, as commonly used in different geographical systems, should be the main component of the FirstAidMap application: its digital nature obviously increases the ability of COSP volunteers to explore information on the map compared with the traditional paper-based version. For this application domain, a digital map is needed whose resolution is high enough to make roads, but also buildings and houses of interest, recognizable. Additionally, the map should be easily 'explorable' by users with limited experience and competence in information technologies; consequently, COSP volunteers should be able to easily zoom in, zoom out or pan to better visualize a certain area of the territory.
A digital map, although up-to-date and with a high resolution, does not contain all the information this specific community requires about the territory. From the analysis of the application domain, three types of information have been recognized as crucial for COSP work: zones, points of interest and notifications. They are all necessary to guide ambulance drivers to the place where medical assistance is needed; in other terms, they enrich the geographic map with semantics relevant to the COSP domain. Let us consider the three types of information in turn.

A zone is an area on the map with common characteristics; it groups together several points on the map that satisfy some condition. An example is a set of roads or neighbourhoods reachable through the same ambulance route from the COSP offices. It is described by a name and, optionally, by some users' notes characterizing the area. A point of interest, or briefly POI, is a place on the map, i.e. a fixed and stable element of the territory that acts as a reference point for ambulance drivers and can help them find their way to a place. As in navigator satellite systems, a POI can be a church, a sports ground, a square and so forth; however, it can also be a more specific reference point for an ambulance driver, such as a bridge, a dangerous road or another point of interest relevant to first aid activities. Finally, a notification is a notice about an alert situation that can interfere with first aid interventions. It aims to notify the medical personnel of emergency units about a critical condition occurring in a given place and for a period of time that can hinder the work of COSP volunteers, e.g. work in progress in a specific area of interest or the temporary modification of the road network of a neighbourhood due to a demonstration. Unlike the other types of information, a notification usually has a limited validity (e.g. the closing of a motorway tollbooth due to work in progress) or refers to an event occurring with a certain frequency (e.g. the open-air market that takes place in a square each Wednesday morning). Therefore, notifications should be displayed on the map only in the time frames in which they are active.

All these types of information contribute to supporting the activities of COSP volunteers. However, taken together they can amount to a lot of information, which can confuse the user of the map. Therefore, to avoid information overload, such knowledge needs to be properly organized. A possible solution is to provide FirstAidMap users with all this information organized in four different levels (see Figure 1): (a) level 0 with the digital map (street, satellite or hybrid) retrieved through an available web mapping service; (b) level 1 including the zones created on the map; (c) level 2 with the POIs; and finally (d) level 3 with the notifications associated with the map. Moreover, COSP volunteers should be able to change the level 0 map easily, by choosing among a set of available web mapping services. Finally, they should be able to switch among the four information levels independently, disabling, if needed, those levels they are not interested in.
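To make the three information types and their map levels concrete, the following is an illustrative data model; it is a sketch of one possible representation, with invented field names, not the actual FirstAidMap implementation.

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.util.List;

/** Illustrative model of zones, POIs and notifications and their map levels. */
class MapContentSketch {

    /** Level 1: an area grouping map points with common characteristics. */
    record Zone(String name, String notes, List<double[]> boundary /* lat/lon pairs */) {}

    /** Level 2: a fixed reference point such as a church, bridge or dangerous road. */
    record PointOfInterest(String name, String description, double lat, double lon) {}

    /** Level 3: an alert with limited validity and optional weekly recurrence. */
    record Notification(String name, String description, double lat, double lon,
                        LocalDate validFrom, LocalDate validTo,
                        DayOfWeek recurringDay /* null = every day */, int gravity) {

        /** A notification is shown only in the time frame in which it is active. */
        boolean isActiveOn(LocalDate day) {
            boolean inPeriod = !day.isBefore(validFrom) && !day.isAfter(validTo);
            boolean rightDay = recurringDay == null || day.getDayOfWeek() == recurringDay;
            return inPeriod && rightDay;
        }
    }
}
```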
3.2 EUD Needs

During the development of a first prototype satisfying the requirements described above, further discussions with representative users led to the identification of new usage scenarios, beyond driver training. In particular, a new need emerged: letting COSP volunteers use the system also for simple consultation, to improve their knowledge of the territory, and as a support tool while preparing an emergency intervention, to identify the characteristics of the area around the ambulance's destination. In this new perspective on FirstAidMap usage, user collaboration in map enrichment is crucial. Therefore, we started to study how to 'transform' COSP volunteers from passive users into co-designers of map content. This required providing users with proper tools to enrich the map with significant and up-to-date information, along with functionalities for filtering relevant content, customizing map visualization and monitoring users' activities. Moreover, this had to be achieved without forcing COSP volunteers to become experts in either information technology or cartography, as many commercial geographic information systems require.

To face this problem, the ideas and tools proposed in the end-user development field have been considered. The Network of Excellence on End-User Development, which was funded by the European Commission during 2002-2003, defined EUD as "the set of methods, techniques, and tools that allow users of software systems, who are acting as non-professional software developers, at some point to create or modify a software artifact" (Lieberman et al., 2006). EUD transfers to end users part of the activities that are traditionally performed by software developers, such as software design, implementation, customization, and adaptation 'at use time'. In particular, EUD research focuses on people who use software systems as part of their daily life or daily work, but who are not interested in computers per se (Cypher, 1993). They can be technicians, clerks, analysts and managers who often need to "develop software applications in support of organizational tasks" (Brancheau and Brown, 1993), due to new organizational, business and commercial technologies. The main goal of EUD is therefore to study and develop techniques and applications for "empowering users to develop and adapt systems themselves" (Lieberman et al., 2006).

However, the level of complexity of these activities should be appropriate to the users' individual skills and situations, and should possibly allow them to move easily from less complex to more complex EUD activities. In other words, a "gentle slope of complexity" (Meyers et al., 1992) should be guaranteed, meaning that big steps in complexity should be avoided and a reasonable trade-off between ease of use and functional complexity should always be kept in the system. In this way, EUD functionalities are made available to users progressively, without forcing them to learn advanced functionalities too soon. EUD functionalities should neither be intrusive nor distract users from their primary task; at the same time, they should encourage users to experiment with system adaptation and modification, by requiring the same cognitive effort needed for the basic functionalities. To integrate EUD tools into FirstAidMap while guaranteeing a gentle slope of complexity, the classes of potential end-user developers have been identified, and then the EUD functionalities the system should offer have been designed. The next subsection discusses these aspects.
3.3 End-User Developers Classification

To allow COSP volunteers to perform different types of EUD activities in FirstAidMap, we started by analysing: (i) their current practices within the application domain; (ii) their skills and interests in information technologies; (iii) their motivation for collaborating in knowledge sharing on the map. This analysis led us to identify different classes of end-user developers, and for each of them we have designed a suitable interaction experience with FirstAidMap. As described in the next section, the result is a collaborative web mapping application that can be effectively adopted within the COSP domain and in similar contexts.

The classification of end-user developers is based on the following assumptions, which were discussed and agreed upon during participatory design sessions with representative end users. All COSP volunteers should be able to access the information (zones, POIs and notifications) associated with the map, to get to know their territory better and to receive real-time updates (e.g. detours, hazards). Additionally, they may insert, in an easy and immediate way, a new notification to quickly point out a dangerous situation. To this end, COSP volunteers should be able to access the system easily, without any authentication mechanism. In this case, they access the system as **visitor** users, just to explore the current information on the map and possibly signal a danger; they are not allowed to perform more advanced activities. Some volunteers may wish to contribute actively to the updating of map-based information and, consequently, should be able to create and/or modify zones, POIs and notifications, in addition to accessing and exploring the knowledge base as simple visitors. To carry out these activities, volunteers need to authenticate with the system, acting as **contributor** users. Finally, the more active and experienced COSP volunteers should be able to perform more advanced EUD activities, letting both the content and the whole system evolve according to the needs of the COSP population, thus acting as **administrator** users. An administrator is a power user who manages user profiles, system access and all the information associated with the map (POIs, zones and notifications). Furthermore, s/he is responsible for configuring the system according to the COSP volunteers’ needs.

During a meeting with COSP representative users, a further requirement emerged: among COSP volunteers, ambulance drivers should be required to access the map-based information in FirstAidMap before each emergency intervention, in order to check possible alert situations on the way to the emergency site. To monitor drivers’ accesses to the system, a **driver** role has thus been added. Like a visitor, a driver user can access the knowledge base, visualize new notifications, and possibly insert new ones, but s/he is required to authenticate to the system so that his/her activities can be logged.
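The four roles and the activities allowed to each of them can be captured by a small permission table. The sketch below is a hypothetical illustration; the role and permission names are ours, not taken from the system’s actual code.

```python
from enum import Enum, auto

class Role(Enum):
    VISITOR = auto()        # no authentication required
    DRIVER = auto()         # authenticates only so that map consultations can be logged
    CONTRIBUTOR = auto()    # may create and modify zones, POIs and notifications
    ADMINISTRATOR = auto()  # manages users, content types and system configuration

# Permissions per role, as described in the text (hypothetical action names).
PERMISSIONS = {
    Role.VISITOR:       {"browse", "insert_notification"},
    Role.DRIVER:        {"browse", "insert_notification"},  # same as visitor, plus activity logging
    Role.CONTRIBUTOR:   {"browse", "insert_notification",
                         "edit_zone", "edit_poi", "edit_notification"},
    Role.ADMINISTRATOR: {"browse", "insert_notification",
                         "edit_zone", "edit_poi", "edit_notification",
                         "manage_users", "configure_system", "monitor_drivers"},
}

def can(role: Role, action: str) -> bool:
    """Check whether a user with the given role may perform an action."""
    return action in PERMISSIONS[role]

assert can(Role.VISITOR, "insert_notification")
assert not can(Role.DRIVER, "edit_zone")
```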
### 4 FirstAidMap

The resulting application supports EUD activities that allow COSP volunteers to customize the environment and its content, shaping it to what they need. These activities can be categorized as follows:

- Personalization of the map visualization, by filtering the information available and customizing the map appearance. These activities span from choosing the type of map displayed (road, satellite or hybrid) or the web mapping service (Google Maps, Yahoo! or Visual Maps) to selecting the information levels to be shown;
- Creation and management of new content, by adding a multimedia document and a marker (a POI or a notification marker) or defining an area (a zone icon) on the map;
- Modification of the types of content that can be added to the map, and management of the system configuration and user profiles.

The application provides an authentication procedure and, as a consequence, different interaction modalities according to the kind of end user logged into the system. The tools for customizing the map navigation and adding new content are grouped in a set of panels. Some of them can be used also by simple visitor users who, even if not logged in, may interact with the navigation panel and with the panel for inserting new notifications. A separate section can be accessed by administrator users: this section does not support direct interaction with the map, as in the other modalities, but allows creating new kinds of content, managing users, and monitoring drivers’ activities.

4.1 Accessing as Visitor or Driver

Each visitor user can access FirstAidMap in consultation mode. In this mode, the user can navigate the map and access content details (see Figure 2). In particular, the user can interact with the map by clicking on the zoom in/out and pan widgets or by using the mouse wheel and left button. S/he can also select an icon on the map, so that a pop-up window appears displaying its textual details (in the case at hand, the pop-up associated with a notification informs about the closing of a specific square due to an open-air market occurring each Sunday from January to December 2009). On the right of the map there is a navigation panel that allows the user to customize the map visualization by selecting its type (road, hybrid or satellite map) and the web mapping service (Google, Yahoo, Visual Maps). S/he can also filter the map-related information to be displayed (zones, POIs, notifications) and look for a specific place on the map by specifying its address, or jump immediately to a relevant place from a list, such as the COSP offices, by choosing the “Sede” item.

Under the navigation panel there is a notification management panel that allows the user to insert notifications only. By selecting “Inserisci una nuova notifica” (Insert a new notification), the corresponding panel is enlarged to support the user in inserting a new notification, while the only information level displayed on the map is the one with the active notifications. This allows the user to focus her/his attention on notifications. The visitor user can thus enrich the map-based information by adding a new notification marker on the map and characterizing it through a name, a description, a validity period, a frequency (all days or a given week day) and a type that represents its gravity.

The same activities can be carried out in FirstAidMap by driver users who have logged into the system. The only difference is the monitoring activity performed by FirstAidMap transparently with respect to the user; this activity, as required by COSP, makes it possible to check a posteriori whether drivers consulted the map before their emergency interventions.
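A notification as described above — a marker plus a name, description, validity period, frequency and gravity — maps naturally onto a small record with an “is active” test that decides whether it should be displayed on a given day. This is a minimal sketch under assumed names (Notification, is_active, etc.), not the system’s actual data model.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Notification:
    name: str
    description: str
    lat: float
    lon: float
    valid_from: date
    valid_to: date
    weekday: Optional[int] = None   # None = every day; 0 = Monday ... 6 = Sunday
    gravity: str = "low"            # type representing the severity of the alert

    def is_active(self, day: date) -> bool:
        """Shown only within the validity period and, if given, on the stated weekday."""
        if not (self.valid_from <= day <= self.valid_to):
            return False
        return self.weekday is None or day.weekday() == self.weekday

# Example: an open-air market closing a square every Sunday during 2009.
market = Notification("Open-air market", "Square closed to traffic", 45.60, 10.25,
                      date(2009, 1, 1), date(2009, 12, 31), weekday=6, gravity="medium")
print(market.is_active(date(2009, 3, 1)))  # 1 March 2009 is a Sunday -> True
```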
4.2 Accessing as Contributor

More advanced activities can be performed when the user logs into the system as a contributor. In this case, the map view environment is the one shown in Figure 3: a richer set of panels is presented on the right of the map. This set includes the navigation panel described above and three panels to manage (i.e. insert, modify or delete) zones, POIs and notifications, respectively. The user selects an item in a panel to perform a specific action; the corresponding sub-panel is then expanded to show all the information necessary to carry out that action. Only one sub-panel, and thus only one functionality, can be active at a time. This guides the user more clearly during the interaction and reduces the possibility of errors.

The panel for managing zones includes three sub-panels devoted to zone insertion, zone modification and zone deletion, respectively. When one of these sub-panels is selected, the map is automatically refreshed so that only the zone level is visualized. In this way, the user can better understand the information level on which s/he is going to operate. Moreover, in this state of the system, the interaction with the map differs from the interaction allowed by the navigation panel: clicking and moving the mouse pointer on the map in the navigation state drags the map and shows a different portion of the territory, whilst a click on the map in an insertion state creates a new point on the map. This reduces errors and increases user performance while inserting (or modifying or deleting) content, because in each system state only a limited number of actions can be performed and only the widgets necessary for those actions are visualized, without overwhelming the user with too much information and too many tools.

In the case of zone insertion, a sequence of clicks on the map selects the vertices of a polygon, which is automatically created and adjusted after each click. A double-click completes the polygon. Additional information related to the zone, such as a name identifying it and a detailed description, can then be inserted by filling in the form presented in the sub-panel. This form also includes simple instructions that help a non-expert user perform the task. The modification or deletion of a zone is activated by first selecting the corresponding sub-panel of the zone management panel and then clicking on a zone on the map. The shape or the position of the zone can be modified by direct manipulation on the map, while the data associated with the zone, which are automatically loaded and visualized in a form, can be changed simply by editing them. FirstAidMap behaves similarly for managing POIs and notifications. In particular, for POI insertion, the user can also choose the corresponding icon to be visualized on the map by selecting the POI type (church, soccer field, bridge, etc.).
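The click-to-add-vertex interaction for zone insertion can be sketched as a small state object: each click appends a vertex and the polygon is redrawn, while a double-click closes it. The names below (ZoneDraft, on_click, …) are illustrative and are not the application’s actual API.

```python
class ZoneDraft:
    """Collects polygon vertices while the 'insert zone' sub-panel is active."""

    def __init__(self):
        self.vertices = []   # list of (lat, lon) pairs, in click order
        self.closed = False

    def on_click(self, lat: float, lon: float) -> None:
        """A single click in insertion state adds a vertex."""
        if not self.closed:
            self.vertices.append((lat, lon))

    def on_double_click(self) -> bool:
        """A double-click completes the polygon; at least three vertices are needed."""
        if len(self.vertices) >= 3:
            self.closed = True
        return self.closed

    def to_zone(self, name: str, description: str) -> dict:
        """Package the completed polygon with the data entered in the form."""
        assert self.closed, "the polygon must be completed before saving"
        return {"name": name, "description": description, "polygon": list(self.vertices)}
```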
### 4.3 Accessing as Administrator

As a member of the COSP staff, an administrator user will not necessarily be an expert in system administration; s/he will be a power user, with somewhat deeper knowledge of information technologies than the other volunteers. The administrator should therefore be supported in administration activities by easy-to-use tools and user-oriented terminology. For this reason, we also classify administration activities among the EUD activities supported by FirstAidMap. An interesting EUD activity in the hands of an administrator user concerns the application configuration. Figure 4 shows the page devoted to this activity. At the top of the page the user can select the base map to be loaded at application start-up. Then, s/he can manage the types of POIs and notifications by defining new ones or changing the existing ones. The administrator defines a new type by inserting a name and selecting an icon from those available in a group of radio buttons. If the user does not find a suitable icon, s/he can upload a new image to the system, and this image is then added to the available icons. The types of POIs and notifications already existing in the application are shown as a list in the bottom part of the page; each …

5 RELATED WORK

EUD techniques have been used for many years in commercial software, for example macro recording in word processors, formula composition in spreadsheets or filter definition in e-mail clients. However, on the one hand, they are far from being used extensively by a large community of end users and, on the other hand, there is potential for employing EUD techniques in many other application domains and with different levels of complexity (Fischer, 2010). In particular, EUD-based solutions are advocated in cooperative domains similar to the one considered in this paper. For example, in (Cabitza and Simone, 2009) an EUD approach is proposed to facilitate document-mediated cooperative work in the healthcare domain. As far as technical solutions are concerned, component-based approaches for EUD have been proposed in the computer-supported cooperative work field (Mørch et al., 2004). Myers et al. focus instead on natural programming languages and environments that let people program by expressing their ideas in the same way they think about them (Myers et al., 2004). Annotation mechanisms and visual programming through direct manipulation are the main EUD techniques implemented in software shaping workshops (Costabile et al., 2007). A lightweight visual design paradigm is also proposed in (Spahn and Wulf, 2009), where the approach allows business users to create enterprise widgets. Moving from domain-oriented systems to more general web-based applications, the approach presented in (Da Silva and Ginige, 2007) is based on the definition of a meta-model of web applications and a set of form-based tools that end users can employ to customize and shape their applications. A form-based approach is also proposed in (Fogli, 2009) as a way to support the development of e-government services by administrative employees, who have no competence in information technology and are not interested in acquiring it. Anslow and Riehle (2008) propose the adoption of Web 2.0 technologies, such as wikis, regarded as a platform to support end users not only in contributing content but also in performing computational tasks, such as the development of business queries. Along the same line, it has been observed that mashup makers also include much support for EUD (Grammel and Storey, 2008), usually adopting a model based on composition.

In this work, we have capitalized on these proposals by adopting direct manipulation and form-based interaction as the basic means to implement EUD functionalities in a collaborative web mapping system. Other systems with many similarities to ours are available on the Web. For example, Google Maps enables users to create personalized maps and share them with relatives and friends. In particular, users can create their own maps by using place markers, shapes, and lines to define a location, an entire area, or a path.
However, the interaction with its tools for map personalization is much more programmer-oriented than in our system, with terminology and an interaction style that can be intimidating for our classes of users (especially drivers and contributors). Other systems, such as WikiMapia (www.wikimapia.org) or OpenStreetMap (www.openstreetmap.org), are oriented more toward the creation of a social network than toward a virtual place in which to accumulate and share knowledge for a specific, common purpose. WikiMapia allows registered users to select interesting places by drawing polygons and to add textual notes about the places, as well as images. Other users can see places, read annotations and add comments. Registered users can also vote on an annotation; if an annotation receives more than one vote against it, it is deleted. This mechanism, which WikiMapia uses to control the correctness of annotations, is a typical feature of social networks. YourHistoryHere (www.yourhistoryhere.com) is another map-based wiki built on the Google Maps API. It is similar to WikiMapia, since it enables users to mark a place with a flag and to add textual annotations telling the history of that specific place. Other logged-in users can comment on the history. In all these examples, the users who comment on or edit annotations constitute informal groups, characterized by common interests or common knowledge about a same place. With respect to such systems, FirstAidMap has been designed in a participatory way in collaboration with its intended users; therefore, its functionalities for knowledge creation and sharing are tailored to the users’ skills and experiences and aimed at satisfying the specific needs of the community. For example, the notion of levels and the different kinds of information to be made available at the users’ pace emerged from the discussions with representative users, as did the distinction between POIs and notifications. We argue that these characteristics may better sustain user participation in knowledge creation and sharing, and thus increase the usefulness and meaningfulness of the application.

6 CONCLUSIONS

The experience described in this paper highlights the active role end users may play with respect to a software system both ‘at design time’ and ‘at use time’. However, while user-centred and participatory design approaches are considered by HCI scholars to be consolidated and successful practices for interactive system development, only in recent years has end-user development (EUD) received adequate attention. Differently from participatory design, EUD stresses the role of users as co-designers of their systems ‘at use time’, and not only ‘at design time’. To support user participation during system usage, Fischer and colleagues (2001) argue that software systems should be designed as living entities, able to grow at the hands of users as if they were seeds. FirstAidMap can be regarded as a seed composed of a software system (the technical component) and its users (the social component). As a seed, both of its components are able to evolve. In fact, COSP volunteers not only shape the technical environment and make it grow by designing zones or adding POIs and notifications, but they also make their own community evolve.
Indeed, volunteers become aware of the importance of their knowledge: they become better observers of the territory around them and more willing to inform themselves about potential dangers, in order to share the acquired information with the other volunteers. The sense of community can also change as a consequence of these activities. We argue that this model can be applied in all those situations where: (i) a community exists or may potentially exist; (ii) knowledge is distributed among community members; (iii) knowledge changes and evolves in an unpredictable and non-monotonic way; (iv) sharing knowledge comes before possessing knowledge per se. To achieve these goals, the EUD tools we have created for FirstAidMap are functionalities that support user participation in collaborative web mapping, and they have been designed to engage, encourage and motivate users to contribute and share their knowledge. In general, we argue that both usability aspects and social issues should be carefully considered when designing EUD systems for knowledge accumulation and sharing.

FirstAidMap has been developed as an evolutionary prototype, to explore the requirements of the COSP community and verify the feasibility of a software system addressing such requirements. At the moment, heuristic usability evaluation and code testing have been performed by two experts in human-computer interaction and software engineering. The usability problems that emerged in this preliminary evaluation have been solved, and programming bugs are being fixed. We are currently organizing a systematic evaluation of the application through an experiment with COSP volunteers. As far as future work is concerned, we are studying the integration in FirstAidMap of more advanced EUD functionalities, such as the possibility for users to create new information levels. We are also re-engineering the application to make it flexible enough to be easily adapted to other application domains. A portable version of FirstAidMap to be used inside ambulances, endowed with real-time data updating based on the Global Positioning System (GPS), is under study.

ACKNOWLEDGEMENTS

The authors wish to thank the volunteers of COSP-Mazzano for their collaboration. We are also grateful to Francesca Facchetti and Paolo Melchiori for their contribution to the prototype design and implementation. Finally, we would like to thank Maddalena Germinario and Annamaria Percivalli for the usability evaluation and code testing of the version of the application presented in this paper.

REFERENCES
{"Source-Url": "http://www.scitepress.org/Papers/2010/30664/30664.pdf", "len_cl100k_base": 7166, "olmocr-version": "0.1.50", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 28312, "total-output-tokens": 8641, "length": "2e12", "weborganizer": {"__label__adult": 0.0004892349243164062, "__label__art_design": 0.0008974075317382812, "__label__crime_law": 0.0006775856018066406, "__label__education_jobs": 0.0037441253662109375, "__label__entertainment": 0.00011163949966430664, "__label__fashion_beauty": 0.0001876354217529297, "__label__finance_business": 0.0003151893615722656, "__label__food_dining": 0.0005769729614257812, "__label__games": 0.0006284713745117188, "__label__hardware": 0.0011348724365234375, "__label__health": 0.005218505859375, "__label__history": 0.0004820823669433594, "__label__home_hobbies": 0.00011998414993286131, "__label__industrial": 0.0003871917724609375, "__label__literature": 0.0003745555877685547, "__label__politics": 0.0005068778991699219, "__label__religion": 0.00046324729919433594, "__label__science_tech": 0.0333251953125, "__label__social_life": 0.0002796649932861328, "__label__software": 0.043304443359375, "__label__software_dev": 0.90478515625, "__label__sports_fitness": 0.0003712177276611328, "__label__transportation": 0.0013437271118164062, "__label__travel": 0.0004239082336425781}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 40077, 0.02084]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 40077, 0.44031]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 40077, 0.92527]], "google_gemma-3-12b-it_contains_pii": [[0, 4098, false], [4098, 8879, null], [8879, 13740, null], [13740, 18080, null], [18080, 20641, null], [20641, 25593, null], [25593, 28354, null], [28354, 30904, null], [30904, 35935, null], [35935, 40077, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4098, true], [4098, 8879, null], [8879, 13740, null], [13740, 18080, null], [18080, 20641, null], [20641, 25593, null], [25593, 28354, null], [28354, 30904, null], [30904, 35935, null], [35935, 40077, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 40077, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 40077, null]], "pdf_page_numbers": [[0, 4098, 1], [4098, 8879, 2], [8879, 13740, 3], [13740, 18080, 4], [18080, 20641, 5], [20641, 25593, 6], [25593, 28354, 7], [28354, 30904, 8], [30904, 35935, 9], [35935, 40077, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 40077, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
128bcd7d1a124f0883d568adb6eb78721ce188f0
Software Defect Prediction Based on Competitive Organization CoEvolutionary Algorithm

Xiao-dong Mu, Rui-hua Chang, Li Zhang
Xi’an Research Inst. of Hi-Tech, Xi’an, 710025, China
mu_msn@msn.com, sxwcrh@163.com, Zhangli_522@126.com

Abstract

In order to improve the accuracy of prediction on software defect data sets, a competitive organization coevolutionary algorithm is presented and applied to software defect data. In this algorithm, a competition mechanism is introduced into the coevolutionary algorithm. Leagues are then formed according to the importance of the attributes they contain, and three evolution operators — a reduced operator, an allied operator and a disturbed operator — are developed. Furthermore, both the importance of attributes and the evaluation obtained from competition are used in the calculation of the fitness function. Finally, five data sets from the NASA MDP (Metrics Data Program) are used to validate the algorithm. The experimental results show that the proposed algorithm is effective for software defect prediction.

Keywords: Competition, Coevolutionary algorithm, Software defect, Prediction

1. Introduction

With the increase of software complexity, software quality is becoming an important factor in software engineering. In order to raise the effectiveness and efficiency of testing, software defect prediction is used to identify defect-prone modules in an upcoming version of a software system and to help allocate testing effort to those modules. Over the past decades, several empirical studies have been carried out to build fault-proneness models, such as [1-6]. From a holistic point of view, software defect prediction studies can be categorized into statistical and machine learning (ML) approaches, and the use of machine learning approaches for fault prediction modelling is the more popular of the two [7]. Unfortunately, the problem of software defect prediction has not been resolved thoroughly, and none of the techniques has achieved widespread applicability in the software industry, for several reasons: the limitation of testing resources and budget, the lack of software tools to automate defect prediction, the unwillingness to collect software defect data, the fact that many methods are based on private software data, and other practical problems [6].

The genetic algorithm (GA) is an evolutionary computation technique [8-9] and has been applied in many areas, including software defect prediction [5]. However, genetic algorithms are prone to premature convergence and converge slowly. Recently, coevolutionary algorithms have been proposed to address this problem; they consider the coordination relationships between a population and its environment. Owing to the advantages of coevolutionary algorithms, a growing number of researchers have studied them. However, to the authors’ knowledge, research on software defect prediction using coevolution is only at its beginning. In this study, we propose a new method, the competitive organization coevolutionary algorithm (COCA), and apply it to the problem of software defect prediction.

The rest of this paper is organized as follows. First, we introduce related work on software defect prediction. Second, we present the competitive organization coevolutionary algorithm (COCA). Third, we apply the COCA algorithm to the prediction of software defect data and analyse and compare the results. Finally, we give our conclusions and discuss future work.
2. Related Work on Software Defect Prediction

Until now, various machine learning models, such as linear regression, discriminant analysis, decision trees, neural networks and Naïve Bayes, have been developed and applied to predict defects in software. These relatively sophisticated models are preferable to simple linear regression and correlation models because the relationship between defects (the response variable) and static measures (the predictor variables) might not be a monotonic linear relationship [10]. Munson et al. [13] investigated linear regression models and discriminant analysis and concluded that the performance of the latter is better. Nagappan et al. [14] also used linear regression analysis with the STREW metric suite. This suite of metrics is extracted from the testing process and is used to estimate post-release defects. They validated their approach on industrial, open source and student projects and found strong correlations between the proposed metric suite and post-release defects. Catal et al. [15] applied an artificial immune system to software defect prediction in pursuit of high-performance models; they reported that RF (Random Forests) gives the best prediction results for large datasets, while Naïve Bayes is the best algorithm for small data sets. In 2011, Catal et al. [16] developed an Eclipse-based software fault prediction tool for Java programs, with Naïve Bayes chosen as the plug-in for the tool. Norman Fenton et al. [17] pointed out that traditional statistical approaches are inadequate for software defect prediction. In 2007, he used dynamic Bayesian nets to predict software defects [18]; rather than depending only on data from previous versions, his method makes use of causal models that capture the project manager’s understanding of the underlying mechanisms. On open-source software, Denaro and Pezzè [19] analysed Apache using logistic regression with static code features, and their 80% prediction performance pointed to 50% of the modules to be inspected. Olivier Vandecruys et al. [20] tackled software quality problems with Ant Colony Optimization; compared with C4.5, logistic regression and support vector machines, their AntMiner+ model was superior. Bullard et al. [21] employed a rule-based classification model in a telecommunication system and reported that their model produces fewer false positives, which are considered high-cost classification errors.

In brief, these works present promising results. However, ML-based approaches still show two main disadvantages: most prediction models are not easily interpreted by programmers and testers, and most approaches require a preprocessing step. Despite the considerable effort that has been put into developing software defect prediction models, few of them have achieved widespread application in software engineering.

3. Competitive Organization Coevolutionary Algorithm for Defect Prediction

Software defect prediction is usually treated as binary classification of a module: defective or non-defective. In Ref. [3], we proposed a competitive organization coevolutionary algorithm for classification. Considering the similarity between the two problems, in this section we introduce the algorithm and apply it to software defect prediction.

3.1 Calculation of the Fitness Function

The fitness function is one of the most important parts of COCA. In COCA, an individual’s fitness is calculated not only within its population but also through competition among species.
In this algorithm, the population is divided into two competing parts: the species of training data (STRD) and the species of test data (STED).

3.1.1 Calculation of the Fitness Function for STED

The fitness function for STED is calculated mainly through competition, using the reward factor defined in Eq. (1). Suppose the number of leagues in STRD is \(M\) (so there are \(M\) rules), and there are \(N\) test data in STED. For STED, assuming that the number of test data correctly classified is \(N_i\), the reward factor is

\[ \beta_{\text{STED}} = \frac{1}{N - N_i} \qquad (1) \]

From Eq. (1) we can see that the fewer individuals are defeated by their opponents, the greater the reward factor they obtain, because the value 1 is shared among fewer individuals; otherwise they receive a small reward factor. Finally, the league’s fitness value is \( f_{\text{STED}}(x) = \beta_{\text{STED}} \).

3.1.2 Calculation of the Fitness Function for STRD

**Step 1**: Calculate the impact of each attribute on the classification. For any league, if removing an attribute changes the prediction result, the importance of that attribute is large; otherwise it is small. It is defined as in Eq. (2):

\[ \sigma(A') = \gamma_{A}(B) - \gamma_{A - A'}(B) \qquad (2) \]

where \( \sigma \) is the importance of the attributes, \(A\) and \(B\) denote the condition attribute set and the class attribute set respectively, \(A' \subseteq A\) is a subset of attributes, and \( \gamma \) indicates the dependence between attributes.

**Step 2**: According to the importance of the attributes, remove every attribute whose importance is zero; this yields a new attribute set \( \{ C_1, C_2, \ldots, C_k \} \), where \(k\) is the number of remaining attributes.

**Step 3**: Calculate the reward factor. Suppose the number of leagues in STRD is \(M\) (there are \(M\) rules) and that \(\text{League}_j\) correctly identifies \(M_j\) test data. Its reward factor is \( \beta_{\text{STRD}} = 1 / (N - M_j) \).

**Step 4**: The fitness is calculated according to Eq. (3):

\[ f_{\text{STRD}}(x) = \sum_{j=1}^{|x|} \sum_{i=1}^{|U_{\text{str}}|} \sigma_j(A_i) + \beta_{\text{STRD}} \qquad (3) \]

where \( \sigma_j(A_i) \) is the importance of the \(i\)-th attribute of the \(j\)-th individual, \( |U_{\text{str}}| \) is the number of effective attributes of the league, and \( |x| \) is the number of members of the league \(x\). Finally, the value of the league’s fitness function is returned.
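The fitness computation can be sketched in a few lines. Because the paper’s notation leaves some details open, the snippet below fixes one plausible reading: `n_correct` counts the test data correctly classified, `importance_per_member` holds the per-attribute significance values of Eq. (2), and all function names are hypothetical.

```python
def reward_factor(n_test: int, n_correct: int) -> float:
    """Reward factor beta = 1 / (N - n_correct), as in Eqs. (1) and (3).
    Assumes n_correct < n_test; a league classifying everything correctly
    would need a special case (not specified in the paper)."""
    return 1.0 / (n_test - n_correct)

def fitness_strd(importance_per_member, n_test: int, n_correct: int) -> float:
    """Eq. (3): sum of attribute importances over the league members plus the reward factor.

    importance_per_member[j][i] = sigma_j(A_i), the importance of the i-th
    effective attribute of the j-th member of the league."""
    total_importance = sum(sum(member) for member in importance_per_member)
    return total_importance + reward_factor(n_test, n_correct)

# Toy example: a league with two members and three effective attributes each,
# which correctly classifies 70 of 100 test data.
sigma = [[0.2, 0.1, 0.05], [0.3, 0.0, 0.1]]
print(fitness_strd(sigma, n_test=100, n_correct=70))
```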
3.2 The Evolution Operators

In this section, according to the requirements of the proposed method, we present the reduced, allied and disturbed operators, which increase the diversity of the population.

Input: two leagues \(\text{League}_i\) and \(\text{League}_j\), each represented as a matrix whose rows are league members; each row \((V_{C_1}, V_{C_2}, \ldots, V_{C_k})\) lists a member’s values on the condition attributes, and a league \(L\) is composed of multiple similar data records.
Output: the new league data \(L'\).

**Reduced Operator**

**Step 1**: For any \(L_i\), if it meets the conditions \(|L_i| > 1\) and \(V_{C_1} \cup V_{C_2} \cup \ldots \cup V_{C_k} = V_{C_l}\), go to Step 2.
**Step 2**: \(L_i\) is amended by replacing the column of that attribute with \(*\), where \(*\) indicates that the attribute is excluded from the league.
**Step 3**: Return the new league data \(L'\).

**Allied Operator**

**Step 1**: \(L_i\) and \(L_j\) are randomly selected from the same category.
**Step 2**: Form a new league by stacking the members of \(L_i\) and \(L_j\) into a single matrix.
**Step 3**: Return the new league data \(L_i'\).

**Disturbed Operator**

Initialization: if the attribute is discrete, then \(K = 0\); if it is continuous, then \(K = 1\).
**Step 1**: For any \(L_i\), an attribute is randomly selected as the disturbed gene.
**Step 2**: When \(K = 0\), the value is replaced by another value from the attribute’s domain, producing a new individual; then go to Step 4.
**Step 3**: When \(K = 1\), a small random number is generated, and the original value plus or minus this number gives the new individual.
**Step 4**: Return the new league data \(L_i'\).

3.3 Algorithms for Software Defect Prediction

**Step 1**: Initialize and preprocess the data, including balancing the two classes (defective or not) and removing a large number of repeated instances. Initialize the number of evolution steps \(E\), the number of competitions \(C\), and the disturbance probability \(\mu\). Set \(t = 0\), \(k = 1\), \(l = 0\), where \(t\) is the evolution counter, \(k\) is the population counter and \(l\) is the competition counter.
**Step 2**: According to the 2-8 (80/20) rule, select the training data set (STRD) and the test data set (STED). STRD is divided into \(|\text{Class}|\) sub-populations according to the number of class values.
**Step 3**: In each sub-population, the attribute significances are calculated and the leagues are initialized.
**Step 4**: Select \(L_i\) and \(L_j\) randomly, apply the evolution operators above, and calculate the fitness functions. If the condition
\[ f_{\max} = \max\{f_{L_i}, f_{L_j}\}, \quad f'_{\max} = \max\{f_{L_i'}, f_{L_j'}\}, \quad f'_{\max} > f_{\max} \]
holds, then \(L_i'\) and \(L_j'\) replace \(L_i\) and \(L_j\) respectively; otherwise \(L_i\) and \(L_j\) are kept.
**Step 5**: When \(k > |\text{Class}|\), set \(t = t + 1\) and go to Step 6; otherwise go to Step 4.
**Step 6**: Set \(k = k + 1\). If \(t > E\), extract and select the prediction rules; otherwise go to Step 4.
**Step 7**: Let STED and STRD engage in an “arms race”; by referring to the corresponding table of rules, the fitness function values of STED and STRD are modified.
**Step 8**: If the termination condition is met or \(l > C\), the algorithm stops. Otherwise, set \(t = 0\), \(k = 1\) and go to Step 2.
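As a concrete illustration of the operators applied in Step 4, the disturbed operator of Section 3.2 might look as follows. This is only a sketch; the representation of a league member as a plain list of attribute values and all names are our assumptions.

```python
import random

def disturbed_operator(member, domains, sigma=0.1, rng=random):
    """Disturbed operator sketch: pick one attribute at random and perturb it.

    member  : list of attribute values (a row of a league)
    domains : per attribute, either a list of admissible discrete values
              or the string "continuous"
    """
    new_member = list(member)
    i = rng.randrange(len(member))         # Step 1: choose the disturbed gene
    if domains[i] == "continuous":         # K = 1: add/subtract a small random number
        new_member[i] = member[i] + rng.uniform(-sigma, sigma)
    else:                                  # K = 0: replace with another value from the domain
        choices = [v for v in domains[i] if v != member[i]]
        if choices:
            new_member[i] = rng.choice(choices)
    return new_member

# Example: two discrete attributes and one continuous software metric.
domains = [["low", "high"], [0, 1], "continuous"]
print(disturbed_operator(["low", 1, 42.0], domains))
```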
4. Experiments and Analysis

4.1 Description of Data

As in any machine learning problem, software defect prediction models require a set of features (independent variables) to characterize the problem and to estimate the defect proneness of the system (the dependent variable). In software quality, these attributes are referred to as software metrics; they are the raw data of the software domain. An effective management of any software development process requires monitoring and analysis of software metrics. The software metrics and datasets used in this study come from five mission-critical NASA software projects [22], all of which are high-assurance, complex real-time systems. NASA makes extensive use of contractors from many industries, including government and commercial organizations, so it is practical to leverage this information in order to predict the quality of an ongoing similar project. Table 1 summarizes the characteristics of the five datasets used in this study, and Table 2 presents a subset of the metrics used in the five datasets (the full list is omitted for reasons of space).

Table 1. Characteristics of the five NASA MDP datasets

<table>
<thead>
<tr> <th>Data</th> <th>Language</th> <th>Modules</th> <th>Features</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>KC3</td> <td>Java</td> <td>458</td> <td>40</td> <td>Processing and delivery of satellite metadata</td> </tr>
<tr> <td>CM1</td> <td>C</td> <td>498</td> <td>22</td> <td>NASA spacecraft instrument</td> </tr>
<tr> <td>MC2</td> <td>C++</td> <td>161</td> <td>40</td> <td>Video guidance system</td> </tr>
<tr> <td>PC3</td> <td>C</td> <td>1563</td> <td>38</td> <td>Flight software for earth-orbiting satellite</td> </tr>
<tr> <td>PC4</td> <td>C</td> <td>1458</td> <td>38</td> <td>Flight software for earth-orbiting satellite</td> </tr>
</tbody>
</table>

Table 2. A subset of the metrics used in the datasets

<table>
<thead>
<tr> <th>Metrics</th> <th>Type</th> <th>Metrics</th> <th>Type</th> </tr>
</thead>
<tbody>
<tr> <td>V(g)</td> <td>McCabe</td> <td>T</td> <td>DHalstead</td> </tr>
<tr> <td>EV(g)</td> <td>McCabe</td> <td>UniqOp</td> <td>DHalstead</td> </tr>
<tr> <td>IV(g)</td> <td>McCabe</td> <td>UniqOpnd</td> <td>DHalstead</td> </tr>
<tr> <td>LOC</td> <td>McCabe</td> <td>TotalOp</td> <td>DHalstead</td> </tr>
<tr> <td>N</td> <td>DHalstead</td> <td>TotalOpnd</td> <td>DHalstead</td> </tr>
<tr> <td>V</td> <td>DHalstead</td> <td>LOCcode</td> <td>Line Count</td> </tr>
<tr> <td>L</td> <td>DHalstead</td> <td>LOCComment</td> <td>Line Count</td> </tr>
<tr> <td>D</td> <td>DHalstead</td> <td>LOCBlank</td> <td>Line Count</td> </tr>
<tr> <td>I</td> <td>DHalstead</td> <td>LOCCodeAndComment</td> <td>Line Count</td> </tr>
<tr> <td>E</td> <td>DHalstead</td> <td>......</td> <td>......</td> </tr>
<tr> <td>B</td> <td>DHalstead</td> <td></td> <td></td> </tr>
</tbody>
</table>

4.2 Prediction Performance Measures

Evaluation measures [23] play a crucial role both in assessing classification performance and in guiding classifier modelling. After a classification process, data samples can be categorized into four groups, as denoted in the confusion matrix presented in Table 3.

Table 3. Confusion matrix

<table>
<thead>
<tr> <th></th> <th>Predicted defective</th> <th>Predicted non-defective</th> </tr>
</thead>
<tbody>
<tr> <td>Actually defective</td> <td>True Positive (TP)</td> <td>False Negative (FN)</td> </tr>
<tr> <td>Actually non-defective</td> <td>False Positive (FP)</td> <td>True Negative (TN)</td> </tr>
</tbody>
</table>

Several measures can be derived from the confusion matrix; they are presented in Table 4.

Table 4. Performance measures derived from the confusion matrix

<table>
<tbody>
<tr> <td>\( \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \times 100\% \)</td> </tr>
<tr> <td>\( \text{Recall} = \frac{TP}{TP + FN} \times 100\% \)</td> </tr>
<tr> <td>\( \text{Precision} = \frac{TP}{TP + FP} \times 100\% \)</td> </tr>
<tr> <td>\( F\text{-Measure} = 2 \times \frac{\text{Recall} \times \text{Precision}}{\text{Recall} + \text{Precision}} \)</td> </tr>
<tr> <td>\( pd = \text{Recall} = \frac{TP}{TP + FN} \times 100\% \)</td> </tr>
<tr> <td>\( pf = \frac{FP}{FP + TN} \times 100\% \)</td> </tr>
</tbody>
</table>
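These measures are straightforward to compute from the four confusion-matrix counts; the following small Python helper (our own illustration, not part of the paper’s tooling) shows them side by side.

```python
def confusion_measures(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Accuracy, recall (pd), precision, F-measure and pf from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn) if tp + fn else 0.0          # also called pd
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_measure = (2 * recall * precision / (recall + precision)) if recall + precision else 0.0
    pf = fp / (fp + tn) if fp + tn else 0.0              # false positive rate
    return {"accuracy": accuracy, "recall": recall, "precision": precision,
            "f_measure": f_measure, "pf": pf}

# Example: 30 defective modules found, 10 missed, 15 false alarms, 145 correct rejections.
print(confusion_measures(tp=30, fn=10, fp=15, tn=145))
```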
Traditionally, accuracy is the most commonly used measure for these purposes. For classification with a class imbalance problem, however, accuracy is no longer a proper measure, since the rare class has very little impact on it compared with the prevalent class [24]. The F-Measure is the harmonic mean of recall and precision: a high F-Measure value ensures that both recall and precision are reasonably high. For these reasons, we choose the F-Measure, computed from the confusion matrix, as our performance measure on the test data.

5. Comparisons and Analysis

In this section, we investigate the results of employing COCA for classification. The experimental environment was a Pentium (R) 3.2 GHz CPU with 1 GB DDR memory running the Windows XP operating system. In the experiments, we first split each data set into training and test data, 80% and 20% respectively. In order to avoid bias, we ran the experiment 100 times and report the average.

Naïve Bayes (NB) classifiers use statistical combinations of features to predict the class value. Such classifiers assume that all the features are statistically independent. Nevertheless, a repeated empirical result is that, on average, seemingly naïve Bayes classifiers perform as well as other, more sophisticated, schemes. Random Forest (RF) [25] is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees; it is one of the most accurate learning algorithms available and produces highly accurate classifiers for many data sets. A Radial Basis Function Network (RBFNet) is an artificial neural network that uses radial basis functions as activation functions; its output is a linear combination of radial basis functions. The parameters of each of the compared methods were initialized with the default settings of the WEKA toolkit. The results are presented in Figure 1.

Figure 1. Comparison with the other methods on the five data sets.

From Figure 1, it can be observed that COCA outperforms all three compared methods in terms of F-measure. The experimental results show that the competition strategy is effective for an organization-based evolutionary algorithm: the competitive strategy guides the evolution of the species, avoids blind population evolution, reduces the running time of the algorithm, and improves the prediction accuracy for software defects as measured by the F-Measure.

We also analyse the advantages of COCA. There are three. First, in the evolutionary process of the COCA algorithm, a competition strategy borrowed from biology is introduced to promote the evolution of the population. Second, the calculation of the fitness function includes both the individual’s performance and the outcome of its competitions. Third, three evolutionary operators are designed for the leagues. The first is the reduced operator, which is mainly used to reduce the dimensionality of the attributes.
The second is the allied operator, which is mainly used to merge similar leagues and reduce the number of leagues. The last is the disturbed operator, which further promotes the diversity of the population and avoids falling into local optima. Thanks to these features, the diversity of the population is increased, and the prediction performance of the proposed algorithm on software defect data is better than that of the other methods.

6. Conclusions

In order to improve the accuracy of software defect prediction, a coevolutionary algorithm based on competitive organization is put forward for software defect prediction. In this algorithm, a competition mechanism is first introduced into the organization coevolutionary algorithm. Then, three evolution operators — the reduced, allied and disturbed operators — are developed for the evolution of the population, and competition is taken into account when calculating the fitness function. Applied to software defect prediction, the algorithm improves prediction accuracy by increasing the diversity of the population. Finally, experiments on the five datasets from NASA are used to validate the method, and the experimental results show that the proposed method is effective. It should be noted that this study has examined only five datasets, which are not large. Hence, future work will extend the proposed method to improve its efficiency on large software defect data sets.

7. Acknowledgement

The authors would like to thank the various members of the authors’ laboratory, Xi’an Research Inst. of Hi-Tech, for their helpful comments.

8. References
{"Source-Url": "http://www.aicit.org/JCIT/ppl/JCIT%20VOL7NO5_part39.pdf", "len_cl100k_base": 5476, "olmocr-version": "0.1.50", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 22207, "total-output-tokens": 7298, "length": "2e12", "weborganizer": {"__label__adult": 0.0003418922424316406, "__label__art_design": 0.0002644062042236328, "__label__crime_law": 0.00039076805114746094, "__label__education_jobs": 0.0005888938903808594, "__label__entertainment": 5.6803226470947266e-05, "__label__fashion_beauty": 0.00015854835510253906, "__label__finance_business": 0.0002267360687255859, "__label__food_dining": 0.00033473968505859375, "__label__games": 0.0005688667297363281, "__label__hardware": 0.0008692741394042969, "__label__health": 0.0005497932434082031, "__label__history": 0.00017380714416503906, "__label__home_hobbies": 8.469820022583008e-05, "__label__industrial": 0.0003843307495117187, "__label__literature": 0.00022542476654052737, "__label__politics": 0.00022792816162109375, "__label__religion": 0.00033974647521972656, "__label__science_tech": 0.0238800048828125, "__label__social_life": 8.422136306762695e-05, "__label__software": 0.006038665771484375, "__label__software_dev": 0.96337890625, "__label__sports_fitness": 0.00027489662170410156, "__label__transportation": 0.0003528594970703125, "__label__travel": 0.00016188621520996094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26898, 0.03071]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26898, 0.44898]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26898, 0.87522]], "google_gemma-3-12b-it_contains_pii": [[0, 3867, false], [3867, 7868, null], [7868, 11280, null], [11280, 14258, null], [14258, 17138, null], [17138, 19993, null], [19993, 23921, null], [23921, 26898, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3867, true], [3867, 7868, null], [7868, 11280, null], [11280, 14258, null], [14258, 17138, null], [17138, 19993, null], [19993, 23921, null], [23921, 26898, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26898, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26898, null]], "pdf_page_numbers": [[0, 3867, 1], [3867, 7868, 2], [7868, 11280, 3], [11280, 14258, 4], [14258, 17138, 5], [17138, 19993, 6], [19993, 23921, 7], [23921, 26898, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26898, 0.17877]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
694c4b6b51814fd77a9044bb64113d7cd211602e
Regular and context-free languages

Consider the following language:

\[ L = \text{all strings from } \{0, 1\}^* \text{ that have two or three occurrences of } 1, \text{ the first and second of which are not consecutive} \]

Is there a more ‘concise’ way of representing this language?

- strings in \( L \) can start with any number (possibly none) of 0’s: \( \{0\}^* \)
- then comes the first 1, which must be followed by at least one 0: \( \{0\}^*10 \)
- then any number (possibly none) of 0’s: \( \{0\}^*10\{0\}^* \)
- then comes the second 1: \( \{0\}^*10\{0\}^*1 \)
- then either there are no more 1’s but there can be any number (possibly none) of 0’s, or there is a third 1 followed by any number (possibly none) of 0’s:
\[ \{0\}^*10\{0\}^*1(\{0\}^* \text{ or } \{0\}^*1\{0\}^*) \]

We will dispense with the braces \{\} and write \( \cup \) instead of ‘or’:

\[ 0^*100^*1(0^* \cup 0^*10^*) \]

From language to regular expression: Example 2

\[ L = \text{all the strings consisting of some number (possibly none) of } a\text{’s, followed by some number (possibly none) of } b\text{’s} \]
\[ = \{ \varepsilon, a, b, aa, bb, ab, aaa, aab, bbb, abb, \ldots, aaaaabbbbbbb, \ldots \} \]

Let’s try: \( \{a\}^*\{b\}^* \)

\( \Rightarrow L \) is represented by the regular expression \( a^*b^* \)

Notation: \( L = L[a^*b^*] \)

Describe the words in the alphabet \( \{a, b\} \) that are not in \( L[a^*b^*] \): all words containing the sub-word \( ba \): \( (a \cup b)^*ba(a \cup b)^* \)

From regular expression to language

Example 3:

\[ L[a(a^* \cup b^*)] = \text{all words starting with } a, \text{ followed by either a (possibly empty) word of } a\text{'s, or a (possibly empty) word of } b\text{'s} \]
\[ = \{a, aa, ab, aaa, abb, aaaaa, abbb, aaaaaa, abbbbbb, \ldots \} \]

Example 4:

\[ L[a(a \cup b)^*] = \text{all words starting with } a, \text{ followed by any word over } \{a, b\} \]
\[ = \text{all words over the alphabet } \{a, b\} \text{ starting with } a \]
\[ = \{a, aa, ab, aab, aba, abb, aaaa, abbaabab, aabababa, \ldots \} \]

These two examples are NOT the same!

From regular expression to language: Example 5

\( L[(b \cup aaa^*)^*] = \ ? \)

- Let’s start with \( L[aaa^*] \): all words of \( a \)'s of length \( \geq 2 \)
  \[ = \{aa, aaa, aaaa, aaaaa, \ldots\} \]
- \( L[b \cup aaa^*] \): as above, plus the word \( b \)
  \[ = \{b, aa, aaa, aaaa, aaaaa, \ldots\} \]
- \( L[(b \cup aaa^*)^*] \): we can take any number (possibly none) of words from \( L[b \cup aaa^*] \) and concatenate them
- Is there a word over \( \{a, b\} \) that is not in \( L[(b \cup aaa^*)^*] \)?
- Do the words \( \varepsilon \), \( aabbaaabab \), and \( bbbabbaaabaa \) belong to the language \( L[(b \cup aaa^*)^*] \)?

Regular expressions: formal definition

Every regular expression (over an alphabet \( \Sigma \)) is a string consisting of symbols from \( \Sigma \), plus the symbols \( \cup \), \( \ast \), \( ( \), \( ) \), \( \varepsilon \), \( \emptyset \).

Inductive definition of the set of regular expressions (over the alphabet \( \Sigma \)):

- \( \emptyset \), \( \varepsilon \) and each symbol in \( \Sigma \) are regular expressions
- if \( \alpha \) and \( \beta \) are regular expressions, then so are \( (\alpha \cup \beta) \), \( (\alpha \beta) \), and \( (\alpha \ast) \)
- no other string is a regular expression

Examples: \( \emptyset \), \( \varepsilon \), \( ((a\ast)(b\ast)) \), \( (((a((a \cup b)^\ast))b)(b\ast)) \), \( (((x(y\ast))y)(y \cup z)) \), \( ((\Box \cup (\Diamond\ast))(\Diamond\Box\ast)) \)

Is \( \cup^\ast a^\ast \) a regular expression?
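The inductive definition above translates directly into an algebraic data type. The following Python sketch (our own illustration; the class names are not part of the slides) represents regular expressions as trees and prints them back in the fully bracketed form used here.

```python
from dataclasses import dataclass

class Regex:
    """Base class for the abstract syntax of regular expressions."""

class Empty(Regex): pass      # the regular expression ∅
class Epsilon(Regex): pass    # the regular expression ε

@dataclass
class Symbol(Regex):
    char: str                 # a single symbol from Σ

@dataclass
class Union(Regex):
    left: Regex
    right: Regex              # (α ∪ β)

@dataclass
class Concat(Regex):
    left: Regex
    right: Regex              # (α β)

@dataclass
class Star(Regex):
    inner: Regex              # (α*)

def show(r: Regex) -> str:
    """Fully bracketed textual form, mirroring the inductive definition."""
    if isinstance(r, Empty):
        return "∅"
    if isinstance(r, Epsilon):
        return "ε"
    if isinstance(r, Symbol):
        return r.char
    if isinstance(r, Union):
        return f"({show(r.left)}∪{show(r.right)})"
    if isinstance(r, Concat):
        return f"({show(r.left)}{show(r.right)})"
    if isinstance(r, Star):
        return f"({show(r.inner)}*)"
    raise TypeError(r)

print(show(Concat(Star(Symbol("a")), Star(Symbol("b")))))   # ((a*)(b*)) from Example 2
```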
Regular expressions: notational conventions

These brackets are pretty incomprehensible, so we introduce some conventions:

- We omit the outermost brackets
- We omit the brackets when concatenating expressions
  E.g., we write \(ababb\) instead of \((((ab)(ab))b)\)
- \(*\) ‘binds tighter’ than concatenation and \(\cup\)
  E.g., we write \(aab^*\) instead of \((aa)(b^*)\)
- concatenation binds tighter than \(\cup\)

Examples: (cf. the previous slide)
\[ a^*b^*, \ a(a \cup b)^*bb^*, \ xy^*y(y \cup z), \ (\Box \cup \Diamond^*)(\Diamond \Box)^* \]

But take care: say, \((aa)^*b\) and \(aa^*b\) are NOT the same! Always use brackets if in any doubt!

Example 6

\[ L[(b \cup a)aa^*] = ? \]

- \( L[b \cup a] = \{a, b\} \)
- \( L[(b \cup a)a] = \{aa, ba\} \)
- \( L[(b \cup a)aa^*] = \) all words starting with \(aa\) followed by a (possibly empty) word of \(a\)'s, and all words starting with \(ba\) followed by a (possibly empty) word of \(a\)'s

Compare this language with \( L[b \cup aaa^*] \) (see Example 5). Brackets DO MATTER: the two languages are NOT the same!

Regular expressions specify languages

Every regular expression over \(\Sigma\) represents a language over \(\Sigma\). These are two different things:

- a regular expression \(\alpha\): a string consisting of symbols from \(\Sigma\) and \(\cup\), \(\ast\), \((, )\), \(\varepsilon\), \(\emptyset\)
- the language \(\alpha\) represents, \(L[\alpha]\): a set of words over \(\Sigma\)

For example, for \(a(a \cup b)^*\): \(L[a(a \cup b)^*] = \) all words over \(\{a, b\}\) that start with \(a\)

How regular expressions specify languages

Inductive definition of the language \(L[\alpha]\) represented by the regular expression \(\alpha\):

- \(L[\emptyset] = \emptyset\), \(L[\varepsilon] = \{\varepsilon\}\), and \(L[a] = \{a\}\) for any symbol \(a\) in \(\Sigma\).
- If \(\alpha\) and \(\beta\) are regular expressions, then
  - \(L[\alpha \cup \beta] = \) all words that belong either to \(L[\alpha]\) or to \(L[\beta]\),
  - \(L[\alpha \beta] = \) any word from \(L[\alpha]\) followed by any word from \(L[\beta]\),
  - \(L[\alpha^*] = \) any word from \(L[\alpha]\) followed by any word from \(L[\alpha]\) \(\ldots\); the number of iterations is arbitrary (possibly none)

Example 7: identifiers in programming languages

Any such identifier begins with a letter, which may be followed by a string consisting of letters and numeric digits. A regular expression representing the set of all identifiers is:

\[ ([a-z] \cup [A-Z]) ([a-z] \cup [A-Z] \cup [0-9])^* \]

where

- \([a-z]\) is a shorthand for \((a \cup b \cup c \cup \ldots \cup z)\)
- \([A-Z]\) is a shorthand for \((A \cup B \cup C \cup \ldots \cup Z)\)
- \([0-9]\) is a shorthand for \((0 \cup 1 \cup 2 \cup \ldots \cup 9)\)

**Lexical analyser:** a part of every compiler which finds all identifiers in the text of the source program. It uses a list of regular expressions to do so.
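Example 7’s expression corresponds almost literally to the character-class syntax of practical regex engines. A quick check with Python’s re module (our own illustration, not part of the slides):

```python
import re

# ([a-z] ∪ [A-Z]) ([a-z] ∪ [A-Z] ∪ [0-9])*  — identifiers as in Example 7
identifier = re.compile(r"[a-zA-Z][a-zA-Z0-9]*")

for word in ["x", "counter3", "Foo42", "3abc", ""]:
    print(repr(word), bool(identifier.fullmatch(word)))
# 'x' True, 'counter3' True, 'Foo42' True, '3abc' False, '' False
```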
Example 8: Search engines on the WWW

Notice the similarity between regular expressions and the form of the queries used by search engines on the WWW. Search queries often include ‘wild cards’. Some search engines support two wildcards: the asterisk (*) is used to replace multiple characters, and the percent (%) symbol is used to replace only one character. These are easy to simulate by regular expressions: for example, an arbitrary lowercase letter is captured by the regular expression \(a \cup b \cup \cdots \cup z\). So to find, say, all Web pages that contain an occurrence of ‘turtle’ following (not necessarily immediately) an occurrence of ‘purple’, one constructs a finite automaton to recognise the language
\[ purple \ (a \cup b \cup \cdots \cup z)^* \ turtle \]
Every page matching the query will be accepted by this automaton, and any page that does not match the query will not be accepted.

Example 9

\(L[\varepsilon \cup c^*(a \cup bc^*)] = ?\)

all the strings that are either empty or start with some (possibly none) \(c\)’s, followed by either an \(a\), or a \(b\) followed by some (possibly none) \(c\)’s
\[ = \{\varepsilon, a, b, bc, bcc, ca, cb, cbcc, \ldots \} \]
NOT in \(L[\varepsilon \cup c^*(a \cup bc^*)]\): \(ab\), \(aa\), \(caac\), \(\ldots\)

A **regular language** is any language that is described by a regular expression. In other words, a language \(L\) is **regular** if \(L = L[\alpha]\) for some regular expression \(\alpha\). For instance, all the languages in Examples 1–9 above are regular. **NOT every language is regular!**

Regular languages and finite automata

(1) There is a general ‘mechanical’ procedure \(\mathcal{A}\) that converts any regular expression \(\alpha\) to an NFA \(A\) such that \(L(A) = L[\alpha]\).

Moreover, there is a general way back:

(2) There is a general ‘mechanical’ procedure \(\mathcal{B}\) for converting an automaton \(A\) into a regular expression \(\alpha\) such that \(L(A) = L[\alpha]\).

In other words: regular languages are precisely those languages that are accepted by finite automata.

The procedure which converts a regular expression \(\alpha\) to an NFA \(A\) such that \(L(A) = L[\alpha]\) operates along the inductive definition of \(\alpha\):

- \(\alpha = \emptyset\). Then \(L[\alpha] = \emptyset\).
- \(\alpha = \varepsilon\). Then \(L[\alpha] = \{\varepsilon\}\).
- \(\alpha = a\). Then \(L[\alpha] = \{a\}\).

Automaton accepting \(L[\alpha \cup \beta]\): given \(A_1\) accepting \(L[\alpha]\) and \(A_2\) accepting \(L[\beta]\), build an automaton \(A\) accepting \(L[\alpha \cup \beta]\).

Automaton accepting \(L[\alpha\beta]\): given \(A_1\) accepting \(L[\alpha]\) and \(A_2\) accepting \(L[\beta]\), build an automaton \(A\) accepting \(L[\alpha\beta]\).

Automaton accepting \(L[\alpha^*]\): given \(A_1\) accepting \(L[\alpha]\), build an automaton \(A\) accepting \(L[\alpha^*]\).

Example: how the procedure works

We apply the procedure to the regular expression \(((a \cup ab)^*ba)^*\) and construct step by step (going ‘inside out’) an NFA \(A\) such that \(L(A) = L[((a \cup ab)^*ba)^*]\).

**Step 0**: automata accepting \(L[a]\) and \(L[b]\)
**Step 1**: automata accepting \(L[ab]\) and \(L[ba]\)
**Step 2**: automaton accepting \(L[a \cup ab]\)
**Step 3**: automaton accepting \(L[(a \cup ab)^*]\)
**Step 4**: automaton accepting \(L[(a \cup ab)^*ba]\)
Final step of the procedure: automaton accepting \(L[((a \cup ab)^*ba)^*]\)

Another example: automaton accepting \(L[\varepsilon (b \cup ab)^* bb^*]\)

Finite automata: summary

- Finite automata may be regarded as programs that use fixed amounts of memory (represented by states) regardless of the input.
- Finite automata can be used as recognition devices: they accept certain inputs and reject others.
- Nondeterminism does not increase the computational power of finite automata, but nondeterministic automata are easier to design than deterministic ones.
- The languages accepted by finite automata are precisely the regular ones.
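The three constructions used by the procedure (union, concatenation, star) can be written down concretely. The following Python sketch is our own illustration of a Thompson-style construction, not code from the course: it builds small NFA fragments with ε-moves and simulates them on words.

```python
import itertools

_fresh = itertools.count()

def _new_state() -> int:
    return next(_fresh)

class NFA:
    """Thompson-style fragment: one start state, one accepting state,
    transitions labelled by a symbol or by None (an epsilon-move)."""
    def __init__(self, start, accept, edges):
        self.start, self.accept, self.edges = start, accept, edges  # edges: (src, label, dst)

def symbol(a: str) -> NFA:
    s, f = _new_state(), _new_state()
    return NFA(s, f, [(s, a, f)])

def union(n1: NFA, n2: NFA) -> NFA:
    # New start/accept states joined to both fragments by epsilon-moves.
    s, f = _new_state(), _new_state()
    edges = n1.edges + n2.edges + [(s, None, n1.start), (s, None, n2.start),
                                   (n1.accept, None, f), (n2.accept, None, f)]
    return NFA(s, f, edges)

def concat(n1: NFA, n2: NFA) -> NFA:
    # Accepting state of the first fragment feeds the start of the second.
    return NFA(n1.start, n2.accept, n1.edges + n2.edges + [(n1.accept, None, n2.start)])

def star(n: NFA) -> NFA:
    s, f = _new_state(), _new_state()
    edges = n.edges + [(s, None, n.start), (n.accept, None, f),
                       (n.accept, None, n.start), (s, None, f)]
    return NFA(s, f, edges)

def accepts(n: NFA, word: str) -> bool:
    """Simulate the NFA by keeping the epsilon-closure of the current state set."""
    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            q = stack.pop()
            for (src, lab, dst) in n.edges:
                if src == q and lab is None and dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return seen
    current = closure({n.start})
    for ch in word:
        current = closure({dst for (src, lab, dst) in n.edges if src in current and lab == ch})
    return n.accept in current

# ((a ∪ ab)* ba)*  — the expression used in the step-by-step example above
expr = star(concat(star(union(symbol("a"), concat(symbol("a"), symbol("b")))),
                   concat(symbol("b"), symbol("a"))))
print(accepts(expr, ""), accepts(expr, "aba"), accepts(expr, "ab"))  # True True False
```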
\[ \text{there should be languages which are not regular} \] **Example:** Is the following language over the alphabet \{a, b\} regular? \[ L = \text{all strings starting with a string of } a \text{'s followed by an equal-length string of } b \text{'s} \] \[ = \{a^n b^n \mid n = 0, 1, 2, \ldots \} \\ \] (a^n b^n is not a regular expression) It does not seem so: - to recognise strings in \( L \), we may have to store the entire prefix \( a^n \) before the first \( b \) shows up - the length of such prefix depends on \( n \) \( \sim \) constant memory is not enough ? Is it really the case? (the argument above is ‘handwaving’ there may be other ways to compare the numbers of \( a \) and \( b \)) ? If it is the case, ‘better’ models of computation are needed DFA computations on ‘long’ inputs - Given some regular language \( L \) with infinitely many words, take a DFA \( A \) that accepts \( L \); let \( n \) be the number of states in \( A \). - Consider the computation of \( A \) on input \( w \in L \) containing \( k > n \) symbols: \[ (q_1, w), (q_2, w^2), \ldots, (q_{k-1}, w^{k-1}), (q_k, \varepsilon) \] As \( n < k \), the states \( q_1, q_2, \ldots, q_{k-1}, q_k \) cannot be all distinct so \( A \) must contain a loop. \[ \text{pigeonhole principle} \] (here: 10 pigeons in 9 pigeonholes) Word \( w = xyz \) can be represented as \[ w = xyz \] but then \( xy^m z \) is also accepted by \( A \), for any \( m \geq 0 \) (pumping the middle string \( y \)). Example Find $x$, $y$ and $z$ (as on the previous page) for the following input words $w$: - $w = ababaa$ (for example, $x = \varepsilon$, $y = ab$, $z = abaa$ - $w = aabbb$ (for example, $x = aa$, $y = b$, $z = bb$) Pumping Lemma Let $L$ be an infinite regular language over an alphabet $\Sigma$. Then there is a number $n > 0$ such that, for any $w \in L$ of length $\geq n$, there exist strings $x, y, z \in \Sigma^*$ such that - $|xy| \leq n$ - $y \neq \varepsilon$ - $xy^m z \in L$, for any $m \geq 0$ Pumping Lemma is used to show that a given language $L$ (such as $L = \{a^k b^k \mid k = 0, 1, 2, \ldots \}$) is not regular. If $L$ does not satisfy the ‘pumping property,’ then $L$ cannot be regular. \[ L = \{a^n b^n \mid n \geq 0\} \text{ is not regular} \] Suppose to the contrary that \( L \) is regular. Take the number \( n > 0 \) provided by Pumping Lemma and consider \( w = a^n b^n \), which belongs to \( L \). By Pumping Lemma, we can represent \( w \) as \( w = xyz \) with \( |xy| \leq n \); but then \( x \) and \( y \) may only be strings of \( a \)'s. By Pumping Lemma, the word \( xy^{n+1}z \) is also in \( L \); however, \( xy^{n+1}z \) contains more \( a \)'s than \( b \)'s, and so cannot be in \( L \). This contradiction shows: our assumption that \( L \) is regular is not correct. Thus, the language \( L \) is not regular Q.E.D. (\textit{quod erat demonstrandum}) Exercise: Is the language \( L = \{a^{2^n} \mid n \geq 0\} \) regular?
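The slides stop at this exercise and contain no code; as an added illustration (not part of the original slides, written in Python), the sketch below makes the pumping construction concrete: it runs a DFA on an accepted word, uses the pigeonhole argument above to find a repeated state, splits the word as w = xyz, and checks that pumping y keeps the word in the language. The particular DFA (for the language of at least one a followed by at least one b) and its state names are chosen here only for illustration.

```python
# Illustrative sketch: find the x, y, z guaranteed by the Pumping Lemma for a
# concrete DFA and verify that x y^m z stays in the language for several m.

def run(delta, start, word):
    """Return the list of states visited while reading `word`."""
    states = [start]
    for symbol in word:
        states.append(delta[(states[-1], symbol)])
    return states

def pump_split(delta, start, word):
    """Split an accepted word into x, y, z with |xy| <= number of states, y nonempty."""
    seen = {}
    states = run(delta, start, word)
    for i, q in enumerate(states):
        if q in seen:                       # a state repeats: the loop reads y
            j = seen[q]
            return word[:j], word[j:i], word[i:]
        seen[q] = i
    raise ValueError("no state repeated; the word is too short")

# DFA for L[a a* b b*] (at least one a, then at least one b); 'd' is a dead state.
delta = {('s', 'a'): 'p', ('s', 'b'): 'd', ('p', 'a'): 'p', ('p', 'b'): 'q',
         ('q', 'a'): 'd', ('q', 'b'): 'q', ('d', 'a'): 'd', ('d', 'b'): 'd'}
accepting = {'q'}

x, y, z = pump_split(delta, 's', 'aaabb')
print(x, y, z)                              # e.g. x = 'a', y = 'a', z = 'abb'
for m in range(4):                          # pumping y keeps the word in the language
    assert run(delta, 's', x + y * m + z)[-1] in accepting
```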
{"Source-Url": "http://www.dcs.bbk.ac.uk/~michael/foc/slides/FoC-6.pdf", "len_cl100k_base": 4348, "olmocr-version": "0.1.53", "pdf-total-pages": 31, "total-fallback-pages": 0, "total-input-tokens": 54951, "total-output-tokens": 5756, "length": "2e12", "weborganizer": {"__label__adult": 0.0005178451538085938, "__label__art_design": 0.0007915496826171875, "__label__crime_law": 0.0004718303680419922, "__label__education_jobs": 0.0023326873779296875, "__label__entertainment": 0.00036215782165527344, "__label__fashion_beauty": 0.0002624988555908203, "__label__finance_business": 0.0002799034118652344, "__label__food_dining": 0.0007991790771484375, "__label__games": 0.0011816024780273438, "__label__hardware": 0.0012922286987304688, "__label__health": 0.0006251335144042969, "__label__history": 0.00045609474182128906, "__label__home_hobbies": 0.0001920461654663086, "__label__industrial": 0.0007600784301757812, "__label__literature": 0.0038909912109375, "__label__politics": 0.0004017353057861328, "__label__religion": 0.0010652542114257812, "__label__science_tech": 0.1715087890625, "__label__social_life": 0.00024056434631347656, "__label__software": 0.01486968994140625, "__label__software_dev": 0.79638671875, "__label__sports_fitness": 0.0003960132598876953, "__label__transportation": 0.0008325576782226562, "__label__travel": 0.00023245811462402344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 13497, 0.01041]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 13497, 0.96906]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 13497, 0.77637]], "google_gemma-3-12b-it_contains_pii": [[0, 35, false], [35, 903, null], [903, 1530, null], [1530, 2123, null], [2123, 2711, null], [2711, 3474, null], [3474, 4104, null], [4104, 4566, null], [4566, 5037, null], [5037, 5724, null], [5724, 6399, null], [6399, 7304, null], [7304, 7650, null], [7650, 7941, null], [7941, 8433, null], [8433, 8819, null], [8819, 8954, null], [8954, 9125, null], [9125, 9215, null], [9215, 9645, null], [9645, 9688, null], [9688, 9735, null], [9735, 9784, null], [9784, 9812, null], [9812, 9886, null], [9886, 10374, null], [10374, 11284, null], [11284, 12004, null], [12004, 12234, null], [12234, 12730, null], [12730, 13497, null]], "google_gemma-3-12b-it_is_public_document": [[0, 35, true], [35, 903, null], [903, 1530, null], [1530, 2123, null], [2123, 2711, null], [2711, 3474, null], [3474, 4104, null], [4104, 4566, null], [4566, 5037, null], [5037, 5724, null], [5724, 6399, null], [6399, 7304, null], [7304, 7650, null], [7650, 7941, null], [7941, 8433, null], [8433, 8819, null], [8819, 8954, null], [8954, 9125, null], [9125, 9215, null], [9215, 9645, null], [9645, 9688, null], [9688, 9735, null], [9735, 9784, null], [9784, 9812, null], [9812, 9886, null], [9886, 10374, null], [10374, 11284, null], [11284, 12004, null], [12004, 12234, null], [12234, 12730, null], [12730, 13497, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 
13497, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 13497, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 13497, null]], "pdf_page_numbers": [[0, 35, 1], [35, 903, 2], [903, 1530, 3], [1530, 2123, 4], [2123, 2711, 5], [2711, 3474, 6], [3474, 4104, 7], [4104, 4566, 8], [4566, 5037, 9], [5037, 5724, 10], [5724, 6399, 11], [6399, 7304, 12], [7304, 7650, 13], [7650, 7941, 14], [7941, 8433, 15], [8433, 8819, 16], [8819, 8954, 17], [8954, 9125, 18], [9125, 9215, 19], [9215, 9645, 20], [9645, 9688, 21], [9688, 9735, 22], [9735, 9784, 23], [9784, 9812, 24], [9812, 9886, 25], [9886, 10374, 26], [10374, 11284, 27], [11284, 12004, 28], [12004, 12234, 29], [12234, 12730, 30], [12730, 13497, 31]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 13497, 0.0]]}
Problem 2. Let $K$ be a 56-bit DES key, let $L$ be a 64-bit string, and let $M$ be a 64-bit plaintext. Let $$\text{DESY}(K \parallel L, M) = \text{DES}(K, L \oplus M)$$ $$\text{DESW}(K \parallel L, M) = L \oplus \text{DES}(K, M).$$ This defines block ciphers $\text{DESY}, \text{DESW} : \{0,1\}^{120} \times \{0,1\}^{64} \rightarrow \{0,1\}^{64}$. Present the best possible key-recovery attacks that you can on these block ciphers. Your attacks should use very few input-output examples, not more than three. State the running time of your attacks. Note that $C = \text{DESY}(K \parallel L, M)$ iff $\text{DES}^{-1}(K, C) \oplus M = L$. This leads to the following key-recovery attack: Adversary $A_{\text{DESY}}((M_1, C_1), (M_2, C_2), (M_3, C_3))$ for each key $T \in \{0,1\}^{56}$ do $L_1 \leftarrow \text{DES}^{-1}(T, C_1) \oplus M_1$ ; $L_2 \leftarrow \text{DES}^{-1}(T, C_2) \oplus M_2$ ; $L_3 \leftarrow \text{DES}^{-1}(T, C_3) \oplus M_3$ if $L_1 = L_2 = L_3$ then return $T \parallel L_1$ The time taken by this attack is that of about $3 \cdot 2^{56}$ $\text{DES}^{-1}$ computations. Note that $C = \text{DESW}(K \parallel L, M)$ iff $C \oplus \text{DES}(K, M) = L$. This leads to the following key-recovery attack: Adversary $A_{\text{DESW}}((M_1, C_1), (M_2, C_2), (M_3, C_3))$ for each key $T \in \{0,1\}^{56}$ do $L_1 \leftarrow \text{DES}(T, M_1) \oplus C_1$ ; $L_2 \leftarrow \text{DES}(T, M_2) \oplus C_2$ ; $L_3 \leftarrow \text{DES}(T, M_3) \oplus C_3$ if $L_1 = L_2 = L_3$ then return $T \parallel L_1$ The time taken by this attack is that of about $3 \cdot 2^{56}$ $\text{DES}$ computations. As usual, we are only guaranteed the attacks find a key consistent with the input-output examples rather than finding the target key itself, but empirically we estimate that with three input-output examples the target key will be the only one consistent with the input-output examples and hence will be the one found by the attack. The same attacks using only two input-output examples will also typically find the target key, although perhaps with less frequency than the version using three input-output examples. But if you use only one input-output example, you will almost never find the target key. In that case, for every $T$ one computes an $L$ so that $T \parallel L$ is consistent with the single input-output example, so the attack terminates in one try, but with the wrong key most of the time. **Problem 3.** Define the family of functions $F$: $\{0, 1\}^{128} \times \{0, 1\}^{128} \rightarrow \{0, 1\}^{128}$ by $F(K, M) = \text{AES}(M, K)$. Assuming AES is a secure PRF, is $F$ a secure PRF? If so, explain why. If not, present the best attack (with analysis) that you can. $F$ is not a secure PRF. The easiest way to see this is to note that it is not even secure against key-recovery: given one input-output example $(M, C)$ of $F_K$, we can recover $K$ via $K \leftarrow \text{AES}_M^{-1}(C)$. However, this is not enough. The question was whether it is a secure PRF, not whether one can recover the key. To bridge this gap, we can use Proposition 3.14. To this end, first, following Definition 3.12, we formalize the above attack to present the following key-recovery adversary: **adversary $B$** Let $M$ be any 128 bit string $C \leftarrow F_n(M); K \leftarrow \text{AES}_M^{-1}(C)$ Return $K$ Now, looking at Definition 3.12, we see that $\text{Adv}_{kr}^{F}(B) = 1$. Now we can apply Proposition 3.14 to conclude that $F$ is not a secure PRF. 
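Neither the problem nor the solution above includes runnable code; purely as an illustration (assuming the PyCryptodome package for AES, which is not part of the original assignment), adversary $B$'s single-query key recovery looks like this:

```python
# Illustrative only: a concrete version of adversary B's key-recovery step for
# F(K, M) = AES(M, K), assuming PyCryptodome (`pip install pycryptodome`).
import os
from Crypto.Cipher import AES

K = os.urandom(16)                                # the target key (unknown to B)
M = os.urandom(16)                                # B's single chosen query
C = AES.new(M, AES.MODE_ECB).encrypt(K)           # the oracle value F_K(M) = AES(M, K)

recovered = AES.new(M, AES.MODE_ECB).decrypt(C)   # B computes AES_M^{-1}(C)
assert recovered == K                             # the key is recovered from one query
```

This mirrors the argument above: a single input-output example of $F_K$ determines $K$, so $\text{Adv}_{kr}^{F}(B) = 1$.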
An alternative solution is to demonstrate the insecurity of $F$ as PRF directly, by considering the following adversary $A$ that is given an oracle $F_n$: $\{0, 1\}^{128} \rightarrow \{0, 1\}^{128}$. **adversary $A$** Let $M, N$ be any two distinct 128 bit strings $C \leftarrow F_n(M); L \leftarrow \text{AES}_M^{-1}(C)$ $D \leftarrow F_n(N)$ if $(\text{AES}(N, L) = D)$ then return 1 else return 0 We claim that $$\text{Pr}[\text{Real}^A_F \Rightarrow 1] = 1 \quad \text{and} \quad \text{Pr}[\text{Rand}^A_{\{0, 1\}^{128}} \Rightarrow 1] = 2^{-128}.$$ Why? If $F_n = F_K$ is an instance of $F$ then $C = F(K, M) = \text{AES}(M, K)$, and thus $L = \text{AES}_M^{-1}(C) = K$. Then $D = F(K, N) = \text{AES}(N, K)$, but this equals $\text{AES}(N, L)$, since $L = K$, so $A$ returns 1 with probability one, justifying the first equation above. If $F_n$ is a random function, then $D$ is distributed uniformly and independently of $N, L$, and thus the probability that $D = \text{AES}(N, L)$ is $2^{-128}$. Now, subtracting, as per Definition 3.6, we get $$\text{Adv}_{prf}^F(A) = \text{Pr}[\text{Real}^A_F \Rightarrow 1] - \text{Pr}[\text{Rand}^A_{\{0, 1\}^{128}} \Rightarrow 1] = 1 - 2^{-128}.$$ The prf-advantage of our adversary is essentially one. Our adversary is very practical, making just two oracle queries and with running time that of a couple of AES or AES$^{-1}$ computations. So we have a highly effective attack, showing that $F$ is very insecure as a PRF. Problem 4. Let $F : \{0, 1\}^k \times \{0, 1\}^l \to \{0, 1\}^L$ be a family of functions where $l, L \geq 128$. Consider the game $G$ of Fig. 1. We define $$\text{Adv}_{lr}^F(B) = 2 \cdot \Pr[\text{G} \Rightarrow \text{true}] - 1 .$$ Let $(x_0^1, x_1^1), \ldots, (x_0^q, x_1^q)$ be the queries that $B$ makes to its oracle. (Each query is a pair of $l$-bit strings, and there are $q$ queries in all.) We say that $B$ is legitimate if $x_0^0, \ldots, x_0^q$ are all distinct, and also $x_1^1, \ldots, x_1^q$ are all distinct. We say that $F$ is LR-secure if $\text{Adv}_{lr}^F(B)$ is “small” for every legitimate $B$ of “practical” resources. 1. Show that the legitimacy condition is necessary for LR-security to be “interesting” by showing that if $F$ is a block cipher then there is an efficient, illegitimate $B$ such that $\text{Adv}_{lr}^F(B) = 1$. Consider the following adversary: **adversary $B$** Let $x, y, z$ be any distinct $l$-bit strings $C_1 \leftarrow \text{LR}(x, y)$; $C_2 \leftarrow \text{LR}(z, y)$ If $C_1 = C_2$ then return 1 else return 0 Note $B$ is not legitimate because its queries are of the form $(x_0^1, x_1^1), (x_0^2, x_1^2)$ with $x_1^1 = x_1^2$. Now, if the challenge bit $b = 1$, then $C_1 = F_K(y)$ and $C_2 = F_K(y)$ so $C_1 = C_2$ and $B$ returns 1. (That is, its output $b'$ equals $b$.) On the other hand if $b = 0$ then $C_1 = F_K(x)$ and $C_2 = F_K(z)$. But since $F$ is a block cipher (this is where we use this assumption) the map $F_K$ is a permutation, and thus $C_1 \neq C_2$. So $B$ returns 0. (That is, its output $b$ is again equal to $b$.) So $\Pr[b = b'] = 1$, and thus $\text{Adv}_{lr}^B(B) = 2 \cdot \Pr[b = b'] - 1 = 1$. Adversary $B$ makes only two oracle queries and has time-complexity $O(l + L + T_F)$ where $T_F$ is the time for one evaluation of $F$. Thus, this is a very practical attack. 2. Let $B$ be a legitimate lr-adversary that makes $q$ oracle queries and has time-complexity $t$. 
Show that there exists a prf-adversary $A$, also making $q$ oracle queries and having time-complexity close to $t$, such that $$\text{Adv}_{lr}^F(B) \leq 2 \cdot \text{Adv}_{prf}^F(A) .$$ (1) State what is the time-complexity of \(A\). Explain why this reduction shows that if \(F\) is a secure PRF then it is LR-secure. This is very similar to several reductions done in class and the notes. Recall that adversary \(A\) gets an oracle for a function \(F_n: \{0,1\}^l \rightarrow \{0,1\}^L\). It works as follows: ``` adversary A b \overset{\$}{\leftarrow} \{0,1\} d \overset{\$}{\leftarrow} B_{\text{SIM}(\cdot,\cdot)} If d = b then return 1 else return 0 ``` Our adversary picks at random a bit \(b\) to represent the challenge bit in game G. It then defines a subroutine \(\text{SIM}\) via which it responds to oracle queries made by \(B\). Note that \(\text{SIM}\) uses \(b\) and also invokes \(A\)'s oracle \(F_n\). After learning \(B\)'s decision \(d\), \(A\) tests whether it is correct, meaning equals the challenge bit \(b\). If so, it declares that its oracle is an instance of \(F\), and otherwise it declares its oracle to be random. Let us now compute \(\text{Adv}^\text{prf}_F(A)\). First consider \(\text{Real}^A_F\), where oracle \(F_n\) is \(F_K\) for \(K \overset{\$}{\leftarrow} \{0,1\}^k\). In this case, the response of \(\text{SIM}(M_0,M_1)\) is \(F_K(M_b)\), which is exactly \(LR(M_0,M_1)\). This means that \[ \text{Pr}\left[\text{Real}^A_F \Rightarrow 1\right] = \text{Pr}[b = d] = \frac{1}{2} + \frac{1}{2} \cdot \text{Adv}^\text{lr}_F(B). \] Now consider \(\text{Rand}^A_{\{0,1\}^L}\), where \(F_n\) implements a random function. In this case, the response of \(\text{SIM}(M_0,M_1)\) is the random value returned by \(F_n(M_b)\). The legitimacy of \(B\) (this is where we use this assumption) now implies that the sequence of responses to \(B\)'s oracle queries is distributed identically whether \(b = 0\) or \(b = 1\), in both cases being a sequence of random and independent \(L\) bit strings. Thus \(B\) gets no information about \(b\) from the oracle. This means that \(\text{Pr}[b = d] = 1/2\). Thus \[ \text{Pr}\left[\text{Rand}^A_{\{0,1\}^L} \Rightarrow 1\right] = \text{Pr}[b = d] = \frac{1}{2}. \] Subtracting, we get \[ \text{Adv}^\text{prf}_F(A) = \text{Pr}\left[\text{Real}^A_F \Rightarrow 1\right] - \text{Pr}\left[\text{Rand}^A_{\{0,1\}^L} \Rightarrow 1\right] = \frac{1}{2} \cdot \text{Adv}^\text{lr}_F(B), \] which implies Equation (1). The time-complexity of \(A\) is only \(O(1)\) more than that of \(B\) given our conventions about measuring time-complexity. Why does this reduction show that if \(F\) is a secure PRF then it is LR-secure? For the usual reason with such reductions. To show that \(F\) is LR-secure we need to show that \(\text{Adv}^\text{lr}_F(B)\) is small for any practical \(B\). However, if \(B\) is practical, so is \(A\), and then the assumption that \(F\) is a PRF tells us that \(\text{Adv}^\text{prf}_F(A)\) is small. Then Equation (1) tells us that \(\text{Adv}^\text{lr}_F(B)\) is also small, as desired. 3. Is the converse true? Namely, if \(F\) is LR-secure, then is it a secure PRF? Answer YES or NO. If you say YES, justify this via a reduction, and, if NO, via a counter-example. (The latter means a particular family of functions \(F\) which you can prove is LR-secure but which you can show via an attack is not a PRF.) The answer is NO. A simple counter-example is a family of functions \(F\) in which \(F_K\) is a constant function for each $K \in \{0, 1\}^k$. 
To be specific, consider the family $F$ defined by $F(K, x) = 0^L$ for all $K \in \{0, 1\}^k$ and $x \in \{0, 1\}^l$. Here is an attack showing that $F$ is not a PRF: \[ \text{adversary } A \begin{align*} \text{If } \mathbf{F}_n(0^l) &= 0^L \text{ then return 1 else return 0} \end{align*} \] In game $\text{Real}_F$, the oracle $\mathbf{F}_n$ implements $F_K$ for some $K$ and so $\mathbf{F}_n(0^l) = F_K(0^l) = 0^L$. On the other hand, in game $\text{Rand}_{\{0, 1\}^L}$ the probability that $\mathbf{F}_n(0^l) = 0^L$ is $2^{-L}$. Thus \[ \Pr[\text{Real}_F^A \Rightarrow 1] = 1 \quad \text{and} \quad \Pr[\text{Rand}_{\{0, 1\}^L}^A \Rightarrow 1] = 2^{-L}. \] Subtracting, we get $\text{Adv}^\text{prf}_F(A) = 1 - 2^{-L}$. This is close to 1 (recall the problem assumes $L \geq 128$) and furthermore $A$ is very efficient, so we have shown that $F$ is not a PRF. On the other hand, we claim that $F$ is LR-secure. To see this consider any $B$ with an LR$(\cdot, \cdot)$ oracle. The response of the oracle to any query is $0^L$. In particular, this is true regardless of the value of $b$, meaning the oracle responses give $B$ no information about $b$. Thus $\Pr[b = d] = 1/2$, where $d$ is the output of $B$, and so $\text{Adv}^\text{lr}_F(B) = 0$. So we have shown LR-security in a very strong sense: the advantage of any adversary is 0, regardless of its time-complexity or the number of queries it makes. We clarify that $F$ above is a family of functions. It is not required to be a block cipher except in part 1. --- **Extra credit** The goal of a key-search attack (such as exhaustive key search) is to find the target key, but, as discussed in the notes and in class, such an attack might find a key that is consistent with the input-output examples but is not the target key. We glossed over this, saying it “usually” does not happen. This problem gives a sense of how cryptographers arrive at this type of conclusion and estimate what “usually” means. We use what is called the *ideal cipher model*. Let $k, n \geq 1$ be integers. Let $K = 2^k$ and $N = 2^n$ and let $T_1, \ldots, T_K$ be some enumeration of the elements of $\{0, 1\}^k$. We consider a thought experiment in which a block cipher is chosen at random. By this we mean that for each key $T_i$, we choose $E(T_i, \cdot)$ as a random permutation on $\{0, 1\}^n$. Fix a message $M^* \in \{0, 1\}^n$ known to the adversary, who, given a ciphertext $C^* = E(T^*, M^*)$ for a random, unknown $T^*$ attempts to find $T^*$. The adversary can access $E$ (only) as an oracle. We formalize this via the game EKS of Fig. 2. We will use games a lot so this is a good chance to start getting familiar with them. The game maintains a table $E$, representing the block cipher, and assumed to initially be $\perp$ (undefined) everywhere. It also associates to each key $T$ a set $\text{Range}[T]$ that is initially empty. The game is executed with an adversary $A$. As this execution continues, the tables get populated, and the block cipher gets slowly defined. First, the main procedure executes. It picks a random challenge key $T^*$, defines $E[T^*, M^*]$ to be a random $n$-bit string, and returns it to the adversary as the challenge ciphertext $C^*$. Now the adversary executes, and can make queries --- We clarify that $F$ above is a family of functions. It is not required to be a block cipher except in part 1. of the form $T, M$ to procedure $E$. A query $T, M$ creates the point $E[T, M]$. 
It is chosen at random, but, to ensure the permutation property of a block cipher, from the set $$\{0, 1\}^n \setminus \text{Range}[T] = \{0, 1\}^n \setminus \{E[T, M'] : E[T, M'] \neq \perp\}.$$ The test “If not $E[T, M]$” returns true iff $E[T, M]$ is undefined, meaning equal to $\perp$ rather than an $n$-bit string. The set $\text{Range}[T]$ contains all points $E[T, M]$ that are currently defined. When the adversary is done, it outputs its guess $T$ for the value of $T^*$. The game returns true if $T = T^*$ and false otherwise. The output of main is called the output of the game or execution, and we let $\Pr[EKS^A]$ denote the probability that this output is true. The probability is over the random choices in the game, as well as those of the adversary, if any. Here we are considering a very simple form of key search where there is only one input-output example. Now, using this model, we can try to calculate the probability that an attack returns the target key, as opposed to some non-target key consistent with the input-output examples. **Problem 5.** Let $k, n \geq 1$ be integers. Let $K = 2^k$ and $N = 2^n$. Fix $M^* \in \{0, 1\}^n$ and let $T_1, \ldots, T_K$ be some enumeration of the elements of $\{0, 1\}^k$. Consider the following adversary for game EKS: $$\text{adversary } A(C^*)$$ For $i = 1, \ldots, K$ do If $E(T_i, M^*) = C^*$ then $G \leftarrow T_i$; return $G$ This adversary calls the $E$ oracle up to $K$ times as shown. Let $\text{Adv}^{eks}(K, N) = \Pr[EKS^A]$. This is the probability that the key $G$ output by $A$ in its execution with EKS equals the target key $T^*$ chosen by main. 1. Prove that $$\text{Adv}^{eks}(K, N) = \frac{N}{K} \left[1 - \left(1 - \frac{1}{N}\right)^K\right]. \quad (2)$$ We justify the following chain of equalities below: \[ \text{Adv}^{\text{eks}}(K, N) \] \[ = \Pr[G = T^*] \] (3) \[ = \sum_{j=1}^{K} \Pr[G = T^* \land T^* = T_j] \] (4) \[ = \sum_{j=1}^{K} \Pr[G = T^* \mid T^* = T_j] \cdot \Pr[T^* = T_j] \] (5) \[ = \sum_{j=1}^{K} \Pr[G = T^* \mid T^* = T_j] \cdot \frac{1}{K} \] (6) \[ = \frac{1}{K} \cdot \sum_{j=1}^{K} \Pr[G = T^* \mid T^* = T_j] \] \[ = \frac{1}{K} \cdot \sum_{j=1}^{K} \Pr[E[T_1, M^*] \neq C^* \land E[T_2, M^*] \neq C^* \land \ldots \land E[T_{j-1}, M^*] \neq C^*] \] (7) \[ = \frac{1}{K} \cdot \sum_{j=1}^{K} \prod_{i=1}^{j-1} \Pr[E[T_i, M^*] \neq C^*] \] (8) \[ = \frac{1}{K} \cdot \sum_{j=1}^{K} \prod_{i=1}^{j-1} \left(1 - \frac{1}{N}\right) \] (9) \[ = \frac{1}{K} \cdot \sum_{j=1}^{K} \left(1 - \frac{1}{N}\right)^{j-1} \] \[ = \frac{1}{K} \cdot \sum_{j=0}^{K-1} \left(1 - \frac{1}{N}\right)^{j} \] \[ = \frac{1}{K} \cdot \frac{1 - \left(1 - \frac{1}{N}\right)^{K}}{1 - \left(1 - \frac{1}{N}\right)} \] (10) \[ = \frac{N}{K} \cdot \left[1 - \left(1 - \frac{1}{N}\right)^{K}\right]. \] We now justify the numbered steps. (The others are straightforward.) Equation (4) is true because the events \(T^* = T_j\) \((1 \leq j \leq K)\) are a partition of the underlying probability space, meaning exactly one (not less, not more) of them is always true. Equation (5) uses the definition of conditional probability, namely \(\Pr[A \mid B] = \Pr[A \land B] / \Pr[B]\). Since \(T^*\) is chosen at random, the probability that \(T^* = T_j\) is \(1/K\) for any \(j\), which justifies Equation (6). For Equation (7), fix some \(j\) and assume \(T^* = T_j\). Now what is the probability that the attack returns \( G = T_j \)? Look at the code. 
It must be the case that \( E[T_i, M^*] \neq C^* \) for all \( i = 1, \ldots, j - 1 \), since otherwise the code will return \( T_i \) rather than \( T_j \). (In other words, the \( j - 1 \) keys preceding \( T_j \) must not yield false positives.) Once this is true, the loop index \( i \) arrives at the value \( j \). At that point, the code will certainly halt and return \( T_j \), because the test \( E[T_j, M^*] = C^* \) is true. (Because \( T_j = T^* \).) Thus, \( \Pr[G = T^* | T^* = T_j] \) is exactly the probability that \( E[T_i, M^*] \neq C^* \) for all \( i = 1, \ldots, j - 1 \). (Note that whether or not \( E[T_i, M^*] = C^* \) for \( l > j \) is not relevant because the code halts and returns when \( i = j \).) Equation (8) is true because the events \( E[T_i, M^*] = C^* \) are independent for \( i = 1, \ldots, j - 1 \). This is true because of the random choices made in defining \( E \) and because \( T_1, \ldots, T_{j-1} \) are distinct. The random choices that define \( E \) ensure that \( \Pr[E[T_i, M^*] = C^*] = 1/N \) for all \( i = 1, \ldots, j - 1 \), which justifies Equation (9). Equation (10) uses the standard formula \( \sum_{i=0}^{K-1} a^i = (1-a^K)/(1-a) \) for the sum of a geometric series, with \( a = 1 - 1/N \). That completes the proof. 2. It is difficult to get a quantitative feel from Equation (2). We will now lower bound it via a simpler expression. To do so we first recall an inequality. Namely let \( x \) be a real number in the range \( 0 \leq x \leq 1 \). Let \( m, l \) be integers such that \( 0 \leq l \leq m \) and \( l \) is even. Then \[ (1-x)^m \leq \sum_{i=0}^{l} \binom{m}{i} (-x)^i. \] (11) Use this and the result of 1. above to show that \[\text{Adv}^{eks}(K, N) \geq 1 - \frac{K-1}{2N}.\] (12) We will use the given inequality with \( x = 1/N, m = K \) and \( l = 2 \). This gives us \[ \left(1 - \frac{1}{N}\right)^K \leq \binom{K}{0} \left(\frac{-1}{N}\right)^0 + \binom{K}{1} \left(\frac{-1}{N}\right)^1 + \binom{K}{2} \left(\frac{-1}{N}\right)^2 \] \[= 1 - \frac{K}{N} + \frac{K(K-1)}{2N^2}.\] Now from 1. above we have \[ \text{Adv}^{eks}(K, N) = \frac{N}{K} \left[ 1 - \left(1 - \frac{1}{N}\right)^K \right] \] \[\geq \frac{N}{K} \left[ 1 - \left(1 - \frac{K}{N} + \frac{K(K-1)}{2N^2}\right) \right] \] \[= \frac{N}{K} \left[ \frac{K}{N} - \frac{K(K-1)}{2N^2} \right] \] \[= 1 - \frac{K-1}{2N}.\] 3. Let $k, n$ be (respectively) the key-length and block-length parameters of DES. Use the result of 2. to numerically estimate $\text{Adv}^{\text{eks}}(K, N)$ in this case. Do the same when $k, n$ are the parameters of AES. The DES parameters are $k = 56$ and $n = 64$. In this case we get $$\text{Adv}^{\text{eks}}(2^{56}, 2^{64}) \geq 1 - \frac{2^{56} - 1}{2^{65}} \approx 1 - 2^{-9}.$$ The AES parameters are $k = 128$ and $n = 128$. In this case we get $$\text{Adv}^{\text{eks}}(2^{128}, 2^{128}) \geq 1 - \frac{2^{128} - 1}{2^{129}} \approx \frac{1}{2}.$$ Thus with DES parameters, the attack finds the target key except with the relatively small probability of $2^{-9}$, while with AES parameters the attack finds the target key less often, namely about half the time. 4. What do these results tell us about the success probability of an exhaustive key-search attack on DES? What about on AES? Is DES an ideal cipher? Is AES an ideal cipher? Discuss. Let’s begin with the last two questions. Is AES “an ideal cipher”? Is DES “an ideal cipher”? To answer these, we would first have to know the meaning or definition of the phrase in quotes. 
In particular, meaningfully answering such a question pre-supposes a definition of the form: “A block cipher $E$ is said to be an ideal cipher if $X$ is true” where $X$ is some condition on $E$. Then, we can ask if DES or AES meet condition $X$. The difficulty is that a definition as indicated above, and in particular the definition of $X$, were never given. We never gave any definition of what it meant for a block cipher $E$ to “be an ideal cipher.” That is, nowhere do you read a definition of the form: “Let $E$ be a block cipher. Then we say that $E$ is ideal if . . .” What we defined instead is an ideal cipher model, which involves a game in which a block cipher is chosen at random. To better understand the difference, suppose our questions had been: Is AES a PRF? Is DES a PRF? In this case, the questions are meaningful, because we did give definitions of the form “Let $E$ be a block cipher. Then we say that $E$ is a PRF if . . .”. (The condition “. . .” was given in class and can also be found in the notes.) Now, is the answer yes or no? Well, we don’t know. We conjecture it is yes (up to the limitations given by known attacks) but it could be no. But in any case, this is not the question we were asked, so let us return to the discussion. What then do these ideal-cipher model results tell us about the success probability of an exhaustive key-search attack on DES or AES? Well, speaking strictly mathematically, nothing at all. So why do we use this model? Because, heuristically, it tells us something. We “believe” that DES and AES have some “properties of an ideal cipher.” What we mean by this is that if we figure out the probability of some event in the ideal cipher model, for example the success probability of an attack as above, then in practice, when the attack is run on DES or AES, the success probability will be about what is predicted by the ideal cipher model. We believe this because, heuristically, the block ciphers are designed to have “random behavior.” In principle we might substantiate this belief by experiments. That may be possible for DES, but not for AES, where exhaustive key search attacks are out of practical reach. Thus it is important to appreciate both the value and the limitations of the ideal cipher model. (Later we will see another model, the random oracle model, with the same features.) If you face the problem of designing or analyzing a block cipher based scheme, it is actually a good idea to first get some sense of what the security would be like in the ideal cipher model. Heuristically you would guess the same would be true if you used a “good” block cipher. But even better is to use the PRF/PRP model, and that is what we will be doing. (Can you understand why it is better?)
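As an added sanity check (not part of the original solutions), one can simulate game EKS with lazily sampled random values and compare the measured success rate against Equation (2) for toy parameters; a Python sketch, with $k = n = 4$ chosen arbitrarily:

```python
# Sketch only: Monte Carlo check of Equation (2) by simulating game EKS.
# Only the column E[., M*] matters, and since each key is queried at M* just
# once, sampling that value uniformly respects the permutation property.
import random

def eks_trial(k, n):
    K, N = 2 ** k, 2 ** n
    target = random.randrange(K)              # T*, chosen by main
    c_star = random.randrange(N)              # C* = E[T*, M*]
    for t in range(K):                        # the exhaustive-search adversary
        value = c_star if t == target else random.randrange(N)
        if value == c_star:
            return t == target                # true iff the returned key G equals T*
    return False                              # unreachable: t == target always matches

def estimate(k, n, trials=20000):
    return sum(eks_trial(k, n) for _ in range(trials)) / trials

k, n = 4, 4
K, N = 2 ** k, 2 ** n
exact = (N / K) * (1 - (1 - 1 / N) ** K)
lower = 1 - (K - 1) / (2 * N)
print(f"simulated {estimate(k, n):.3f}  exact {exact:.3f}  lower bound {lower:.3f}")
```

For $k = n = 4$ the exact expression is about 0.644 and the lower bound from part 2 is about 0.531; the simulated value should sit near the former and above the latter.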
{"Source-Url": "http://pages.cs.wisc.edu/~rist/838-spring-2012/ss1.pdf", "len_cl100k_base": 8073, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 42957, "total-output-tokens": 8858, "length": "2e12", "weborganizer": {"__label__adult": 0.0006165504455566406, "__label__art_design": 0.000438690185546875, "__label__crime_law": 0.0020618438720703125, "__label__education_jobs": 0.0015401840209960938, "__label__entertainment": 0.00015020370483398438, "__label__fashion_beauty": 0.0002313852310180664, "__label__finance_business": 0.0006546974182128906, "__label__food_dining": 0.0006437301635742188, "__label__games": 0.002452850341796875, "__label__hardware": 0.003063201904296875, "__label__health": 0.0010805130004882812, "__label__history": 0.0005083084106445312, "__label__home_hobbies": 0.0002410411834716797, "__label__industrial": 0.0014505386352539062, "__label__literature": 0.0005822181701660156, "__label__politics": 0.0005946159362792969, "__label__religion": 0.0008602142333984375, "__label__science_tech": 0.432373046875, "__label__social_life": 0.00015497207641601562, "__label__software": 0.012481689453125, "__label__software_dev": 0.5361328125, "__label__sports_fitness": 0.0005679130554199219, "__label__transportation": 0.0009589195251464844, "__label__travel": 0.0002275705337524414}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23748, 0.03555]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23748, 0.83942]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23748, 0.84413]], "google_gemma-3-12b-it_contains_pii": [[0, 2331, false], [2331, 4988, null], [4988, 7143, null], [7143, 10463, null], [10463, 13901, null], [13901, 15737, null], [15737, 17389, null], [17389, 19831, null], [19831, 23086, null], [23086, 23748, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2331, true], [2331, 4988, null], [4988, 7143, null], [7143, 10463, null], [10463, 13901, null], [13901, 15737, null], [15737, 17389, null], [17389, 19831, null], [19831, 23086, null], [23086, 23748, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23748, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23748, null]], "pdf_page_numbers": [[0, 2331, 1], [2331, 4988, 2], [4988, 7143, 3], [7143, 10463, 4], [10463, 13901, 5], [13901, 15737, 6], [15737, 17389, 7], [17389, 19831, 8], [19831, 23086, 9], [23086, 23748, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23748, 0.0]]}
More on Append What if we type in the query app(X,Y,Z). The answer given by prolog: 1 ?- [app]. app compiled, 0.00 sec, 1,108 bytes. Yes 2 ?- app(X,Y,Z). X = [] Y = _G143 Z = _G143 ; X = [_G260] Y = _G143 Z = [_G260|_G143] ; X = [_G260, _G266] Y = _G143 Z = [_G260, _G266|_G143] ; X = [_G260, _G266, _G272] Y = _G143 Yes 3 ?- Note the order of the answers depends on the order of the clauses in the database If the rules were in the other order then prolog would reverse the order of the output and never generate even one solution Removing an Item from a List How do we remove a single item from a list \[ \text{skip}(\text{item}, \text{list}, \text{list\_with\_item\_removed}) \] If the first element of the list equals the element to remove then the answer is the rest of the list \[ \text{skip}(X, [X|Y], Y). \] Or the first element followed by the rest of the list with the specified element removed \[ \text{skip}(X, [Y|Ys], [Y|Zs]) :- \text{skip}(X,Ys,Zs). \] Testing 4 \?[ skip(2, [1,2,3], X). X = [1, 3] ; No 5 \?[ skip(2, X, [a,b,c]). X = [2, a, b, c] ; X = [a, 2, b, c] ; X = [a, b, 2, c] ; X = [a, b, c, 2] ; No 6 \?[ skip(1, [2,3,4], X). No Permutations How do we generate permutations of a list? Then the following will check for permutations \[ \text{perm}([], []). \\ \text{perm}([X|Xs], Y) :- \text{skip}(X, Y, R), \text{perm}(Xs, R). \] Empty lists are permutations of each other Removing the first element from a list and the same element from the second list and the remainders are permutations Try it out 9 ?- \text{perm}(X, [1, 2]). \[ X = [1, 2] ; \\ X = [2, 1] ; \\ \text{No} \] But it goes into an infinite loop the other way 12 ?- trace,perm([1,2],X). Call:( 8) perm([1,2],_G209) Call:( 9) skip(1,_G209,_L143) Exit:( 9) skip(1,[1|_G322],_G322) Call:( 9) perm([2],_G322) Call:(10) skip(2,_G322,_L170) Exit:(10) skip(2,[2|_G325],_G325) Call:(10) perm([],_G325) Exit:(10) perm([],[]) Exit:( 9) perm([2],[2]) Exit:( 8) perm([1,2],[1,2]) X = [1,2] ; Redo:(10) skip(2,_G322,_L170) Call:(11) skip(2,_G325,_G328) Exit:(11) skip(2,[2|_G328],_G328) Exit:(10) skip(2,[_G324,2|_G328],[_G324|_G328]) Call:(10) perm([],[_G324|_G328]) Fail:(10) perm([],[_G324|_G328]) Redo:(11) skip(2,_G325,_G328) Call:(12) skip(2,_G331,_G334) Exit:(12) skip(2,[2|_G334],_G334) Exit:(11) skip(2,[_G330,2|_G334],[_G330|_G334]) Exit:(10) skip(2,[_G324,_G330,2|_G334],[_G324,_G330|_G334]) Call:(10) perm([],[_G324,_G330|_G334]) Fail:(10) perm([],[_G324,_G330|_G334]) Redo:(12) skip(2,_G331,_G334) Call:(13) skip(2,_G337,_G340) Exit:(13) skip(2,[2|_G340],_G340) Exit:(12) skip(2,[_G336,2|_G340],[_G336|_G340]) Exit:(11) The Letter Problem What consistent assignments to letters make the following sum correct? \[ \begin{array}{c} \text{S} \\ \text{E} \\ \text{N} \\ \text{D} \\ \hline \\ \text{M} \\ \text{O} \\ \text{R} \\ \text{E} \\ \hline \\ \text{M} \\ \text{O} \\ \text{N} \\ \text{E} \\ \text{Y} \end{array} \] We need to be able to add We can type in all of the basic digit addition facts We need to make sure that the letters have distinct values We could do this by using permutations of all the digits and insisting that the letters be part of this permutation Addition problem addlist([],[],0,[]). addlist([X|Xs],[Y|Ys],CO,[Z|Zs]):- add(X,Y,CI,CO,Z),addlist(Xs,Ys,CI,Zs). % addlist([0,S,E,N,D],[0,M,O,R,E],0,[M,O,N,E,Y]). % addlist([1,9,2],[3,4,5],0,X). % % addlist([0,S,E,N,D],[0,M,O,R,E],0,[M,O,N,E,Y]),perm([S,E,N,D,M,O,R,Y|_],[0,1,2,3,4,5,6,7,8,9]). % % add(A,B,C,T,U) add(0,0,0,0,0). ad (1,0,0,0,1). ad (2,0,0,0,2). ad (3,0,0,0,3). 
ad (4,0,0,0,4). ad (5,0,0,0,5). ad (6,0,0,0,6). ad (7,0,0,0,7). ad (8,0,0,0,8). ad (9,0,0,0,9). ad (0,0,1,0,1). ad (1,0,1,0,2). ad (2,0,1,0,3). ad (3,0,1,0,4). ad (4,0,1,0,5). ad (5,0,1,0,6). ad (6,0,1,0,7). ad (7,0,1,0,8). add(8,0,1,0,9). add(9,1,0,1,0). add(0,1,0,0,1). add(1,1,0,0,2). add(2,1,0,0,3). add(3,1,0,0,4). add(4,1,0,0,5). add(5,1,0,0,6). add(6,1,0,0,7). add(7,1,0,0,8). add(8,1,0,0,9). add(9,1,0,1,0). add(0,1,1,0,2). add(1,1,1,0,3). add(2,1,1,0,4). add(3,1,1,0,5). add(4,1,1,0,6). add(5,1,1,0,7). add(6,1,1,0,8). add(7,1,1,0,9). add(8,1,1,1,0). add(9,1,1,1,1). add(0,2,0,0,2). add(1,2,0,0,3). add(2,2,0,0,4). add(3,2,0,0,5). add(4,2,0,0,6). add(5,2,0,0,7). add(6,2,0,0,8). add(7,2,0,0,9). add(8,2,0,1,0). add(9,2,0,1,1). add(0,2,1,0,3). add(1,2,1,0,4). add(2,2,1,0,5). add(3,2,1,0,6). add(4,2,1,0,7). add(5,2,1,0,8). add(6,2,1,0,9). add(7,2,1,1,0). add(8,2,1,1,1). add(9,2,1,1,2). add(0,3,0,0,3). add(1,3,0,0,4). add(2,3,0,0,5). add(3,3,0,0,6). add(4,3,0,0,7). add(5,3,0,0,8). add(6,3,0,0,9). add(7,3,0,1,0). add(8,3,0,1,1). add(9,3,0,1,2). add(0,3,1,0,4). add(1,3,1,0,5). add(2,3,1,0,6). add(3,3,1,0,7). add(4,3,1,0,8). add(5,3,1,0,9). add(6,3,1,1,0). add(7,3,1,1,1). add(8,3,1,1,1). add(9,3,1,1,2). add(0,4,0,0,4). add(1,4,0,0,5). add(2,4,0,0,6). add(3,4,0,0,7). add(4,4,0,0,8). add(5,4,0,0,9). add(6,4,0,1,0). add(7,4,0,1,1). add(8,4,0,1,2). add(9,4,0,1,3). add(0,4,1,0,5). add(1,4,1,0,6). add(2,4,1,0,7). add(3,4,1,0,8). add(4,4,1,0,9). add(5,4,1,1,0). add(6,4,1,1,1). add(7,4,1,1,2). add(8,4,1,1,3). add(9,4,1,1,4). add(0,5,0,0,5). add(1,5,0,0,6). add(2,5,0,0,7). add(3,5,0,0,8). add(4,5,0,0,9). add(5,5,0,1,0). add(6,5,0,1,1). add(7,5,0,1,2). add(8,5,0,1,3). add(9,5,0,1,4). add(0,5,1,0,6). add(1,5,1,0,7). add(2,5,1,0,8). add(3,5,1,0,9). add(4,5,1,1,0). add(5,5,1,1,1). add(6,5,1,1,2). add(7,5,1,1,3). add(8,5,1,1,4). add(9,5,1,1,5). add(0,6,0,0,6). add(1,6,0,0,7). add(2,6,0,0,8). add(3,6,0,0,9). add(4,6,0,1,0). add(5,6,0,1,1). add(6,6,0,1,2). add(7,6,0,1,3). add(8,6,0,1,4). add(9,6,0,1,5). add(0,6,1,0,7). add(1,6,1,0,8). add(2, 6, 1, 0, 9). add(3, 6, 1, 1, 0). add(4, 6, 1, 1, 1). add(5, 6, 1, 1, 2). add(6, 6, 1, 1, 3). add(7, 6, 1, 1, 4). add(8, 6, 1, 1, 5). add(9, 6, 1, 1, 6). add(0, 7, 0, 0, 7). add(1, 7, 0, 0, 8). add(2, 7, 0, 0, 9). add(3, 7, 0, 1, 0). add(4, 7, 0, 1, 1). add(5, 7, 0, 1, 2). add(6, 7, 0, 1, 3). add(7, 7, 0, 1, 4). add(8, 7, 0, 1, 5). add(9, 7, 0, 1, 6). add(0, 7, 1, 0, 8). add(1, 7, 1, 0, 9). add(2, 7, 1, 1, 0). add(3, 7, 1, 1, 1). add(4, 7, 1, 1, 2). add(5, 7, 1, 1, 3). add(6, 7, 1, 1, 4). add(7, 7, 1, 1, 5). add(8, 7, 1, 1, 6). add(9, 7, 1, 1, 7). add(0, 8, 0, 0, 8). add(1, 8, 0, 0, 9). add(2, 8, 0, 1, 0). add(3, 8, 0, 1, 1). add(4, 8, 0, 1, 2). add(5, 8, 0, 1, 3). add(6, 8, 0, 1, 4). add(7, 8, 0, 1, 5). add(8, 8, 0, 1, 6). add(9, 8, 0, 1, 7). add(0,8,1,0,9). add(1,8,1,0,0). add(2,8,1,1,1). add(3,8,1,1,2). add(4,8,1,1,3). add(5,8,1,1,4). add(6,8,1,1,5). add(7,8,1,1,6). add(8,8,1,1,7). add(9,8,1,1,8). add(0,9,0,0,9). add(1,9,0,0,0). add(2,9,0,1,1). add(3,9,0,1,2). add(4,9,0,1,3). add(5,9,0,1,4). add(6,9,0,1,5). add(7,9,0,1,6). add(8,9,0,1,7). add(9,9,0,1,8). add(0,9,1,1,0). add(1,9,1,1,0). add(2,9,1,1,1). add(3,9,1,1,2). add(4,9,1,1,3). add(5,9,1,1,4). add(6,9,1,1,5). add(7,9,1,1,6). add(8,9,1,1,7). add(8,9,1,1,8). add(9,9,1,1,9). Output 3 ?- addlist([1,9,2],[3,4,5],0,X). X = [5, 3, 7] ; No 4 ?- addlist([0,S,E,N,D],[0,M,O,R,E],0,[M,O,N,E,Y]). 
S = 0 E = 0 N = 0 D = 0 M = 0 O = 0 R = 0 Y = 0 ;
S = 0 E = 0 N = 0 D = 1 M = 0 O = 0 R = 0 Y = 1 ;
S = 0 E = 0 N = 0 D = 2 M = 0 O = 0 R = 0 Y = 2
Yes

Trying and Avoiding Duplicates

7 ?- addlist([0,S,E,N,D],[0,M,O,R,E],0,[M,O,N,E,Y]),perm([S,E,N,D,M,O,R,Y|_],[0,1,2,3,4,5,6,7,8,9]).
S = 2 E = 8 N = 1 D = 7 M = 0 O = 3 R = 6 Y = 5 ;
S = 2 E = 8 N = 1 D = 7 M = 0 O = 3 R = 6 Y = 5 ;
S = 2 E = 8 N = 1 D = 9 M = 0 O = 3 R = 6 Y = 7 ;
S = 2 E = 8 N = 1 D = 9 M = 0 O = 3 R = 6 Y = 7 ;
S = 3 E = 7 N = 1 D = 9 M = 0 O = 4 R = 5 Y = 6 ;
S = 3 E = 7 N = 1 D = 9 M = 0 O = 4 R = 6 Y = 9 ;

Cuts

Sometimes a programmer would like to abandon a search or prune the depth-first search that Prolog uses. A solution may have been found and it is a waste of time to continue looking.

The cut operator is ! and causes the search never to back up past the ! operator. In particular, no further clauses with the same head will be tried. Neither will other alternatives for the goals appearing before the cut in the current clause be tried. Goals to the right of the cut behave normally and can backtrack.

Cut is a "hack" and does not have a good logical explanation, but it is a useful tool. It can cause illogical behavior if it is used incorrectly, and cuts have global effects.

Cut Example

Given the database

s(a).
s(b).
r(a).
r(b).
p(X, Y) :- l(X).
p(X, Y) :- r(X), !, ...    <-- note the cut
p(X, Y) :- m(X), ...

and the query

?- s(A), p(B, C).

The system will
unify A with a
unify B with X and C with Y in the first p clause
search for l(X) and fail
unify B with X and C with Y in the second p clause
unify r(X) with r(a)
do the ... stuff
Because of the cut r(X) will not be unified with r(b)
the third p(X,Y) clause will not be tried
if the ... fails then the entire clause fails

Kinds of cuts - White cuts

White cuts are those which do not discard solutions. They improve performance because they avoid backtracking (which would fail, anyway), and they, in some Prolog implementations, avoid creating choicepoints at all. An example of a white cut is:

max(X, Y, X) :- X > Y, !.
max(X, Y, Y) :- X =< Y.

The two tests are mutually exclusive: since (because of the way arithmetic works in Prolog) both X and Y must be instantiated to numbers, if the first clause succeeds (which will happen if the cut is reached), then the second will not; conversely, if the second clause is to succeed, then the first one could not have succeeded, and the cut in it would not have been reached.

Kinds of cuts - Green cuts

Green cuts are those which discard correct solutions which are not needed. Sometimes a predicate yields several solutions, but one is enough for the purposes of the program--or one is preferred over the others. Green cuts discard the solutions not wanted, but all solutions returned are correct.

For example, if we had a database of addresses of people and their workplaces, and we wanted to know the address of a person, we might prefer his/her home address, and if not found, we should resort to the business address. This predicate implements this query:

address(X, Add) :- home_address(X, Add), !.
address(X, Add) :- business_address(X, Add).
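For illustration only (the facts below are hypothetical and not part of the notes), with the address/2 predicate above one would observe:

```prolog
% Hypothetical facts, added here only to illustrate the green cut:
home_address(ann, london).
business_address(ann, leeds).
business_address(bob, paris).

% ?- address(ann, A).   yields only A = london: the cut discards the correct
%                       but unwanted business address on backtracking.
% ?- address(bob, A).   yields A = paris via the second clause.
```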
Another useful example is checking if an element is a member of a list, without either enumerating (on backtracking) all the elements of the list or instantiating on backtracking possible variables in the list. The membercheck/2 predicate does precisely this: when the element sought for is found, the alternative clause which searches in the rest of the list is not taken into account:

membercheck(X, [X|Xs]) :- !.
membercheck(X, [Y|Xs]) :- membercheck(X, Xs).

Again, it might be useful in some situations, mainly because of the savings in memory and time it helps to achieve. But it should be used with caution, ensuring that it does not remove solutions which are needed.

Kinds of cuts - Red cuts

Red cuts both discard correct solutions not needed, and can introduce wrong solutions, depending on the call mode. This causes predicates to be wrong according to almost any sensible meaning. For example, if we wanted to know how many days there are in a year, taking into account leap years, we might use the following predicate:

```prolog
days_in_year(X, 366) :- number(X), leap_year(X), !.
days_in_year(X, 365).
```

The idea behind it is: "if X is a number and a leap year, then we succeed, and do not need to go to the second clause. Otherwise, it is not a leap year." But the query ?- days_in_year(z, D) succeeds (with D = 365), because the predicate does not take into account that, in any case, a year must be a number. It is arguable that this predicate would behave correctly if it is always called with X instantiated to a number, but the check number(X) would not be needed, and correctness of the predicate will then be completely dependent on the way it is called--which is not a good way of writing predicates.

More Red Cut Examples

Look at the following implementation of the max/3 predicate which works out the maximum of two numbers:

max(X, Y, X) :- X > Y, !.
max(X, Y, Y).

The idea is: if X > Y, then there is no need to check whether X =< Y or not, hence the cut. And, if the first clause failed, then clearly the case is that X =< Y. But there are two serious counterexamples to this: the first is the query ?- max(5, X, X), which succeeds binding nothing (instead of failing or giving an error, which would in any case be a better behavior, at least indicating that there has been a call with a wrong instantiation mode).

In any case, the second counterexample does not violate any sensible assumption: the call ?- max(5, 2, 2) succeeds instead of failing, because the first head unification fails and the second succeeds! What happens here is a case of the so-called "output unification": there are unifications made before the cut, which means that data is changed prior to the tests which determine if the (first, in this case) clause is the right one or not. Changing the program to

max(X, Y, Z) :- X > Y, !, X = Z.
max(X, Y, Y).

will make the predicate behave correctly in both counterexamples (giving an error in the first, failing in the second).

Negation as Failure

Negation in Prolog is implemented based on the use of cut. Actually, negation in Prolog is the so-called negation as failure, which means that to negate p one tries to prove p (just executing it), and if p is proved, then its negation, not(p), fails. Conversely, if p fails during execution, then not(p) will succeed. The implementation of not/1 is as follows:

not(Goal) :- call(Goal), !, fail.
not(Goal).
(fail/0 is a builtin predicate which always fails. It can be trivially defined as fail :- a = b.)

not/1 is usually available as the (prefix) predicate \+ /1 in most Prolog systems. I.e., not(p) would be written \+ p.

Since not(p) will try to execute p, if the execution of p does not terminate, the execution of not(p) will not terminate, either. Also, since not(p) succeeds if and only if p failed, not(p) will not instantiate any variable which could appear in p. This is not a logically sound behavior, since, from a formal point of view, not(p) should succeed and instantiate variables for each term for which p is false. The problem is that this would very likely lead to an infinite number of solutions. But using negation with ground goals (or, at least, with calls to goals which do not further instantiate free variables which are passed to them) is safe, and the programmer should ensure this to hold. Otherwise, unwanted results may show up:

Examples of Negation as Failure

unmarried_student(X) :- not(married(X)), student(X).
student(joe).
married(john).

This program seems to suggest that joe is an unmarried student, and that john is not an unmarried student, and indeed:

?- unmarried_student(joe).
yes
?- unmarried_student(john).
no

But, for logical consistency, asking for unmarried students should return joe as answer, and this is not what happens:

?- unmarried_student(X).
no

The reason for this is that the call to not(married(X)) is not returning the students which are not married: it is just failing because there is at least a married student.

The use of cut and a fail in a clause forces the failure of the whole predicate, and is a technique termed cut-fail. It is useful to make a predicate fail when a condition (which may be a call to an arbitrary predicate) succeeds. An example of the cut-fail combination is implementing in Prolog the predicate ground/1, which succeeds if no variables are found in a term, and fails otherwise. The technique is recursively traversing the whole term, and forcing a failure as soon as a variable is found:

```prolog
ground(Term) :- var(Term), !, fail.
ground(Term) :- nonvar(Term), functor(Term,F,N), ground(N,Term).
ground(0,T).
ground(N,T) :- arg(N,T,Arg), ground(Arg), N1 is N-1, ground(N1,T).
```
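A few sample queries (added here for illustration; they are not in the original notes) show the cut-fail technique and negation as failure working together:

```prolog
% ?- ground(f(a, g(b))).       yes  -- no variables anywhere in the term
% ?- ground(f(a, g(X))).       no   -- the var/1 cut-fail clause fires at X
% ?- not(ground(f(a, g(X)))).  yes  -- negation as failure of the goal above
```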
{"Source-Url": "http://www.cs.rit.edu/~swm/cs450/Prolog1.pdf", "len_cl100k_base": 6648, "olmocr-version": "0.1.53", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 37595, "total-output-tokens": 8203, "length": "2e12", "weborganizer": {"__label__adult": 0.00026988983154296875, "__label__art_design": 0.00028634071350097656, "__label__crime_law": 0.0003714561462402344, "__label__education_jobs": 0.001308441162109375, "__label__entertainment": 0.00010019540786743164, "__label__fashion_beauty": 0.00012695789337158203, "__label__finance_business": 0.00014102458953857422, "__label__food_dining": 0.00034737586975097656, "__label__games": 0.0012416839599609375, "__label__hardware": 0.000957012176513672, "__label__health": 0.00030040740966796875, "__label__history": 0.00024116039276123047, "__label__home_hobbies": 0.00011682510375976562, "__label__industrial": 0.00044465065002441406, "__label__literature": 0.0004208087921142578, "__label__politics": 0.00019240379333496096, "__label__religion": 0.0005159378051757812, "__label__science_tech": 0.03460693359375, "__label__social_life": 9.864568710327148e-05, "__label__software": 0.0179290771484375, "__label__software_dev": 0.93896484375, "__label__sports_fitness": 0.0002467632293701172, "__label__transportation": 0.00041794776916503906, "__label__travel": 0.00014829635620117188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16355, 0.17466]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16355, 0.84863]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16355, 0.66797]], "google_gemma-3-12b-it_contains_pii": [[0, 580, false], [580, 1222, null], [1222, 1679, null], [1679, 2739, null], [2739, 3297, null], [3297, 4001, null], [4001, 4609, null], [4609, 5217, null], [5217, 5825, null], [5825, 6585, null], [6585, 7081, null], [7081, 7356, null], [7356, 7643, null], [7643, 7795, null], [7795, 8462, null], [8462, 9092, null], [9092, 9819, null], [9819, 11165, null], [11165, 12214, null], [12214, 13559, null], [13559, 15009, null], [15009, 15456, null], [15456, 16355, null]], "google_gemma-3-12b-it_is_public_document": [[0, 580, true], [580, 1222, null], [1222, 1679, null], [1679, 2739, null], [2739, 3297, null], [3297, 4001, null], [4001, 4609, null], [4609, 5217, null], [5217, 5825, null], [5825, 6585, null], [6585, 7081, null], [7081, 7356, null], [7356, 7643, null], [7643, 7795, null], [7795, 8462, null], [8462, 9092, null], [9092, 9819, null], [9819, 11165, null], [11165, 12214, null], [12214, 13559, null], [13559, 15009, null], [15009, 15456, null], [15456, 16355, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16355, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, true], [5000, 16355, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16355, null]], "pdf_page_numbers": [[0, 580, 1], [580, 1222, 2], [1222, 1679, 3], [1679, 2739, 4], [2739, 3297, 5], [3297, 4001, 6], [4001, 4609, 7], [4609, 5217, 8], [5217, 5825, 9], [5825, 6585, 10], [6585, 7081, 11], [7081, 7356, 12], [7356, 7643, 13], [7643, 7795, 14], [7795, 8462, 15], [8462, 9092, 16], [9092, 9819, 17], [9819, 11165, 18], [11165, 12214, 19], [12214, 13559, 20], [13559, 15009, 21], [15009, 15456, 22], [15456, 16355, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16355, 0.0]]}
Supplementing Windows Audit, Alerting, and Remediation with PowerShell

Daniel Owen

Supplementing Windows Audit, Alerting, and Remediation with PowerShell
GIAC (GCWN) Gold Certification
Author: Daniel Owen, ggold@danielowen.com
Advisor: Adam Kliarsky
Accepted: October 20, 2017

Abstract

This paper outlines the use of PowerShell to supplement audit, alerting, and remediation platforms for Windows environments. It answers the question of why PowerShell should be used for these purposes. Several examples of using PowerShell are included to start the thought process on why PowerShell should be the security multi-tool of first resort. Coverage includes how to implement these checks in a secure, automatable way. To demonstrate the concepts discussed, small code segments are included. The intent of the included code segments is to inspire the reader's creativity and create a desire to use PowerShell to address challenges in their environment. Finally, a short section includes resources for code examples and learning tools. While some knowledge of PowerShell will aid the reader, the intended audience of this paper is the PowerShell novice.

1. Introduction

Understanding what exists in the protected environment is the beginning of any successful defensive security program, and internal auditing is a path toward gaining that understanding. Audits further allow the testing of assumptions about the existing security posture and comparison to the expected or documented standard (Christopher, 2010). Studies have shown that implementing the first five CIS Controls, from the Center for Internet Security, prevents ~85% of attacks seen in the wild. All five of these controls require an audit component to find success or prove their implementation. CIS further advises an audit as a foundational step toward developing a plan for implementing the CIS Critical Controls (Center for Internet Security, n.d.). While the emphasis of this paper is practical security improvements, there is overlap with third-party audit controls such as PCI or HIPAA. As such, references that illuminate their relationship are also included.

As an extension of point-in-time auditing, it is critical to detect and quickly remediate changes to standard secure configurations. It is not realistic for an organization to expect to be able to do this manually. Through Continuous Risk Treatment (CRT), we can automate the process of detecting, alerting, and in some cases remediating configuration skew (Steffan & Sandage, 2017). To this goal, the paper discusses the process of taking a script initially used for point-in-time audits and automating it to provide continual coverage.

A force multiplier allows an increased output from a given input (Kaufman, 2012, p. 158-159). A lever, such as a crowbar, is a simple physical world example of a force multiplier. Spending a relatively small amount of time using PowerShell as a force multiplier generates dividends many times over in time savings and better outcomes. This paper explores methods of using PowerShell to supplement existing auditing tools and for using the data to automate alerting and remediation efforts. Through this process, system defenses are significantly improved.

One goal of this paper is to introduce the security practitioner to a sampling of ways to use PowerShell in a defensive manner; however, the larger goal is to inspire the reader to expand upon the examples in this paper and use PowerShell to fill gaps in their own security infrastructure.
## 2. Why PowerShell?

### 2.1 Supplemental tool

PowerShell should not be the only tool used for auditing, alerting, or remediation, but it adds considerably to the professional's toolbox. PowerShell is one of the more versatile tools currently available while still retaining an approachable learning curve, and because many security professionals already use it, it is easy to learn from and borrow from the community. In addition, many third-party tools can use PowerShell to extend their functionality. As an example, Nessus can use PowerShell for compliance auditing. Paul Asadoorian demonstrates this in a number of examples for a Tenable blog post showing Nessus rules written using PowerShell (Asadoorian, 2012). In the case of using PowerShell to extend the functionality of Nessus, Tenable has provided a set of PowerShell cmdlets to integrate directly with the Nessus API (Tenable, 2015). At a more basic level, PowerShell can be used as a data transformation tool on CSV exports from Nessus, as demonstrated in a SANS Internet Storm Center Handler's Diary by Rob VandenBrink (VandenBrink, n.d.). By combining these two approaches, relatively complex automation can be achieved. PowerShell can be used both as a standalone tool and to fill in holes where existing tools are incomplete.

### 2.2 Flexibility

PowerShell is built on the .NET Framework (Microsoft, n.d.a). This provides great flexibility in that PowerShell scripts have access similar to any other .NET language. Cmdlets are the building blocks of PowerShell scripts. They use a basic verb-noun naming convention and accept parameters to control their usage. As an example, Get-ADUser is the cmdlet used to query Active Directory to retrieve user objects. Additional cmdlets that expose more functionality in the underlying .NET Framework are included in each new version of PowerShell. It is also possible to call .NET Framework classes directly (Wilson, 2010). There are 1,285 cmdlets in PowerShell 5 (Wilson, 2015). With this wide array of cmdlets, a process that cannot be audited or automated exclusively with PowerShell cmdlets is a rare challenge.
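To make this concrete, the short sketch below (not one of the paper's numbered figures) pairs a cmdlet call with a direct .NET Framework call; the Active Directory query assumes the RSAT ActiveDirectory module is installed, and the host name is a placeholder:

```powershell
# Cmdlet: query Active Directory for enabled users (requires the RSAT ActiveDirectory module)
Import-Module ActiveDirectory
Get-ADUser -Filter {Enabled -eq $true} -Properties LastLogonDate |
    Select-Object SamAccountName, LastLogonDate -First 5   # Limit output for demonstration

# Direct .NET call: resolve a host name with a framework class rather than a cmdlet
[System.Net.Dns]::GetHostAddresses('www.example.com')
```

The same pattern extends to any .NET class, which is what gives PowerShell reach beyond its built-in cmdlets.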
### 2.3 Part of the Operating System

PowerShell was first released as an optional feature of Windows Server 2008 (Vanover, 2009). For later versions of Windows, it became a standard part of the install. PowerShell is now part of the operating system for all supported versions of Windows and does not require any additional software to be loaded. This is an advantage because other scripting languages require additional interpreters. Installing and maintaining interpreters outside the standard Microsoft patching cycle adds management overhead, complexity, and cost, while expanding the attack surface. Bruce Schneier summed this issue up succinctly in Secrets and Lies when he said, "Simply put, complexity is the worst enemy of security. As systems get more complex, they necessarily get less secure" (Schneier, 2015, p. 3).

### 2.4 The future of Windows administration

The future of Windows Server administration has less to do with the Graphical User Interface (GUI) today than it did prior to the release of PowerShell, and this becomes more obvious with each new Windows release. Server Core for Windows Server 2008 was Microsoft's first attempt at a server operating system without a GUI. The stated goal for Server Core is a lighter-weight installation requiring fewer server resources, less management, and a smaller attack surface. The central concept behind Server Core management is that the system is primarily managed using PowerShell or remote administration tools (Microsoft, n.d.b). With each subsequent Windows Server release, Microsoft has evolved the Server Core option. In Windows Server 2016, Microsoft has taken the minimalist operating system even further with Nano Server, an even lighter operating system for heavily virtualized and cloud environments (Ferrill, 2016). Due to Nano Server's minimalist nature, many tools that Windows administrators have become accustomed to do not work. This includes Microsoft standbys such as Group Policy and System Center Configuration Manager, and even the version of PowerShell in Nano has limitations (Poggemeyer & Jaimeo, 2017). For Nano Server, custom scripting may be the only option for management and automation, at least in the short term.

Microsoft is clearly changing how Windows servers are administered: simply stated, PowerShell is the future of Windows administration and automation. Furthermore, the speed at which a competent scripter can complete and automate tasks, relative to the repeated time cost of performing those tasks manually, is significant. The only remaining question is how long, not whether, those who refuse to learn PowerShell can survive in their profession. While this may affect Windows administrators first, security professionals should not expect any less radical a change.

### 2.5 Other Scripting Languages

There are a number of other scripting languages that can be used for development on Windows, including Visual Basic, batch scripts, Python, Perl, and Bash. For the reasons outlined above, this paper concentrates on PowerShell as the preferred language for scripting on Windows. Development in other scripting languages can use many of these same concepts, but other languages may be more limited in functionality. Carefully consider the significant advantages of using PowerShell for Windows automation before deciding to use an alternative development language.

## 3. Uses of PowerShell

### 3.1 Administrative Group Members

There are a number of highly privileged groups in Active Directory that are critical to its operation, and for this reason they are tempting targets for attackers. For example, the Domain Admins group is described by Microsoft as having "complete control over all domain controllers and all directory content stored in the domain" and the ability to "modify the membership of all administrative accounts in the domain" (Microsoft, n.d.c). Following the concept of least privilege, which requires granting the user the minimum possible access needed to complete their tasks (Bishop, 2002), there should be as few people in privileged groups, such as the Domain Admins group, as possible. A Domain Admin's primary role is as a database administrator for Active Directory; therefore, it is not desirable to have users logged in as a Domain Admin for other tasks. For this reason, quick alerting on Domain Admins group changes is critical. It allows quick remediation when someone who is not authorized is added to the group, which, in turn, helps to protect the company from rogue or malicious acts as well as from mistakenly overprovisioned users.
To accomplish this audit and remediation goal, users in the Domain Admins group are compared to a list of users who are authorized to be members of the group. To complete this in a script, there are two easy ways to define the authorized list: either a file that the script reads or a comparison group from Active Directory. Both approaches have advantages and disadvantages. The advantage of using an Active Directory "authorization" group is that it is easy to manage and document. This is of limited use, however, since the script is meant to protect against an adversary who already has rights in Active Directory. For example, attackers frequently clone an existing Domain Admins group member as a form of persistence, and the authorizing group membership would be copied along with the account. For this reason, and for simplicity, this script uses a text file stored outside of Active Directory. Registry hives, SQL databases, or a number of other stores would also be options for holding the authorized users.

```
$allowed = Import-Csv 'C:\scripts\get-admins\allowed.csv' #File containing SIDs of users allowed to be members of Domain Admins
$group = 'Domain Admins' #Group being protected
$members = Get-ADGroupMember $group #Get members of the group we are protecting
if ($allowed.Count -eq 0) { exit } #Failsafe: stop if the allowed file is empty or could not be read
ForEach ($m in $members) { #Compare every member of the group
  $account = $m.SID.ToString() #SID of the member currently being compared; this simplifies code later
  if ($allowed.sid -notcontains $account) { #Member is not on the allowed list, so take action
    Disable-ADAccount -Identity $account #Disable the AD account that was added to the group
    Remove-ADGroupMember -Identity $group -Members $account -Confirm:$false #Remove the account from the group
    if ((Get-EventLog -LogName Application -Source 'get-admins-script' -ErrorAction SilentlyContinue) -eq $null) {
      New-EventLog -LogName Application -Source 'get-admins-script' } #Register the event source if it does not exist
    Write-EventLog -LogName Application -Source 'get-admins-script' -EntryType Warning -EventId 0 -Message "$account was removed from $group as part of the get-admins security process." #Write an audit event
  }
}
```

Figure 1 - This sample script is included in Appendix A as get-admins.ps1

The sample code above demonstrates a simple script for detecting unauthorized additions to the Domain Admins group. Once the script detects an unauthorized user, it disables the Active Directory account and removes the user from the group. Finally, the script writes an event to the system's Application log to allow for auditing. To keep the code as tight as possible, blank lines have been removed, and several steps that would typically be included for maintainability and additional remediation have been omitted. A more complete version of this script is included in Appendix A as get-admins.ps1.

### 3.2 Identifying outdated local account passwords

Many security practitioners believe that regular password changes are part of good account hygiene. Further, some regulatory or best-practice frameworks require changing passwords on a regular basis. For example, PCI DSS version 3.2, requirement 8.2.4, states that users must "change user passwords/passphrases at least once every 90 days" (PCI Security Standards Council, 2016). Item 1.1.2 of the CIS Microsoft Windows Server 2016 RTM (Release 1607) Benchmark requires "Ensure 'Maximum password age' is set to '60 or fewer days, but not 0'" (Center for Internet Security, 2017). A simple PowerShell script can test both of these requirements and ensure that old passwords do not linger longer than policy allows. The script that follows looks at every enabled Active Directory account and creates a Comma Separated Values (CSV) file listing all accounts that are beyond the desired password age. The sample script is four lines, with documentation and coding styled for ease of reading, but if one were to give up these niceties, it could become a one-line script. PowerShell makes this type of reporting simple.

```
$MaxAge = 90 # Maximum days since last password change
$Date = (Get-Date).AddDays(-$MaxAge) # Date of the oldest password that does not yet require a change
$Users = Get-ADUser -Filter {PasswordLastSet -lt $Date -and Enabled -eq $true} # Accounts that are past due for a password change
$Users | Export-Csv -NoTypeInformation c:\temp\oldpass.csv # Export all users who need to change their passwords to a CSV
```

*Figure 2 – An extended version of this script is included in Appendix A as get-oldpassword.ps1*

This can be expanded further by adding a few more lines of code that create a Help Desk ticket or send an email so the accounts that are beyond their timeouts can be reviewed. Scheduling the script to run as frequently as needed closes the loop on automation. This last option is a good example of the flexibility of PowerShell, which is limited largely by the user's imagination and skill, unlike more traditional audit tools, which are limited by their developers' imagination and priorities. Appendix A houses this sample script as get-oldpassword.ps1.

### 3.3 Identify unauthorized email forwards

Email has made communication within and between companies easier, but with that ease comes the risk of unauthorized data leakage. One example of such unauthorized communication is the automated forwarding of company email to an account outside the company's control. This can happen when a user wants to use a non-company account and forwards email to their preferred platform, or when an attacker does the same.
While the first may be a policy violation, the second is potentially more worrying, as it allows an attacker to begin to profile a target user and organization. The attacker is able to see what typical organizational emails look like as well as collect insider communications about the company and its clients. Using the collected data, other attacks such as phishing, ransomware, or financial fraud can be launched (Cidon, 2017).

There are two approaches the defender can take when looking at this issue. Assuming forwarding is never allowed, fully automated attack detection and remediation can be achieved. This follows a process similar to:

1. Detect creation of a forward.
2. Remove the email forward.
3. Disable the account or change the account password.
4. Contact the user to determine whether this was an external attack or a user action. Depending on the answer, other internal processes for security incidents or policy compliance remediation will follow.

Obviously, which, if any, of these steps are automated in a given company is determined by its risk acceptance level. In a company with a higher risk tolerance, a lower tolerance for impacting end users, or a policy that allows users to auto-forward their email, a more restrained approach must be taken. In this case, a list is generated on a regular basis so the user can be contacted outside of email to confirm they created the forward rule. The security incident process is initiated if the user did not create the rule. Multiple single-line scripts to identify accounts in Exchange or Office 365 that have been configured to forward email can be found with a simple web search (Grogan, 2011). This code can be used as is for a one-time test, or built upon as part of a larger script that generates automated alerting and remediation as previously described. Once again, only the imagination and needs of the script's author limit this script.

### 3.4 Identify Inactive Accounts

Much as it is important to disable unused services, it is also critical to disable or remove unneeded Active Directory accounts. This helps to identify users who may have left employment, service accounts for applications that are no longer in use, or other accounts that have become dormant. This type of cleanup may also be a direct or indirect regulatory requirement. PCI DSS version 3.2, section 8.1.4, states unequivocally, "remove/disable inactive user accounts within 90 days" (PCI Security Standards Council, 2016). HIPAA is somewhat less prescriptive with regulation 164.308(a)(3)(ii)(C), which states, "Implement procedures for terminating access to electronic protected health information when the employment of a workforce member ends" (Public Welfare, 2007). Using a script that reviews Active Directory for last login date meets both the compliance and the regulatory requirements. While PCI DSS may be comfortable with an unused account being active for 90 days, a high-security environment may require shorter timeouts. Canned tools may not provide the flexibility to change timeouts easily, but PowerShell allows the automation to meet the security needs of the company using the script rather than those of a third-party compliance standard. This could include, but is not limited to, different policies based on Organizational Unit, group membership, manager, or time of year.
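As a minimal sketch of such a check (not one of the appendix scripts), the following assumes a 90-day threshold; the threshold and output path are placeholders to be adjusted to local policy:

```powershell
$InactiveDays = 90                              # Accounts idle longer than this are flagged
$Cutoff = (Get-Date).AddDays(-$InactiveDays)    # Oldest acceptable logon date

# Enabled accounts that have never logged on, or whose last logon is older than the cutoff
Get-ADUser -Filter {Enabled -eq $true} -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -eq $null -or $_.LastLogonDate -lt $Cutoff } |
    Select-Object SamAccountName, LastLogonDate |
    Export-Csv -NoTypeInformation 'C:\temp\inactive.csv'   # Report for review or later remediation
```

Note that LastLogonDate is derived from the replicated lastLogonTimestamp attribute, which can lag the true last logon by up to roughly two weeks; that margin is usually acceptable against a 90-day policy.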
Since this is a common need for security professionals and systems administrators, searching popular code repositories identifies multiple scripts that automate this search. As an example, Microsoft partner TSO has created GetInactiveComputer.ps1 and made it available through TechNet (TSO, 2013). This simple script is both functional and a good starting point for a more complex script. On the other end of the complexity continuum, Luca Sturlese of 9to5IT has published PS-ManageInactiveAD to GitHub. Included in that package is Find-ADInactiveUsers.ps1, which has more built-in functionality and can be controlled using runtime variables (Sturlese, 2016). Scheduling such a script can provide automated remediation. PowerShell skills are valuable, but this use case demonstrates leveraging PowerShell without writing the first line of code, since suitable solutions were already available. It is often far more efficient to use or modify existing free scripts than to write new code.

### 3.5 Business Logic Errors

There are a number of business-specific errors that can have a negative effect on security but are unlikely to be addressed by a canned security audit solution. These are excellent opportunities for PowerShell to show its flexibility. There are often Active Directory groups whose membership is based on some other attribute of the account. This could be as simple as requiring that all members of a group be located in a specific geographic location, or it could be more complex and include all users who report directly or indirectly to a specific manager, have a specific title level, and are located in a specific location. As mentioned before, this is an esoteric set of requirements, but with PowerShell it is trivial to produce a report of users who are out of compliance, or even to auto-remediate the situation.

For the first example, a group made up of members of a specific geographic location, a sample script can easily be created. The first step in writing this script is to decide on the desired result. In this example, the script looks for any Active Directory user in the "US_Associates" group whose account object does not show their location as United States. There might be other constraints, such as excluding service accounts or users with a specific job title, but for simplicity this example is limited. By adding the single line "Remove-ADGroupMember -Identity 'US_Associates' -Members $invalidmembers -Confirm:$false", this script can be taken a step further: it automates the removal of the users from the group. When automating removal from a group, it may be desirable to send an audit log to a human for final review. This can be completed easily through email by using the Send-MailMessage cmdlet. For usability, this is wrapped in a conditional so an email is sent only if an object has been removed from the group.

```
if ($invalidmembers.Count -gt 0) {
    Remove-ADGroupMember -Identity 'US_Associates' -Members $invalidmembers -Confirm:$false
    Send-MailMessage -To "user1@example.com" -From "user2@example.com" -Subject "Users removed from US_Associates" -SmtpServer smtp.example.com
}
```

This demonstrates how a script can be extended after it is completed to add functionality, or how it can be built in parts as new needs are discovered. The full code for this script is reproduced in Appendix A as get-invalidgroupmembers.ps1.
Taking the second, more esoteric example from before, this next example creates a list of users who report directly or indirectly to a specific manager, have a specific title level, and are located in a specific location. PowerShell has no native way to recursively create a list of a manager's direct and indirect reports, nor is this a trivial scripting exercise. As mentioned previously, when a script is going to require significant code it is often best to leverage an existing script if possible. In this case, Microsoft MVP François-Xavier Cat has already written code to recurse management levels and posted it to TechNet as Get-ADDirectReports (Cat, 2015). The custom script imports it, a good example of mixing existing code with new custom scripting in PowerShell. By reusing existing code, the script remains very small and simple. There are hundreds of examples of very specific business logic issues for which an existing security application is unlikely to be found. As demonstrated, PowerShell can make quick work of those issues.

## 4. Automating Scripts

As alluded to earlier, running a PowerShell script on an as-needed basis can be extremely useful, but automating the script adds a new dimension to the force multiplier effect. Automation opens up new opportunities such as regularly generated reports, automated tickets for remediation, and fully automated remediation efforts, among other options. As has been said before, the developer's imagination and needs are the only limiting factors in the opportunities for automation. There are a number of ways to automate scripts, and this paper looks at two of the most common. Security concerns introduced by automation are also considered.

### 4.1 Scheduled Tasks

Using the Windows Task Scheduler is likely the easiest and most common way to run a PowerShell script on a repeating basis. Scheduled tasks can be created directly on the machine that runs the task, or a Group Policy can be used to push the scheduled task to remote systems. Creating tasks locally is useful for tasks that connect remotely to other systems to make changes or gather information, and for scripts that only need to run on a single system. Pushing a scheduled task to remote systems is beneficial for situations where a script needs to run on a regular basis without interaction with other systems. This can also be helpful for machines that are not always accessible across the network, such as laptops. Since Group Policy refreshes on a regular basis, this has the added benefit of being self-correcting if the scheduled task is changed or removed. Third-party job schedulers can achieve the same goals; they are out of scope for this paper but should be considered in environments that have standardized on an alternate scheduler.
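As a minimal sketch (assuming Windows Server 2012 R2 or later, where the ScheduledTasks module is available), the get-admins.ps1 script from section 3.1 could be registered to run nightly under the SYSTEM account; the task name, schedule, and script path are placeholders:

```powershell
# Run the audit script nightly at 3:00 AM as SYSTEM so that no user password is stored with the task
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy AllSigned -File C:\scripts\get-admins\get-admins.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'get-admins audit' -Action $action -Trigger $trigger -User 'NT AUTHORITY\SYSTEM' -RunLevel Highest
```

Running the task as SYSTEM avoids storing a user credential with it, in line with the guidance in the next section; where the script must reach other machines, the computer account can be delegated the necessary rights instead.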
### 4.2 Automation Security

It is important to consider the security ramifications of running scheduled tasks using Task Scheduler. Many automated tasks require elevated rights, so it is critical to consider the tradeoffs inherent in automating PowerShell scripts.

PowerShell code can be signed using a code-signing certificate issued by a trusted internal PKI server or a third-party certificate authority. By setting the Execution Policy to "AllSigned", PowerShell only executes signed code. Running "Set-ExecutionPolicy AllSigned -Force" from within PowerShell achieves this goal (Perez, 2013). Setting the Execution Policy is not a guarantee that unsigned code will not execute. Scott Sutherland authored the article "15 Ways to Bypass the PowerShell Execution Policy" (Sutherland, 2014), in which he catalogs a number of ways to achieve exactly that. Even so, setting the execution policy does help reduce risk in the context of automation. Sutherland's work concentrates on a user who already has access to run scripts as an elevated user. In the case of automation, the "AllSigned" setting is there to reduce the risk of a script on the server's hard drive being modified and then executed as a scheduled task.

Credentials may be stored on a system running scheduled tasks, and these credentials can be recovered using tools such as mimikatz (Delpy, 2017). When an attacker gains admin-level access to a system where scheduled tasks are running, the assumption should be that credentials have been stolen. For this reason, it is best not to use a full user account. It is safer to run the scheduled task as Network Service or System, since these services have no password to be stolen and are therefore more secure options. These services, via their Computer object in Active Directory, can still be delegated access to remote resources. If a true user account must be used, Kerberos S4U (Services for User) can be used with constrained delegation, which limits the damage of lost credentials (Fossen, 2014, p. 56).

## 5. Additional Sources of PowerShell Scripts

There are a number of additional sources of complete code or scripts that can be built upon. As mentioned before, when a viable script exists it is a waste of resources to write that script again. Similarly, by looking at someone else's code and borrowing ideas, even challenging scripts can be completed. Microsoft encourages the use of GitHub repositories for shared coding projects (Harry, 2017), and many developers have followed this recommendation, including the PowerShell team with its own repository (Microsoft, n.d.). Jason Fossen, the lead author for SANS 505 - Securing Windows and PowerShell Automation, also has a GitHub repository that includes numerous security-specific PowerShell scripts (Fossen, n.d.). There are many PowerShell scripts as well as extensive guides and advice available on the Microsoft TechNet web pages, including the "Hey, Scripting Guy!" blog, which covers topics in a start-to-finish learning style helpful to both the novice and the experienced scripter. Numerous individuals and companies publish PowerShell scripts and advice, and there are dozens of books on the topic. The challenge is easily too much information rather than a lack of coverage. Defensive PowerShell security is less well covered, but the topic is gaining interest, and these scripts often intersect closely with more generalist systems administration topics found in non-specialist sources.

## 6. Conclusion

PowerShell works as a force multiplier that allows security professionals to be more efficient in their efforts. For the IT audit professional, it allows for efficient collection of data. For the analyst, it allows efficient review of data. For the defensive generalist, it allows for automated alerting and remediation.
As has been said repeatedly, only the imagination and the needs of the user limit the benefits of PowerShell for the security professional. This paper has covered a small number of use cases as examples, but it only scratches the surface of what is possible. The quickest way to prove the value of PowerShell in an environment is to pick a problem that current tools are not adequately identifying, or that requires repeated manual intervention, and spend a day with PowerShell. At the end of that time, the utility of PowerShell as leverage should be obvious. PowerShell quickly becomes the easy route to better Windows security efforts once a professional starts using it and discovers its many uses.

## References

VandenBrink, R. (n.d.). Nessus and PowerShell is like chocolate and peanut butter! Retrieved from https://isc.sans.edu/forums/diary/Nessus+and+Powershell+is+like+Chocolate+and+Peanut+Butter/20431/

## Appendix A

It should be noted that, for readability of the sample code, these scripts have been kept simple. Minor changes will make them more flexible at the expense of simplicity. As an example, looping through an array of groups rather than checking one hard-coded group in get-admins.ps1 is more useful in production. Further, the scripts use a number of constants that are expedient for small sample scripts, but variables would better handle these settings as the complexity and reusability of the scripts grow.

### A.1 Sample code for get-admins.ps1

```powershell
#Script to identify unauthorized accounts added to the Domain Admins group
#region variables
$allowed = Import-Csv 'c:\scripts\get-admins\allowed.csv' #Path to file containing SIDs of users allowed to be a member of Domain Admins
$group = 'Domain Admins' #Group being protected
$members = Get-ADGroupMember $group #Get members of the group we are protecting
#endregion
if ($allowed.Count -eq 0) {exit} #Failsafe to kill the script if the text file does not contain any data or could not be read for some reason
ForEach ($m in $members) { #Create a loop to look at each member of the group
    $account = $m.SID.ToString() #Create a string variable that holds the SID for the member of $group being compared; this simplifies code later
    if ($allowed.sid -notcontains $account) { #Does the $allowed array contain the member of $group currently being compared? If not, take action
        Disable-ADAccount -Identity $account #Disable the AD account that was added to $group but is not in the $allowed file
        Remove-ADGroupMember -Identity $group -Members $account -Confirm:$false #Remove $account from the $group
        #This is a good place to send an email alert or write an alert to a console
        if ((Get-EventLog -LogName Application -Source 'get-admins-script' -ErrorAction SilentlyContinue) -eq $null) { #Create a new event log source in the Application log if it does not exist
            New-EventLog -LogName Application -Source 'get-admins-script'
        }
        Write-EventLog -LogName Application -Source 'get-admins-script' -EntryType Warning -EventId 0 -Message "$account was removed from $group as part of the get-admins security process." #Write an event to the local system's Application log
    }
}
```

### A.2 Sample code for get-oldpassword.ps1

```powershell
#Script to find users who have not changed their password recently
$maxage = 90 #Maximum days since last password change
$date = (Get-Date).AddDays(-$maxage) #Date of the oldest password that does not require a change
$users = Get-ADUser -Filter {PasswordLastSet -lt $date -and Enabled -eq $true} #Accounts that are past due for a password change
$users | Export-Csv -NoTypeInformation c:\temp\oldpass.csv #Export all users who need to change their passwords to a CSV
Send-MailMessage -To "user1@example.com" -From "user2@example.com" -Subject "Users with old passwords" -Body "See the attached CSV file for a list of users with passwords over $maxage days old" -Attachments 'c:\temp\oldpass.csv' -SmtpServer smtp.example.com #Send an email
```

### A.3 Sample code for get-invalidgroupmembers.ps1

```powershell
#Script to find users in the 'US_Associates' group that do not have a country entry of 'US'
$group = Get-ADGroupMember 'US_Associates' #Get all members of the group we are interested in
$invalidmembers = @() #Initialize the array that will hold invalid users
foreach ($g in $group) { #Check the country for every user who is in the group
    $tuser = Get-ADUser $g.SID -Properties country #Read the AD record for the current member being reviewed
    if ($tuser.country -notlike 'US') { $invalidmembers = $invalidmembers + $tuser } #If the user's country is not "US", add the user to $invalidmembers
}
$invalidmembers | Export-Csv -NoTypeInformation c:\temp\invalidmembers.csv #Export the results to a CSV file
if ($invalidmembers.Count -gt 0) { #If there are invalid members, clean up and report
    Remove-ADGroupMember -Identity 'US_Associates' -Members $invalidmembers -Confirm:$false #Remove invalid members from the group
    Send-MailMessage -To "user1@example.com" -From "user2@example.com" -Subject "Users removed from US_Associates AD group" -Body "See the attached CSV file for a list of users removed from the US_Associates AD group" -Attachments 'c:\temp\invalidmembers.csv' -SmtpServer smtp.example.com #Send an email
}
```

### A.4 Sample code for get-recursivemanager.ps1

```powershell
#Script to find a specific group of users based on manager, title, and location
#Get-ADDirectReport.ps1 can be downloaded from https://gallery.technet.microsoft.com/scriptcenter/Get-ADDirectReport-962616c6
. "C:\scripts\Get-ADDirectReport.ps1" #Dot-source the script that does the recursive lookup based on manager
$reports = Get-ADDirectReports -Identity 'manager1' -Recurse #Direct and indirect reports of the manager; 'manager1' is a placeholder account name
$reportsfinal = @() #Initialize variable
foreach ($r in $reports) { #Create a list of users who will go on the report
    $user = Get-ADUser -Identity $r.SamAccountName -Properties country, title #Look up the user in AD
    if (($user.country -like 'US') -and ($user.title -like 'manager')) { $reportsfinal = $reportsfinal + $user } #Keep only US-based users with the desired title
}
$reportsfinal | Export-Csv -NoTypeInformation C:\temp\selectreports.csv #Export to CSV
```
{"Source-Url": "https://www.sans.org/reading-room/whitepapers/assurance/supplementing-windows-audit-alerting-remediation-powershell-38140", "len_cl100k_base": 8147, "olmocr-version": "0.1.53", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 62816, "total-output-tokens": 11114, "length": "2e12", "weborganizer": {"__label__adult": 0.0003674030303955078, "__label__art_design": 0.0005006790161132812, "__label__crime_law": 0.0014104843139648438, "__label__education_jobs": 0.00423431396484375, "__label__entertainment": 0.0001417398452758789, "__label__fashion_beauty": 0.0001621246337890625, "__label__finance_business": 0.0013704299926757812, "__label__food_dining": 0.0002675056457519531, "__label__games": 0.0008015632629394531, "__label__hardware": 0.001247406005859375, "__label__health": 0.0004596710205078125, "__label__history": 0.00024306774139404297, "__label__home_hobbies": 0.0001652240753173828, "__label__industrial": 0.0005741119384765625, "__label__literature": 0.0003139972686767578, "__label__politics": 0.0003905296325683594, "__label__religion": 0.00034999847412109375, "__label__science_tech": 0.06378173828125, "__label__social_life": 0.00017952919006347656, "__label__software": 0.10772705078125, "__label__software_dev": 0.814453125, "__label__sports_fitness": 0.0001704692840576172, "__label__transportation": 0.00027561187744140625, "__label__travel": 0.00017321109771728516}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 44523, 0.02648]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 44523, 0.36315]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 44523, 0.89238]], "google_gemma-3-12b-it_contains_pii": [[0, 84, false], [84, 1143, null], [1143, 3429, null], [3429, 5327, null], [5327, 7506, null], [7506, 9682, null], [9682, 14193, null], [14193, 16825, null], [16825, 18929, null], [18929, 21245, null], [21245, 23348, null], [23348, 25307, null], [25307, 26770, null], [26770, 29160, null], [29160, 31178, null], [31178, 32025, null], [32025, 33813, null], [33813, 35760, null], [35760, 37591, null], [37591, 37801, null], [37801, 40860, null], [40860, 42971, null], [42971, 44523, null]], "google_gemma-3-12b-it_is_public_document": [[0, 84, true], [84, 1143, null], [1143, 3429, null], [3429, 5327, null], [5327, 7506, null], [7506, 9682, null], [9682, 14193, null], [14193, 16825, null], [16825, 18929, null], [18929, 21245, null], [21245, 23348, null], [23348, 25307, null], [25307, 26770, null], [26770, 29160, null], [29160, 31178, null], [31178, 32025, null], [32025, 33813, null], [33813, 35760, null], [35760, 37591, null], [37591, 37801, null], [37801, 40860, null], [40860, 42971, null], [42971, 44523, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 44523, null]], 
"google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 44523, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 44523, null]], "pdf_page_numbers": [[0, 84, 1], [84, 1143, 2], [1143, 3429, 3], [3429, 5327, 4], [5327, 7506, 5], [7506, 9682, 6], [9682, 14193, 7], [14193, 16825, 8], [16825, 18929, 9], [18929, 21245, 10], [21245, 23348, 11], [23348, 25307, 12], [25307, 26770, 13], [26770, 29160, 14], [29160, 31178, 15], [31178, 32025, 16], [32025, 33813, 17], [33813, 35760, 18], [35760, 37591, 19], [37591, 37801, 20], [37801, 40860, 21], [40860, 42971, 22], [42971, 44523, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 44523, 0.05098]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
55b9b6519ddb957644bafe324b34b8ab8892b936
A Web Engineering Approach to Model the Architecture of Inter-Organizational Applications

Johannes Meinecke, Martin Gaedke, Martin Nussbaumer
University of Karlsruhe, Institute of Telematics, IT-Management and Web Engineering Research Group, Engesserstr. 4, 76128 Karlsruhe, Germany
{meinecke,gaedke,nussbaumer}@tm.uni-karlsruhe.de

Abstract. During recent years, the World Wide Web (WWW, Web) has increasingly been used as a platform for applications that link together processes both between and across organizations. Acting as distributed components, Web services provide a standardized way of externalizing functionality on a global scale and as such enable accesses that transcend organizational boundaries to form federated applications. The design and evolution of these federated applications are now imposing new obligations on the disciplined engineering of composed Web solutions. To meet these obligations, we extend the WebComposition idea, an approach to applying component-based software development concepts to Web applications. This extension facilitates modeling the complex landscape of components and services that make up the federated applications. In this context, we introduce the WebComposition Architecture Model, which serves as a map to keep track of the interrelations between the federated partners in terms of the involved Web technology. Among the modeled artifacts are Web services, Web applications and organizational zones of control, all of which are subject to evolution in the sense of the WebComposition approach.

1 Introduction

The rise of network technologies has enabled many fields of application and new ways to support businesses. Initially applied to connect distributed components only within single companies, network technology is nowadays increasingly used to link together the systems of multiple enterprises for the purpose of cooperation and federation. The resulting applications are therefore characterized by their component-oriented architecture and distributed nature. This trend has also been influenced by recent advances in Web technologies, particularly in the field of Web services. In the context of enterprise integration, the service can take the role of the component in a distributed system by exposing functionality and data through defined interfaces and standardized Web protocols. The value it provides can then be used in applications from within as well as from outside the company. In order to facilitate the orchestration of services into higher-level services, solutions can rely on business-process engines that control the execution flow of multiple services based on formal descriptions of the process. The task of making the envisioned federated scenarios work successfully raises a diversity of issues to be addressed. Beyond the many questions related to the integration of business processes, technical solutions for the operation and evolution of the system landscape have to be found. Approaches like the UML Profile for Enterprise Distributed Object Computing (EDOC) [14] focus the modeling of collaboration architectures on various levels of granularity, but understand the component as a logical concept. While such viewpoints are certainly helpful for describing systems on an abstract level, they are less appropriate for dealing with the concrete technical details that are important for operating and evolving a solution. With the mentioned technologies involved, this problem is actually a matter of systematically building Web-based solutions, i.e. the discipline of Web Engineering [4].
As a sub-discipline, Component-Based Web Engineering (CBWE) especially supports development processes that have to cope with evolving Web applications composed of re-usable parts (in this case: the Web services). By focusing on the principle of reuse, new applications can be created from already existing components (i.e. development with reuse) that are constructed in a separate process (i.e. development for reuse). During the lifecycle of the federated systems, evolution leads to the creation of new (re-)usable services, possibly belonging to new partners that join the federation. To effectively support its lifecycle in a controlled and systematic manner, the composed system has to be specified with evolution in mind first. This relates e.g. to the parts of the composition, their places and the way they can be accessed (i.e. used). Furthermore, it has to be stated how the relationships between the cooperating partners (i.e. trust) at business level are reflected by the solution on a technical level. When such system descriptions are integrated into a dedicated model, a guideline for evolution can be established. In the following section, we first look at a number of related approaches to modeling distributed, federated systems. Section 3 adopts the view of the WebComposition approach [5] on the problem at hand. We then propose the WebComposition Architecture Model (WAM) as an answer to the demand for modeling federated service-based landscapes in section 4. Following that, section 5 describes the application of the model in a real-world project. Finally, we conclude with a short summary in section 6.

2 Related Approaches

The general idea of supporting systems composed of services throughout their lifecycles can be related to a number of already existing efforts in different disciplines. In this section, we describe three approaches related to our work. The Dynamic System Initiative and the Data Center Markup Language follow similar ideas concerning system modeling, whereas the Enterprise Application Integration concept is more focused on federating software systems on various levels.

2.1 Dynamic System Initiative

The Dynamic System Initiative (DSI) is a technological strategy devised by Microsoft that aims at integrative support for the design, deployment and operation of distributed systems [12]. The focus is centered on software products based on the Windows platform and is not confined to Web applications. The initiative is driven by the idea of combining the two processes of building and operating IT solutions to emphasize the application life cycle as a whole. This is realized by the introduction of the System Description Model (SDM), which is used to mirror the solutions throughout their life cycles. At design time, the SDM aids the developers in planning their products on different levels of abstraction. At deployment time, the model enables an automatic installation of a distributed system as one unit. At runtime, it serves as a source of information for maintenance tools. Descriptions based on the SDM consist of layers that can be defined independently from each other by different authors. The number of used layers can vary between individual solutions. The subjects of the application layer include ASP.NET Web applications and services as well as databases. On the application host layer, the description refers e.g. to instances of database and Web servers.
Below that, the network topology and operating system layer models the operating system, as far as its functionality and settings are affected, and the connections between the elements of the distributed system. Finally, the hardware layer describes computers and further devices with regard to the hardware requirements of the other system entities. The initiative also includes a product roadmap for supporting systems to be delivered. The overall approach is not platform-independent and does not specifically target scenarios of federated applications.

2.2 Data Center Markup Language

Similarly to DSI, the DCML open industry initiative (Data Center Markup Language) aims at modeling the IT infrastructures of individual companies [13]. The major focus lies on the application of XML-based standards for specifying data center environments, dependencies between data center components and the policies governing management and construction of those environments. This allows e.g. for creating descriptions of abstract server prototypes that cover all relevant aspects for setting up a particular type of machine, including the operating system configuration, network parameters and hardware requirements. By instantiating such abstract descriptions, servers that have a special dedicated purpose can be (re-)produced with a minimal set of manual steps. The sum of the descriptions makes up a blueprint of the entire data center that reflects the system as it currently exists (or as it is supposed to exist in the future) in order to keep up with its changing shape and composition. Unlike the DSI, DCML offers a more platform-independent view: the modeling of the data center environments is founded on the definition of an ontology describing the problem domain by using XML and OWL. Moreover, it offers a bottom-up perspective on a system, emphasizing the infrastructure rather than the hosted business applications.

2.3 Enterprise Application Integration

Enterprise Application Integration (EAI) [9] can be understood as an effort that aims at connecting single information systems in order to enable them to exchange data and support a common process. Its primary concern lies in integrating systems within one enterprise, but it basically applies the same techniques that are also used to achieve business-to-business integration. Typical scenarios include especially the integration of complex systems like large Enterprise Resource Planning (ERP) applications, Customer Relationship Management (CRM) applications, or various legacy applications. There are many different approaches to reach the general objective of EAI. A possible distinction can be made between the levels at which the individual strategies try to bind together the separate applications. As a relatively technical approach that does not directly involve the processes themselves, information-oriented integration connects information sources like databases to allow the information flow between different applications. Some solutions accomplish this by simply replicating the data. Others combine several databases to make them appear as one logical database. When the data sources are accessible via defined interfaces, integration can also be realized by connecting these with adapters. A more profound approach in terms of business processes is taken by service-oriented integration. This general strategy is concerned with the reuse of methods that expose business logic within or between companies.
This allows for building composite applications that aggregate the functionality of other remote applications to achieve solutions of a higher value. Technical realizations include mechanisms based on Web services or distributed object standards like CORBA or COM+. The objective of business-process-oriented integration is to combine systems at a high level of abstraction by targeting the process itself. Technical integration methods are based on a specification of the company's relevant processes, which have to be defined in advance. The model serves as a description of how the involved sub-processes and supporting systems are related to each other. With the help of middleware, this information is then used to glue together the software systems. The interaction can e.g. be controlled by event-driven mechanisms. When this strategy is used, subsystems trigger events that are then interpreted by the middleware and lead to the invocation of other system parts. Business-process-oriented integration is supported by standards like ebXML or BPEL4WS. Unlike the other strategies, portal-oriented integration does not try to establish an exchange of information between system back-ends. Instead, the users are offered a single Web interface (a Web portal) that allows them to access multiple applications in a uniform way through a Web browser. The implementation of the portals is often realized with the help of application servers that also provide connectors to the back-end systems. As a higher form of integration, portal federation [8] strives to bring together different autonomous portals, allowing the realization of new application scenarios in a distributed and self-organized manner.

3 Evolution Aspects of Component-Based Web Applications

One of the well-known problems of Software Engineering for the World Wide Web lies in the fact that its resource-based (document) implementation model was never truly intended for the kind of complex applications that are in use today. During the design of Web-based applications, the entities handled by the designers are often defined at a much higher resolution than is possible in the actual code produced during the development process. Component-Based Software Engineering (CBSE) [11] (i.e. the construction of software from existing components) has been around for about three decades. CBSE is said to allow the construction of more complex software at lower costs. It is supposed to lead to easier maintenance and evolution (i.e. a higher flexibility of a software product throughout its entire life cycle), as well as an overall increase of quality if performed systematically. The WebComposition approach adopts these concepts by providing dedicated models that allow creating Web-based applications from components. It bridges the gap between design and implementation by capturing whole design artifacts in components of arbitrary granularity. The resolution of a component is not preset, but can vary depending on the level of detail required by the design concept in question. The requirements for a software system change as time goes by. It is obvious that many kinds of influences are responsible for this, e.g. new regulations, changes in corporate identity or an extension of functionality. Such maintenance tasks are difficult to handle if the application was not designed with the possibility of future changes and extensions in mind. Therefore, the WebComposition approach focuses on the evolution of Web-based applications by reusing components.
The process consists of three main phases that are applied in an iterative way throughout the application lifecycle. They are derived from the common phases of software process models, but take the principles of the Web into account and address concepts of software reuse. The process model follows a spiral consisting of evolution analysis and planning, evolution design and the execution of evolution [7]. The first phase deals with common problems in strategic planning of the application's functionality, that is, with Domain Engineering. Domain Engineering has been described as a process for creating a competence in application engineering for a family of similar systems [15]. The last two phases reflect the two different views towards reuse: the consumer view (development with reuse) and the producer view (development for reuse). To support and enforce a disciplined and manageable evolution of a Web-based application in the future, it makes sense not to design the initial application on the basis of the concrete requirements identified at the start of the project. Instead, the initial application should be regarded as an empty application that is suitable for accommodating functionality within a clearly defined evolution space. We denote this initial application with the term evolutionbus, which serves as the glue (or starting point) for all abstract application domains of a Web-based application (cf. Fig. 1).

Fig. 1: The dimensions of a single Web-based application's evolution space (the evolutionbus at the center, with domain-specific evolution by integrating domain-specific services and evolution by extending the domain set as its dimensions)

The evolutionbus enables the management and collaboration of domain-components, i.e. components that implement specific application domains such as Web-based procurement, reporting, or user-driven data exchange. These domain-components (called Services within the WebComposition approach) may also be reused in future application domains. The evolution can take place in two clearly defined ways:

- **Domain-specific evolution (vertical evolution)** – The extension of a domain through new services, e.g. by prototyping or referencing an existing service of a domain. Another possibility is that the domain itself changes or that it receives more functionality.
- **Evolution of the domain set (horizontal evolution)** – The evolution of an application is also possible by adding a new application domain that is created through composing already existing services. This modification of the domain set extends the application functionality, as for example in the case of a shopping basket that is added to a Web-based product catalog.

Adopting the idea of linking together systems of multiple enterprises for the purpose of cooperation and federation (cf. section 1), it becomes obvious that a third form of evolution exists. This form focuses on reuse in the large by benefiting from already existing application domains in other evolutionbuses belonging to partner organizations, as depicted in Fig. 2.

- **Evolution with partner domains (federated evolution)** – The evolution of an application by adding existing autonomous, remote application domains or services, which remain under the control of partner organizations. This form of evolution extends the application functionality in a horizontal or vertical manner without any local changes.
The evolution with partner domains adds additional levels of complexity to the already challenging task of modeling Web-based applications, as it has to cope with a lack of control over individual autonomous system parts. This calls for dedicated support that reduces the complexity by allowing the seamless specification of distributed and federated aspects. Therefore, in the next section, we introduce the WebComposition Architecture Model, which maps our abstract findings on evolution and federation to a concrete technological level.

4 WebComposition Architecture Model

Today, a common approach to composing enterprise applications from components distributed across organizational boundaries is to employ Web service technologies. By exposing data and functionality via Web service interfaces, companies can invoke each other's services or create higher-level services to support entire business processes. However, in order to make such solutions operable, further considerations beyond merely providing and consuming services are necessary. First, the partners have to agree on the exact communication protocols to be used. Additional security protocols as well as firewall rules need to be established to enforce a safe invocation of services between different zones of control. The federated nature of such applications also raises identity management issues [17]. This has resulted in various specifications like WS-Federation [3], SAML [10] or the Liberty Alliance Project [1]. One of their key concepts is centered on the idea of distributing the access control process. This is achieved by the introduction of specialized Web services for authenticating anonymous requestors (identity provider or IP) and for authorizing the access to protected resources (security token service or STS). The full potential of this distributed approach is reached when multiple services of different organizations are set up to trust each other. For example, the STS of a car manufacturing company can be configured to accept tokens from the IP of a parts supplier. Hence, employees who have an account at the supplier's company can be authorized to view process-relevant information at the manufacturer's intranet portal. This form of federation avoids the need for multiple accounts for the same identity and thus lowers the cost of administration. Although the use of federation standards can solve many problems, it does not help to reduce the complexity of the overall solution without the proper modeling approaches. In this context, we propose a model with a UML-like notation, the WebComposition Architecture Model (WAM), for capturing the most relevant aspects of the characterized systems. This covers in particular the participating services and applications, the access paths between them as well as the restrictions imposed on these accesses. Fig. 3 contains the symbols for the main modeling elements.

a. The services represent the system's distributed components that originate from the different involved organizations. They typically take the form of SOAP Web services that expose their functionality through a defined interface. A distinction can be made between atomic services, which provide basic, reusable operations, and composite services, which invoke other services to perform their work. One way to support such compositions is to rely on business process engines and use a formal process description to control the execution.
b. Users normally only interact with the overall system through the interfaces provided by **applications**. Within the model, this term relates to Web applications or portals that are accessed via standard Web protocols through a browser. In correspondence to the other model elements (e.g. the security realm), they can either be realized as internet or intranet applications, possibly requiring their users to authenticate.

c. In cases where it is useful to distinguish between the services and the underlying systems that serve as the actual data sources, this can be modeled with a separate **data provider**. This symbol is connected to the service or application with an undirected line. Data providers are not necessarily Web-capable themselves, as e.g. a database or a legacy system for which the service acts as a wrapper.

d. Connected systems that perform functionality beyond data management are represented with a **process unit** symbol. This covers e.g. software that performs computations or triggers processes outside the modeled scope.

e. Services and applications can be enveloped by **security realms**, which are depicted as big rounded rectangles with their name in the top-right corner. These groupings reflect the organizational zones of control over networks, hardware and software systems. Technically, their boundaries result e.g. from Web server configurations or firewall rules. When the systems employ the mentioned federation specifications (WS-Federation etc.), the realm also functions as a common identity and access management context. This means that there is exactly one designated security token service per realm that issues the tokens necessary for accessing the realm's resources. Moreover, this implies a common authorization system, like e.g. a set of roles and permissions that are understood among all protected resources. Security realms can also be nested, e.g. to support groups of Web services with a dedicated role system (compare Realm C in Fig. 4).

f. **Identity providers** serve as the places where the known users of the realm's applications as well as the applications and services themselves can be registered and have their accounts. As such, they can authenticate the members of the realm through login forms or Web service interfaces. The tokens they issue form the foundation for the security token service's authorization decisions.

g. The potential accesses on services and applications are marked with **invocation** links. Optional labels indicate the designated protocols, like e.g. HTTP, HTTPS, SOAP via HTTP, SOAP via SMTP etc. When two realm entities (i.e. services or applications) are linked, the intended meaning is that the service at the pointed end is called by the other entity. A link between an entity and a surrounding realm states that the entity is principally accessible from outside the realm (i.e. it is public). In the example in Fig. 4, service WS2 is only available to the intranet (from applications residing in Realm B).

h. Separate realms that are to form a federation can establish **trust relationships**. Semantically, this means that the STS of the trusting realm accepts the tokens originating from the trusted realm. Hence, the identities of the foreign requestors can be mapped to tokens that are locally valid, enabling realm-crossing accesses in a controlled and manageable way.

Fig. 4: Example showing the combined use of the introduced model elements
The introduction of the graphical symbolism can be understood as an approach to establish a good overview of the modeled scenarios. In our experience, real systems possess far more characteristics than can sensibly be visualized all at once. For instance, the protocols cannot always be referred to by a simple name like HTTP. Instead, even when relying on standards, there exists a huge number of possibilities concerning e.g. cryptographic operations or ways of requesting and passing on security tokens. We therefore suggest the use of the Object Constraint Language (OCL) [16] in correspondence to the way OCL supplements UML diagrams. This allows for refining the model's semantics with annotations like the one in Fig. 4. The added restrictions can be related to the attributes of a meta-class of the modeling element. In this case, the expression states that the messages exchanged during service invocation are signed and encrypted. Hence, the protocol labels can also be seen as abbreviations for a more precise, but at the same time more complex notation.

5 WAM Applied for Integrated Information Management

To demonstrate the practical relevance of the WebComposition Architecture Model, we outline its application in a real-world scenario in the context of an integration project conducted at the University of Karlsruhe [2]. The principal goal of the project is to integrate existing applications from different domains and business units at the University based on a service-oriented architecture (SOA) and to support its seamless evolution. By exposing the functionality of a diversity of distributed legacy systems as atomic Web services that can be composed into higher-level service domains, the resulting architecture brings together systems under the control of different university divisions based on heterogeneous platforms.

The following example involves the applications of two sub-organizations: the central administration department and the Department of Economics and Business Engineering (DEBE). Both departments maintain data about students independently of one another. In the case of the administration, the functionality of interest is provided by HISLSF, an information system widely used at German universities. The DEBE, on the other hand, employs a Web-based system implemented with the scripting language PHP. A demand for integration arose from the process of controlling admission to computing facilities, which requires student data from both sub-organizations. The scenario involves several aspects covered by WAM: the exchange of data with different protocols, non-public Web services and trust relationships between distinct realms of control. The realization of the scenario has been achieved with the help of a support system we implemented to provide the necessary token-issuing services and communication infrastructure in correspondence to WAM [6].

Fig. 5 depicts an overview of the implemented scenario using the proposed notation. Representing the servers and applications that form autonomous segments of the university network (UKA) to be integrated, the model contains the two realms DEBE and Administration Department. According to the WAM definition given in section 4, both realms are equipped with a security token service for access control. As the scenario involves just users of one organizational unit, only one identity provider (IP) is required to authenticate these users.
Similarly, the one-way trust relationship reflects the fact that inter-realm service accesses occur only in one direction. Applied to the example, this means that the administration department trusts the Department of Economics and Business Engineering to authenticate their own users and allows some of these users to access HISLSF services. In a typical use case, a computer facility administrator at the DEBE, who has been authenticated by the identity provider, accesses the DEBE Portal to request some student status information from a composite Web service (Comp Svc). The service itself calls atomic Web services of both realms (HIS Svc and PHP Svc) that provide the data from the underlying legacy systems: a database (DB) and the HISLSF system (HISLSF). As can be seen in Fig. 5, the Web services are not publicly available: they can only be accessed from within the realm and with the help of valid security tokens issued by the realm's security token service. Technical details like token emission, transport and translation, as covered by the WS-Federation specification, are hidden by the model to offer a perspective focused on the actual trust structure and Web service dependencies.

In Fig. 6, the visible parts of the outlined use case as well as their connection to the model elements can be seen. The login screen is actually the front end of a separate Web application, the identity provider. The credentials entered by the accessing users are checked against the central LDAP directory of the department. Integrated into the portal, a page component provides the student data functionality by serving as the user interface for the composite Web service. Search queries are translated into SOAP calls that, among other parameters, contain the user's security token.

6 Conclusion

Inter-organizational Web-based applications evolve over time and as such need to be modeled in order to keep track of their current condition and to plan future changes in a systematic manner. Comparable attempts to model distributed systems exist, but they do not specifically target cases where the modeled applications are controlled by several partners that take part in a federation. We discussed the problem from an evolution-oriented Web Engineering perspective and identified three types of evolution and their impact on the resulting development process. From these insights we concluded that federated evolution particularly requires modeling support. To complement the state of the art, we proposed the WebComposition Architecture Model (WAM) as a model with a UML-like notation that can be employed to describe landscapes of Web applications and Web services across multiple zones of control. As issues like e.g. the diversity of communication protocols can result in a complexity that exceeds the usefulness of graphical notations, we introduced OCL expressions as a means to further specify system details.

WAM has been successfully put to use in an integration project at the University of Karlsruhe. The federated, trust-oriented view turned out to be very appropriate, as requirements demanded that the involved sub-organizations be able to stay in full control of the accesses to their applications. As a next step, we are now working towards runtime support to achieve a tighter interaction between the model and the applications.
By making the federation modeling information itself available via a Web service interface and linking it to the configuration of the participating services and applications, we hope to lay the foundation for tools that can be used to further facilitate federated evolution. This includes e.g. the process of joining the federation with new services or altering the structure of trust relationships between the security realms. Moreover, we are now formulating WAM in greater detail, possibly adding a meta-model.

References
{"Source-Url": "http://www.researchgate.net/profile/Martin_Nussbaumer3/publication/221329127_A_Web_Engineering_Approach_to_Model_the_Architecture_of_Inter-_Organizational_Applications/links/02bfe50fd3ca779b6f000000.pdf", "len_cl100k_base": 5685, "olmocr-version": "0.1.50", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 28338, "total-output-tokens": 7016, "length": "2e12", "weborganizer": {"__label__adult": 0.0002689361572265625, "__label__art_design": 0.0005326271057128906, "__label__crime_law": 0.00030112266540527344, "__label__education_jobs": 0.0006480216979980469, "__label__entertainment": 5.9604644775390625e-05, "__label__fashion_beauty": 0.00013458728790283203, "__label__finance_business": 0.0004246234893798828, "__label__food_dining": 0.0002636909484863281, "__label__games": 0.0003256797790527344, "__label__hardware": 0.0007891654968261719, "__label__health": 0.0003764629364013672, "__label__history": 0.0002551078796386719, "__label__home_hobbies": 6.479024887084961e-05, "__label__industrial": 0.0004000663757324219, "__label__literature": 0.00021266937255859375, "__label__politics": 0.0002149343490600586, "__label__religion": 0.00035262107849121094, "__label__science_tech": 0.0288238525390625, "__label__social_life": 6.67572021484375e-05, "__label__software": 0.00994110107421875, "__label__software_dev": 0.95458984375, "__label__sports_fitness": 0.00018990039825439453, "__label__transportation": 0.0004603862762451172, "__label__travel": 0.00019562244415283203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34242, 0.02231]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34242, 0.30893]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34242, 0.9199]], "google_gemma-3-12b-it_contains_pii": [[0, 2653, false], [2653, 5879, null], [5879, 8911, null], [8911, 12157, null], [12157, 15411, null], [15411, 17424, null], [17424, 19609, null], [19609, 22170, null], [22170, 24347, null], [24347, 27340, null], [27340, 29587, null], [29587, 31650, null], [31650, 34242, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2653, true], [2653, 5879, null], [5879, 8911, null], [8911, 12157, null], [12157, 15411, null], [15411, 17424, null], [17424, 19609, null], [19609, 22170, null], [22170, 24347, null], [24347, 27340, null], [27340, 29587, null], [29587, 31650, null], [31650, 34242, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34242, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34242, null]], "pdf_page_numbers": [[0, 2653, 1], [2653, 5879, 2], [5879, 8911, 3], [8911, 12157, 4], [12157, 15411, 5], [15411, 17424, 6], [17424, 19609, 7], [19609, 22170, 8], [22170, 
24347, 9], [24347, 27340, 10], [27340, 29587, 11], [29587, 31650, 12], [31650, 34242, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34242, 0.0]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
56bf8a54d3cfcbd5ea64442fac9041ffcad15cab
rpcgen Programming Guide

1. The rpcgen Protocol Compiler

The details of programming applications to use Remote Procedure Calls can be overwhelming. Perhaps most daunting is the writing of the XDR routines necessary to convert procedure arguments and results into their network format and vice-versa. Fortunately, rpcgen(1) exists to help programmers write RPC applications simply and directly. rpcgen does most of the dirty work, allowing programmers to debug the main features of their application, instead of requiring them to spend most of their time debugging their network interface code.

rpcgen is a compiler. It accepts a remote program interface definition written in a language, called RPC Language, which is similar to C. It produces a C language output which includes stub versions of the client routines, a server skeleton, XDR filter routines for both parameters and results, and a header file that contains common definitions. The client stubs interface with the RPC library and effectively hide the network from their callers. The server stub similarly hides the network from the server procedures that are to be invoked by remote clients. rpcgen's output files can be compiled and linked in the usual way. The developer writes server procedures (in any language that observes Sun calling conventions) and links them with the server skeleton produced by rpcgen to get an executable server program. To use a remote program, a programmer writes an ordinary main program that makes local procedure calls to the client stubs produced by rpcgen. Linking this program with rpcgen's stubs creates an executable program. (At present the main program must be written in C). rpcgen options can be used to suppress stub generation and to specify the transport to be used by the server stub.

Like all compilers, rpcgen reduces development time that would otherwise be spent coding and debugging low-level routines. All compilers, including rpcgen, do this at a small cost in efficiency and flexibility. However, many compilers allow escape hatches for programmers to mix low-level code with high-level code. rpcgen is no exception. In speed-critical applications, hand-written routines can be linked with the rpcgen output without any difficulty. Also, one may proceed by using rpcgen output as a starting point, and then rewriting it as necessary. (If you need a discussion of RPC programming without rpcgen, see the Remote Procedure Call Programming Guide).

2. Converting Local Procedures into Remote Procedures

Assume an application that runs on a single machine, one which we want to convert to run over the network. Here we will demonstrate such a conversion by way of a simple example, a program that prints a message to the console:

```c
/*
 * printmsg.c: print a message on the console
 */
#include <stdio.h>

main(argc, argv)
    int argc;
    char *argv[];
{
    char *message;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <message>\n", argv[0]);
        exit(1);
    }
    message = argv[1];

    if (!printmessage(message)) {
        fprintf(stderr, "%s: couldn't print your message\n", argv[0]);
        exit(1);
    }
    printf("Message Delivered!\n");
    exit(0);
}

printmessage(msg)
    char *msg;
{
    FILE *f;

    f = fopen("/dev/console", "w");
    if (f == NULL) {
        return (0);
    }
    fprintf(f, "%s\n", msg);
    fclose(f);
    return (1);
}
```

And then, of course:

```
example% cc printmsg.c -o printmsg
example% printmsg "Hello, there."
Message delivered!
example%
```

If printmessage() was turned into a remote procedure, then it could be called from anywhere in the network.
Ideally, one would just like to stick a keyword like remote in front of a procedure to turn it into a remote procedure. Unfortunately, we have to live within the constraints of the C language, since it existed long before RPC did. But even without language support, it’s not very difficult to make a procedure remote. In general, it’s necessary to figure out what the types are for all procedure inputs and outputs. In this case, we have a procedure `printmessage()` which takes a string as input, and returns an integer as output. Knowing this, we can write a protocol specification in RPC language that describes the remote version of `printmessage()`. Here it is: ```c /* * msg.x: Remote message printing protocol */ program MESSAGEPROG { version MESSAGEVERS { int PRINTMESSAGE(string) = 1; } = 1; } = 99; ``` Remote procedures are part of remote programs, so we actually declared an entire remote program here which contains the single procedure `PRINTMESSAGE`. This procedure was declared to be in version 1 of the remote program. No null procedure (procedure 0) is necessary because `rpcgen` generates it automatically. Notice that everything is declared with all capital letters. This is not required, but is a good convention to follow. Notice also that the argument type is “string” and not “char *”. This is because a “char *” in C is ambiguous. Programmers usually intend it to mean a null-terminated string of characters, but it could also represent a pointer to a single character or a pointer to an array of characters. In RPC language, a null-terminated string is unambiguously called a “string”. There are just two more things to write. First, there is the remote procedure itself. Here’s the definition of a remote procedure to implement the `PRINTMESSAGE` procedure we declared above: ```c /* * msg_proc.c: implementation of the remote procedure "printmessage" */ #include <stdio.h> #include <rpc/rpc.h> /* always needed */ #include "msg.h" /* need this too: msg.h will be generated by rpcgen */ /* Remote version of "printmessage" */ int * printmessage_1(msg) char **msg; { static int result; /* must be static! */ FILE *f; f = fopen("/dev/console", "w"); if (f == NULL) { result = 0; return (&result); } fprintf(f, "%s\n", *msg); fclose(f); result = 1; return (&result); } ``` Notice here that the declaration of the remote procedure `printmessage_1()` differs from that of the local procedure `printmessage()` in three ways: 1. It takes a pointer to a string instead of a string itself. This is true of all remote procedures: they always take pointers to their arguments rather than the arguments themselves. 2. It returns a pointer to an integer instead of an integer itself. This is also generally true of remote procedures: they always return a pointer to their results. 3. It has an “_1” appended to its name. In general, all remote procedures called by `rpcgen` are named by the following rule: the name in the program definition (here `PRINTMESSAGE`) is converted to all lower-case letters, an underbar (“_”) is appended to it, and finally the version number (here 1) is appended. The last thing to do is declare the main client program that will call the remote procedure. 
Here it is:

```c
/*
 * rprintmsg.c: remote version of "printmsg.c"
 */
#include <stdio.h>
#include <rpc/rpc.h>    /* always needed */
#include "msg.h"        /* need this too: msg.h will be generated by rpcgen */

main(argc, argv)
    int argc;
    char *argv[];
{
    CLIENT *cl;
    int *result;
    char *server;
    char *message;

    if (argc < 3) {
        fprintf(stderr, "usage: %s host message\n", argv[0]);
        exit(1);
    }

    /*
     * Save values of command line arguments
     */
    server = argv[1];
    message = argv[2];

    /*
     * Create client "handle" used for calling MESSAGEPROG on the
     * server designated on the command line. We tell the RPC package
     * to use the "tcp" protocol when contacting the server.
     */
    cl = clnt_create(server, MESSAGEPROG, MESSAGEVERS, "tcp");
    if (cl == NULL) {
        /*
         * Couldn't establish connection with server.
         * Print error message and die.
         */
        clnt_pcreateerror(server);
        exit(1);
    }

    /*
     * Call the remote procedure "printmessage" on the server
     */
    result = printmessage_1(&message, cl);
    if (result == NULL) {
        /*
         * An error occurred while calling the server.
         * Print error message and die.
         */
        clnt_perror(cl, server);
        exit(1);
    }

    /*
     * Okay, we successfully called the remote procedure.
     */
    if (*result == 0) {
        /*
         * Server was unable to print our message.
         * Print error message and die.
         */
        fprintf(stderr, "%s: %s couldn't print your message\n",
            argv[0], server);
        exit(1);
    }

    /*
     * The message got printed on the server's console
     */
    printf("Message delivered to %s!\n", server);
}
```

There are two things to note here:

1. First a client "handle" is created using the RPC library routine clnt_create(). This client handle will be passed to the stub routines which call the remote procedure.
2. The remote procedure printmessage_1() is called exactly the same way as it is declared in msg_proc.c, except for the inserted client handle as the second argument.

Here's how to put all of the pieces together:

```
example% rpcgen msg.x
example% cc rprintmsg.c msg_clnt.c -o rprintmsg
example% cc msg_proc.c msg_svc.c -o msg_server
```

Two programs were compiled here: the client program rprintmsg and the server program msg_server. Before doing this though, rpcgen was used to fill in the missing pieces. Here is what rpcgen did with the input file msg.x:

1. It created a header file called msg.h that contained #define's for MESSAGEPROG, MESSAGEVERS and PRINTMESSAGE for use in the other modules.
2. It created client "stub" routines in the msg_clnt.c file. In this case there is only one, the printmessage_1() that was referred to from the rprintmsg client program. The name of the output file for client stub routines is always formed in this way: if the name of the input file is FOO.x, the client stubs output file is called FOO_clnt.c.
3. It created the server program in msg_svc.c, which calls printmessage_1() from msg_proc.c. The rule for naming the server output file is similar to the previous one: for an input file called FOO.x, the output server file is named FOO_svc.c.

Now we're ready to have some fun. First, copy the server to a remote machine and run it. For this example, the machine is called "moon". Server processes are run in the background, because they never exit.

```
moon% msg_server &
```

Then on our local machine ("sun") we can print a message on "moon"'s console.

```
sun% rprintmsg moon "Hello, moon."
```

The message will get printed to "moon"'s console. You can print a message on anybody's console (including your own) with this program if you are able to copy the server to their machine and run it.
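As mentioned earlier, rpcgen automatically generates the null procedure (procedure 0) for every remote program. The following is an illustrative sketch, not part of the original example, of how a client could use that procedure as a cheap "are you there?" check before doing real work; it relies only on the standard clnt_call() interface, NULLPROC and xdr_void from the RPC library, and reuses a handle obtained from clnt_create() as in rprintmsg.c.

```c
/*
 * Sketch: ping the server via the automatically generated
 * null procedure (procedure 0). Returns 1 if the server answered.
 * Assumes cl was created with clnt_create() as shown above.
 */
#include <rpc/rpc.h>

int
server_alive(cl)
    CLIENT *cl;
{
    struct timeval tv;
    enum clnt_stat stat;

    tv.tv_sec = 5;      /* wait at most 5 seconds for the reply */
    tv.tv_usec = 0;

    /* The null procedure takes no arguments and returns no results. */
    stat = clnt_call(cl, NULLPROC, xdr_void, (char *) NULL,
        xdr_void, (char *) NULL, tv);
    return (stat == RPC_SUCCESS);
}
```

Because the null procedure is generated by rpcgen itself, no additional code is required on the server side for this kind of check.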
3. Generating XDR Routines

The previous example only demonstrated the automatic generation of client and server RPC code. rpcgen may also be used to generate XDR routines, that is, the routines necessary to convert local data structures into network format and vice-versa. This example presents a complete RPC service: a remote directory listing service, which uses rpcgen not only to generate stub routines, but also to generate the XDR routines. Here is the protocol description file:

```c
/*
 * dir.x: Remote directory listing protocol
 */
const MAXNAMELEN = 255;         /* maximum length of a directory entry */

typedef string nametype<MAXNAMELEN>;   /* a directory entry */

typedef struct namenode *namelist;     /* a link in the listing */

/*
 * A node in the directory listing
 */
struct namenode {
    nametype name;      /* name of directory entry */
    namelist next;      /* next entry */
};

/*
 * The result of a READDIR operation.
 */
union readdir_res switch (int errno) {
case 0:
    namelist list;      /* no error: return directory listing */
default:
    void;               /* error occurred: nothing else to return */
};

/*
 * The directory program definition
 */
program DIRPROG {
    version DIRVERS {
        readdir_res READDIR(nametype) = 1;
    } = 1;
} = 76;
```

Note: Types (like readdir_res in the example above) can be defined using the "struct", "union" and "enum" keywords, but those keywords should not be used in subsequent declarations of variables of those types. For example, if you define a union "foo", you should declare it using only "foo" and not "union foo". In fact, rpcgen compiles RPC unions into C structures and it is an error to declare them using the "union" keyword.

Running rpcgen on dir.x creates four output files. Three are the same as before: the header file, the client stub routines and the server skeleton. The fourth contains the XDR routines necessary for converting the data types we declared into XDR format and vice-versa; these are output in the file dir_xdr.c.

Here is the implementation of the READDIR procedure.

```c
/*
 * dir_proc.c: remote readdir implementation
 */
#include <rpc/rpc.h>
#include <sys/dir.h>
#include "dir.h"

extern int errno;
extern char *malloc();
extern char *strdup();

readdir_res *
readdir_1(dirname)
    nametype *dirname;
{
    DIR *dirp;
    struct direct *d;
    namelist nl;
    namelist *nlp;
    static readdir_res res;     /* must be static! */

    /*
     * Open directory
     */
    dirp = opendir(*dirname);
    if (dirp == NULL) {
        res.errno = errno;
        return (&res);
    }

    /*
     * Free the previous result (see the comment below).
     */
    xdr_free(xdr_readdir_res, &res);

    /*
     * Collect directory entries.
     * Memory allocated here will be freed by xdr_free
     * next time readdir_1 is called
     */
    nlp = &res.readdir_res_u.list;
    while (d = readdir(dirp)) {
        nl = *nlp = (namenode *) malloc(sizeof(namenode));
        nl->name = strdup(d->d_name);
        nlp = &nl->next;
    }
    *nlp = NULL;

    /*
     * Return the result
     */
    res.errno = 0;
    closedir(dirp);
    return (&res);
}
```

Finally, there is the client side program to call the server:
```c
/*
 * rls.c: Remote directory listing client
 */
#include <stdio.h>
#include <rpc/rpc.h>    /* always need this */
#include "dir.h"        /* will be generated by rpcgen */

extern int errno;

main(argc, argv)
    int argc;
    char *argv[];
{
    CLIENT *cl;
    char *server;
    char *dir;
    readdir_res *result;
    namelist nl;

    if (argc != 3) {
        fprintf(stderr, "usage: %s host directory\n", argv[0]);
        exit(1);
    }

    /*
     * Remember what our command line arguments refer to
     */
    server = argv[1];
    dir = argv[2];

    /*
     * Create client "handle" used for calling DIRPROG on the
     * server designated on the command line. We tell the RPC package
     * to use the "tcp" protocol when contacting the server.
     */
    cl = clnt_create(server, DIRPROG, DIRVERS, "tcp");
    if (cl == NULL) {
        /*
         * Couldn't establish connection with server.
         * Print error message and die.
         */
        clnt_pcreateerror(server);
        exit(1);
    }

    /*
     * Call the remote procedure readdir on the server
     */
    result = readdir_1(&dir, cl);
    if (result == NULL) {
        /*
         * An error occurred while calling the server.
         * Print error message and die.
         */
        clnt_perror(cl, server);
        exit(1);
    }

    /*
     * Okay, we successfully called the remote procedure.
     */
    if (result->errno != 0) {
        /*
         * A remote system error occurred.
         * Print error message and die.
         */
        errno = result->errno;
        perror(dir);
        exit(1);
    }

    /*
     * Successfully got a directory listing.
     * Print it out.
     */
    for (nl = result->readdir_res_u.list; nl != NULL; nl = nl->next) {
        printf("%s ", nl->name);
    }
    exit(0);
}
```

Compile everything, and run.

```
sun% rpcgen dir.x
sun% cc rls.c dir_clnt.c dir_xdr.c -o rls
sun% cc dir_svc.c dir_proc.c dir_xdr.c -o dir_svc
sun% dir_svc &

moon% rls sun /usr/pub
.  ..  ascii  eqnchar  greek  kbd  marg8  tabclr  tabs  tabs4
moon%
```

A final note about rpcgen: The client program and the server procedure can be tested together as a single program by simply linking them with each other rather than with the client and server stubs. The procedure calls will be executed as ordinary local procedure calls and the program can be debugged with a local debugger such as dbx. When the program is working, the client program can be linked to the client stub produced by rpcgen and the server procedures can be linked to the server stub produced by rpcgen. NOTE: If you do this, you may want to comment out calls to RPC library routines, and have client-side routines call server routines directly.

4. The C-Preprocessor

The C-preprocessor is run on all input files before they are compiled, so all the preprocessor directives are legal within a ".x" file. Four symbols may be defined, depending upon which output file is getting generated. The symbols are:

<table>
<thead>
<tr>
<th>Symbol</th>
<th>Usage</th>
</tr>
</thead>
<tbody>
<tr>
<td>RPC_HDR</td>
<td>for header-file output</td>
</tr>
<tr>
<td>RPC_XDR</td>
<td>for XDR routine output</td>
</tr>
<tr>
<td>RPC_SVC</td>
<td>for server-skeleton output</td>
</tr>
<tr>
<td>RPC_CLNT</td>
<td>for client stub output</td>
</tr>
</tbody>
</table>

Also, rpcgen does a little preprocessing of its own. Any line that begins with a percent sign is passed directly into the output file, without any interpretation of the line. Here is a simple example that demonstrates the preprocessing features.

```c
/*
 * time.x: Remote time protocol
 */
program TIMEPROG {
    version TIMEVERS {
        unsigned int TIMEGET(void) = 1;
    } = 1;
} = 44;

#ifdef RPC_SVC
%int *
%timeget_1()
%{
%    static int thetime;
%
%    thetime = time(0);
%    return (&thetime);
%}
#endif
```

The '%' feature is not generally recommended, as there is no guarantee that the compiler will stick the output where you intended.

5. rpcgen Programming Notes

5.1. Timeout Changes

RPC sets a default timeout of 25 seconds for RPC calls when clnt_create() is used. This timeout may be changed using clnt_control(). Here is a small code fragment to demonstrate use of clnt_control():

```c
struct timeval tv;
CLIENT *cl;

cl = clnt_create("somehost", SOMEPROG, SOMEVERS, "tcp");
if (cl == NULL) {
    exit(1);
}
tv.tv_sec = 60;     /* change timeout to 1 minute */
tv.tv_usec = 0;
clnt_control(cl, CLSET_TIMEOUT, &tv);
```
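In addition to setting a new timeout, clnt_control() can read the current setting back with CLGET_TIMEOUT. The fragment below is an illustrative sketch rather than part of the original text; it assumes cl is a client handle created with clnt_create() as in the example above.

```c
/*
 * Sketch: query the current total timeout and double it.
 * Assumes cl is a CLIENT handle obtained from clnt_create().
 */
struct timeval cur;

if (clnt_control(cl, CLGET_TIMEOUT, &cur)) {
    cur.tv_sec *= 2;    /* allow twice as much time per call */
    clnt_control(cl, CLSET_TIMEOUT, &cur);
}
```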
5.2. Handling Broadcast on the Server Side

When a procedure is known to be called via broadcast RPC, it is usually wise for the server to not reply unless it can provide some useful information to the client. This prevents the network from getting flooded by useless replies. To prevent the server from replying, a remote procedure can return NULL as its result, and the server code generated by `rpcgen` will detect this and not send out a reply. Here is an example of a procedure that replies only if it thinks it is an NFS server:

```c
void *
reply_if_nfsserver()
{
    char notnull;   /* just here so we can use its address */

    if (access("/etc/exports", F_OK) < 0) {
        return (NULL);  /* prevent RPC from replying */
    }
    /*
     * return non-null pointer so RPC will send out a reply
     */
    return ((void *)&notnull);
}
```

Note that if a procedure returns type "void *", it must return a non-NULL pointer if it wants RPC to reply for it.

5.3. Other Information Passed to Server Procedures

Server procedures will often want to know more about an RPC call than just its arguments. For example, getting authentication information is important to procedures that want to implement some level of security. This extra information is actually supplied to the server procedure as a second argument. Here is an example to demonstrate its use. What we've done here is rewrite the previous `printmessage_1()` procedure to only allow root users to print a message to the console.

```c
int *
printmessage_1(msg, rq)
    char **msg;
    struct svc_req *rq;
{
    static int result;  /* Must be static */
    FILE *f;
    struct authunix_parms *aup;

    aup = (struct authunix_parms *)rq->rq_clntcred;
    if (aup->aup_uid != 0) {
        result = 0;
        return (&result);
    }
    /*
     * Same code as before.
     */
}
```

6. RPC Language

RPC language is an extension of XDR language. The sole extension is the addition of the program type. For a complete description of the XDR language syntax, see the External Data Representation Standard: Protocol Specification chapter. For a description of the RPC extensions to the XDR language, see the Remote Procedure Calls: Protocol Specification chapter. However, XDR language is so close to C that if you know C, you know most of it already. We describe here the syntax of the RPC language, showing a few examples along the way. We also show how the various RPC and XDR type definitions get compiled into C type definitions in the output header file.

6.1. Definitions

An RPC language file consists of a series of definitions.

```plaintext
definition-list:
    definition ;
    definition ; definition-list
```

It recognizes six types of definitions.

```plaintext
definition:
    enum-definition
    struct-definition
    union-definition
    typedef-definition
    const-definition
    program-definition
```

6.2. Structures

An XDR struct is declared almost exactly like its C counterpart. It looks like the following:

```plaintext
struct-definition:
    "struct" struct-ident "{"
        declaration-list
    "}"

declaration-list:
    declaration ";"
    declaration ";" declaration-list
```

As an example, here is an XDR structure describing a two-dimensional coordinate, and the C structure that it gets compiled into in the output header file.

```plaintext
struct coord {
    int x;
    int y;
};
```

```plaintext
typedef struct coord coord;
```

The output is identical to the input, except for the added typedef at the end of the output. This allows one to use "coord" instead of "struct coord" when declaring items.

6.3. Unions

XDR unions are discriminated unions, and look quite different from C unions.
They are more analogous to Pascal variant records than they are to C unions.

```
union-definition:
    "union" union-ident "switch" "(" declaration ")" "{"
        case-list
    "}"

case-list:
    "case" value ":" declaration ";"
    "default" ":" declaration ";"
    "case" value ":" declaration ";" case-list
```

Here is an example of a type that might be returned as the result of a "read data" operation. If there is no error, return a block of data. Otherwise, don't return anything.

```
union read_result switch (int errno) {
case 0:
    opaque data[1024];
default:
    void;
};
```

It gets compiled into the following:

```
struct read_result {
    int errno;
    union {
        char data[1024];
    } read_result_u;
};
typedef struct read_result read_result;
```

Notice that the union component of the output struct has the same name as the type name, except for the trailing "_u".

6.4. Enumerations

XDR enumerations have the same syntax as C enumerations.

```
enum-definition:
    "enum" enum-ident "{" enum-value-list "}"

enum-value-list:
    enum-value
    enum-value "," enum-value-list

enum-value:
    enum-value-ident
    enum-value-ident "=" value
```

Here is a short example of an XDR enum, and the C enum that it gets compiled into.

```
enum colortype {
    RED = 0,
    GREEN = 1,
    BLUE = 2
};
typedef enum colortype colortype;
```

6.5. Typedef

XDR typedefs have the same syntax as C typedefs.

```
typedef-definition:
    "typedef" declaration
```

Here is an example that defines a `fname_type` used for declaring file name strings that have a maximum length of 255 characters.

```
typedef string fname_type<255>;  -->  typedef char *fname_type;
```

6.6. Constants

XDR constants are symbolic constants that may be used wherever an integer constant is used, for example, in array size specifications.

```
const-definition:
    "const" const-ident "=" integer
```

For example, the following defines a constant `DOZEN` equal to 12.

```
const DOZEN = 12;  -->  #define DOZEN 12
```

6.7. Programs

RPC programs are declared using the following syntax:

```
program-definition:
    "program" program-ident "{"
        version-list
    "}" "=" value

version-list:
    version ";"
    version ";" version-list

version:
    "version" version-ident "{"
        procedure-list
    "}" "=" value

procedure-list:
    procedure ";"
    procedure ";" procedure-list

procedure:
    type-ident procedure-ident "(" type-ident ")" "=" value
```

For example, here is the time protocol, revisited:

```c
/*
 * time.x: Get or set the time. Time is represented as number of seconds
 * since 0:00, January 1, 1970.
 */
program TIMEPROG {
    version TIMEVERS {
        unsigned int TIMEGET(void) = 1;
        void TIMESET(unsigned) = 2;
    } = 1;
} = 44;
```

This file compiles into #defines in the output header file:

```c
#define TIMEPROG 44
#define TIMEVERS 1
#define TIMEGET 1
#define TIMESET 2
```

6.8. Declarations

In XDR, there are only four kinds of declarations.

```
declaration:
    simple-declaration
    fixed-array-declaration
    variable-array-declaration
    pointer-declaration
```

1) **Simple declarations** are just like simple C declarations.

```
simple-declaration:
    type-ident variable-ident
```

Example:

```
colortype color;  -->  colortype color;
```

2) **Fixed-length Array Declarations** are just like C array declarations:

```
fixed-array-declaration:
    type-ident variable-ident "[" value "]"
```

Example:

```
colortype palette[8];  -->  colortype palette[8];
```

3) **Variable-Length Array Declarations** have no explicit syntax in C, so XDR invents its own using angle-brackets.

```
variable-array-declaration:
    type-ident variable-ident "<" value ">"
    type-ident variable-ident "<" ">"
```

The maximum size is specified between the angle brackets.
The size may be omitted, indicating that the array may be of any size.

```
int heights<12>;    /* at most 12 items */
int widths<>;       /* any number of items */
```

Since variable-length arrays have no explicit syntax in C, these declarations are actually compiled into "struct"s. For example, the "heights" declaration gets compiled into the following struct:

```c
struct {
    u_int heights_len;  /* # of items in array */
    int *heights_val;   /* pointer to array */
} heights;
```

Note that the number of items in the array is stored in the ".len" component and the pointer to the array is stored in the ".val" component. The first part of each of these components' names is the same as the name of the declared XDR variable.

4) **Pointer Declarations** are made in XDR exactly as they are in C. You can't really send pointers over the network, but you can use XDR pointers for sending recursive data types such as lists and trees. The type is actually called "optional-data", not "pointer", in XDR language.

```
pointer-declaration:
    type-ident "*" variable-ident
```

Example:

```
listitem *next;  -->  listitem *next;
```

6.9. Special Cases

There are a few exceptions to the rules described above.

Booleans: C has no built-in boolean type. However, the RPC library provides a boolean type called bool_t that is either TRUE or FALSE. Things declared as type bool in XDR language are compiled into bool_t in the output header file.

```
bool married;  -->  bool_t married;
```

Strings: C has no built-in string type, but instead uses the null-terminated "char *" convention. In XDR language, strings are declared using the "string" keyword, and compiled into "char *"s in the output header file. The maximum size contained in the angle brackets specifies the maximum number of characters allowed in the strings (not counting the NULL character). The maximum size may be left off, indicating a string of arbitrary length.

```
string name<32>;     -->  char *name;
string longname<>;   -->  char *longname;
```

Opaque Data: Opaque data is used in RPC and XDR to describe untyped data, that is, just sequences of arbitrary bytes. It may be declared either as a fixed or variable length array.

```
opaque diskblock[512];  -->  char diskblock[512];

opaque filedata<1024>;  -->  struct {
                                 u_int filedata_len;
                                 char *filedata_val;
                             } filedata;
```

Voids: In a void declaration, the variable is not named. The declaration is just "void" and nothing else. Void declarations can only occur in two places: union definitions and program definitions (as the argument or result of a remote procedure).
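To tie the declaration mappings above back to the C side, here is a small illustrative sketch (not part of the original guide) showing how a program might fill in the struct that rpcgen generates for a variable-length array, together with a counted string. The wrapper struct name heights_arg is invented for this sketch; in a real protocol the generated type name depends on where the declaration appears.

```c
#include <rpc/rpc.h>    /* for u_int and the RPC/XDR declarations */

/*
 * Hypothetical shape of what rpcgen generates for "int heights<12>;"
 * (the struct tag heights_arg is made up for this sketch).
 */
struct heights_arg {
    u_int heights_len;  /* number of items actually used */
    int *heights_val;   /* pointer to the items */
};

int
main()
{
    static int data[3] = { 170, 180, 190 };
    struct heights_arg heights;
    char *name;

    /* Variable-length array: record the length and point at the data. */
    heights.heights_len = 3;
    heights.heights_val = data;

    /* "string name<32>" simply becomes a null-terminated "char *". */
    name = "Ann";

    /* ...&heights and &name would now be passed to generated stubs... */
    return (0);
}
```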
{"Source-Url": "https://docs.freebsd.org/44doc/psd/22.rpcgen/paper.pdf", "len_cl100k_base": 6714, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 34126, "total-output-tokens": 7976, "length": "2e12", "weborganizer": {"__label__adult": 0.000324249267578125, "__label__art_design": 0.00019216537475585935, "__label__crime_law": 0.00018870830535888672, "__label__education_jobs": 0.00018405914306640625, "__label__entertainment": 4.297494888305664e-05, "__label__fashion_beauty": 9.864568710327148e-05, "__label__finance_business": 9.78708267211914e-05, "__label__food_dining": 0.0003006458282470703, "__label__games": 0.0004119873046875, "__label__hardware": 0.0008587837219238281, "__label__health": 0.00021326541900634768, "__label__history": 0.00011402368545532228, "__label__home_hobbies": 4.8220157623291016e-05, "__label__industrial": 0.0002465248107910156, "__label__literature": 0.0001277923583984375, "__label__politics": 0.0001392364501953125, "__label__religion": 0.0003685951232910156, "__label__science_tech": 0.00238037109375, "__label__social_life": 4.076957702636719e-05, "__label__software": 0.004199981689453125, "__label__software_dev": 0.98876953125, "__label__sports_fitness": 0.00021409988403320312, "__label__transportation": 0.0002830028533935547, "__label__travel": 0.00015997886657714844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27716, 0.01344]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27716, 0.73718]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27716, 0.81217]], "google_gemma-3-12b-it_contains_pii": [[0, 2747, false], [2747, 3901, null], [3901, 5963, null], [5963, 6883, null], [6883, 8144, null], [8144, 10584, null], [10584, 12627, null], [12627, 13659, null], [13659, 14827, null], [14827, 16052, null], [16052, 17790, null], [17790, 19634, null], [19634, 21501, null], [21501, 22770, null], [22770, 23844, null], [23844, 25468, null], [25468, 27716, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2747, true], [2747, 3901, null], [3901, 5963, null], [5963, 6883, null], [6883, 8144, null], [8144, 10584, null], [10584, 12627, null], [12627, 13659, null], [13659, 14827, null], [14827, 16052, null], [16052, 17790, null], [17790, 19634, null], [19634, 21501, null], [21501, 22770, null], [22770, 23844, null], [23844, 25468, null], [25468, 27716, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27716, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27716, null]], "pdf_page_numbers": [[0, 2747, 1], [2747, 3901, 2], [3901, 5963, 3], [5963, 6883, 4], [6883, 8144, 5], [8144, 10584, 6], [10584, 12627, 7], [12627, 13659, 8], 
[13659, 14827, 9], [14827, 16052, 10], [16052, 17790, 11], [17790, 19634, 12], [19634, 21501, 13], [21501, 22770, 14], [22770, 23844, 15], [23844, 25468, 16], [25468, 27716, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27716, 0.01005]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
b8827968d3eea67e4381bff7c0b05b2ef2c6d2b9
MicroRV32: An Open Source RISC-V Cross-Level Platform for Education and Research

Sallar Ahmadi-Pour
Institute of Computer Science, University of Bremen
Bremen, Germany
sallar@uni-bremen.de

Vladimir Herdt
Institute of Computer Science, University of Bremen
Cyber-Physical Systems, DFKI GmbH
Bremen, Germany
vherdt@uni-bremen.de

Rolf Drechsler
Institute of Computer Science, University of Bremen
Cyber-Physical Systems, DFKI GmbH
Bremen, Germany
drechsler@uni-bremen.de

ABSTRACT

In this paper we propose µRV32 (MicroRV32), an open source RISC-V platform for education and research. µRV32 integrates several peripherals alongside a 32 bit RISC-V core interconnected with a generic bus system. It supports bare-metal applications as well as the FreeRTOS operating system. Besides an RTL implementation in the modern SpinalHDL language (µRV32 RTL) we also provide a corresponding binary compatible Virtual Prototype (VP) that is implemented in standard compliant SystemC TLM (µRV32 VP). In combination the VP and RTL descriptions pave the way for advanced cross-level methodologies in the RISC-V context. Moreover, based on a readily available open source tool flow, µRV32 RTL can be exported into a Verilog description and simulated with the Verilator tool or synthesized onto an FPGA. The tool flow is very accessible and fully supported under Linux. As part of our experiments we provide a set of ready to use application benchmarks and report execution performance results of µRV32 at the RTL, VP and FPGA level together with proof-of-concept FPGA synthesis statistics.

KEYWORDS

RISC-V, RTL, FPGA, Virtual Prototype, Open Source

1 INTRODUCTION

RISC-V [21, 22] is a modern Instruction Set Architecture (ISA) with enormous potential, in particular for embedded systems used in several application areas such as IoT or edge computing. A key factor for the success story of RISC-V is the free and open nature of the ISA. Moreover, RISC-V is designed in a very modular and extensible way, which makes it possible to build highly application specific solutions. Naturally, RISC-V has been strongly adopted by the industry and also in the academic community. Meanwhile, RISC-V offers a very comprehensive but still growing ecosystem with a plethora of tools, simulators and Register Transfer Level (RTL) implementations, both commercial as well as open source.

Recently, Virtual Prototypes (VPs) emerged in the RISC-V ecosystem to support the design flow for embedded systems. A VP is essentially an abstract model of the entire Hardware (HW) platform and is predominantly created in SystemC using the Transaction Level Modeling (TLM) style [1, 9]. VPs are an industry proven solution to enable early SW development as well as other system-level use-cases and thus complement an RTL implementation [10, 15, 16, 19].

We believe that the availability of a modern, accessible and FPGA friendly RISC-V RTL implementation together with a corresponding VP configuration would be very beneficial for the academic community to stimulate further research and for educational purposes. Such a VP/RTL combination provides a strong foundation for advanced cross-level methodologies. Therefore, in this paper we propose µRV32 (MicroRV32), an open source RISC-V platform for education and research. µRV32 integrates several peripherals alongside a 32 bit RISC-V core interconnected with a generic bus system. The core supports the base integer instruction set and provides trap and interrupt handling facilities.
This allows µRV32 to run bare-metal applications as well as operating systems tailored for the embedded domain such as FreeRTOS. µRV32 is available as an RTL description (µRV32 RTL) implemented in the modern Scala-based SpinalHDL language. Based on a readily available open source tool flow, µRV32 RTL can be exported into a Verilog description and simulated with the Verilator tool or synthesized onto an FPGA. The tool flow is very accessible and fully supported under Linux.

In addition to µRV32 RTL we also provide a corresponding VP configuration, called µRV32 VP, which is implemented in standard compliant SystemC TLM and binary compatible with µRV32 RTL. We built µRV32 VP on top of the open source RISC-V VP [18] available at GitHub [5]. The VP enables early and fast Software (SW) simulations while the RTL description enables cycle-accurate simulations. In combination the VP and RTL descriptions pave the way for advanced cross-level methodologies. As part of our experiments we provide a set of ready to use application benchmarks and report execution performance results of µRV32 at the RTL, VP and FPGA level together with proof-of-concept FPGA synthesis statistics. Visit http://system-verification.org/risc-v for a GitHub link to obtain µRV32 RTL/VP together with the benchmarks as well as information on our most recent RISC-V related approaches.

2 RELATED WORK

RISC-V already comes with an extensive ecosystem that includes several simulators as well as RTL implementations that can also be used for FPGA prototyping purposes. With respect to simulators, they are predominantly designed to enable high-speed simulations, such as SPIKE [7] or QEMU [6]. Another direction is full platform simulators such as gem5 [4]. Recently, also VP-based solutions that leverage SystemC TLM, such as RISC-V VP [5] or DBT-Rise [3], have been introduced into the ecosystem to lay the foundation for advanced SystemC-based system level use-cases for RISC-V. In this work we built upon RISC-V VP to design our µRV32 VP to complement our cross-level platform.

Similarly, there exist various RTL implementations for RISC-V ready to use on FPGAs. Many of the cores rely on commercial FPGA tool flows, which makes them not very accessible. Thus, in the following we exemplarily review cores that also rely on open source tools but follow different goals than our µRV32 cross-level platform. For example, the PicoRV32/PicoSoC [13] is a Verilog HDL implementation of the RISC-V ISA which is optimized for size. An exemplary System-on-Chip (SoC) with a small amount of peripherals and firmware is available. However, while Verilog provides very good tool support, it is missing many features of the modern emerging Hardware Description Languages (HDLs) such as SpinalHDL [11] or Chisel [2]. RocketChip [8] is a RISC-V SoC generator that leverages the Chisel HDL to provide a highly configurable general purpose solution. VexRiscv [12] is a SpinalHDL-based implementation of the RISC-V ISA. VexRiscv makes use of a SW-oriented approach by leveraging SpinalHDL to offer a broad range of parametric and customizable RISC-V platforms. This makes VexRiscv a powerful family of RISC-V implementations that can even support a Linux operating system. However, the complexity of VexRiscv and RocketChip makes them significantly less accessible. Moreover, with µRV32 we propose a combined RTL and VP-based implementation which provides the foundation for advanced cross-level methodologies tailored for RISC-V.
3 PRELIMINARIES

In this section we provide relevant background information on the RISC-V ISA (Section 3.1), the open source RISC-V VP which we used as the foundation to build µRV32 VP (Section 3.2), and the open source tool flow that covers the VP, RTL and FPGA level (Section 3.3).

3.1 RISC-V ISA

RISC-V is an open, free and modular Instruction Set Architecture (ISA). For this work we consider the RISC-V base RV32I ISA. It provides a set of basic mandatory instructions that cover arithmetic, branch and jump, as well as load and store instructions. RV32I defines 32 general purpose registers x0 to x31 (with x0 being hard-wired to zero) with 32 bit width each. The RISC-V ISA also defines Control and Status Registers (CSRs), which are special purpose registers for extended HW/SW interactions such as trap handling and interrupt processing capabilities. For example, the MTVEC CSR stores the trap handler address, which is configured by the SW and used by the HW. For a comprehensive description of the RISC-V instruction set please refer to the official specifications: volume 1 [21] covers the instruction set, while volume 2 [22] covers the privileged architecture, which in particular includes CSRs and their behavior.

3.2 RISC-V VP

The RISC-V VP is an open source VP tailored for RISC-V and implemented in SystemC TLM. It is designed as a configurable and extensible platform around a generic TLM bus system. The VP supports ELF loading (as generated by the GCC or LLVM toolchain) and provides coverage tracking (via GCOV) and debugging (via GDB) support for the SW applications running on the VP. Through these characteristics the VP enables fast SW development iterations. The TLM-based description also allows for quick explorations of new extensions of the ISA or HW platform. For the µRV32 cross-level platform we added a configuration to the VP which represents µRV32 RTL. The RISC-V VP has already been used in several research studies that cover modeling, verification and simulation aspects, e.g. [17, 18, 20].

The RISC-V VP also supports the Direct Memory Interface (DMI) and Time Quantum (TQ) optimizations commonly used to speed up SystemC TLM simulations. Essentially, DMI boosts the performance of memory access operations by using a direct memory pointer instead of TLM transactions to access the main memory, while TQ avoids costly context switches in the Instruction Set Simulator (ISS) by postponing synchronizations with the SystemC simulation kernel.

3.3 Open Source Cross-Level Toolflow

Fig. 1 shows the co-design and co-simulation toolflow. The diagram contains user artifacts (green), applications (yellow) and generated artifacts (red). The user artifacts can be divided into three parts:

1. The SpinalHDL HW description and the respective testbench and Pin Constraint File (PCF) for the use of the HW description on the FPGA
2. The RISC-V VP used for TLM simulations
3. The RISC-V SW written in C and assembly

The HW description available in SpinalHDL can be exported to a Verilog description and then used in two ways: 1) simulated with SpinalHDL and Verilator according to the testbench, or 2) synthesized onto an FPGA with the open source FPGA toolchain IceStorm [14], which supports Lattice Semiconductor iCE40 FPGAs.
4 MICRORV32

In this section we present implementation details on µRV32. It is a modern, accessible and FPGA friendly RISC-V platform designed around a 32 bit RISC-V core. It consists of two parts:

(1) µRV32 RTL: a modern, accessible and FPGA friendly RISC-V RTL implementation in SpinalHDL designed to be used with the open source FPGA toolchain IceStorm [14] (cf. Section 3.3).

(2) µRV32 VP: a corresponding binary compatible RISC-V VP (cf. Section 3.2) configuration representing µRV32 RTL at a high level of abstraction.

In combination µRV32 provides a strong foundation for investigating advanced cross-level methodologies and design flow techniques. µRV32 RTL/VP and the complete tool flow is available open source and fully supported under Linux, thus making it very accessible for education and research.

4.2 Memory Bus Interface

The core interacts with peripherals and its environment through an interface defined by an address, a command and data. For this purpose a lightweight bus interface with a valid-ready handshake is chosen. It is used to interconnect the core and the surrounding peripherals. In this context the core is the bus master while peripherals and other modules act as bus slaves. With the core as the only bus master in place there is no need to consider bus master arbitration. Listing 1 shows the memory bus interface definition in SpinalHDL. The bus master asserts the valid signal to notify the bus slaves of a valid command and payload on the bus. On the top level module the address space is mapped onto the peripherals. The transaction then is routed to the respective peripheral.

Listing 1: Memory Bus Interface definition in SpinalHDL

```scala
case class SimpleBus(dataWidth: Int, addressWidth: Int) extends Bundle with IMasterSlave {
  val SBaddress = UInt(addressWidth bits)
  val SBvalid   = Bool
  val SBwdata   = Bits(dataWidth bits)  // write data, driven by the bus master
  val SBsize    = UInt(4 bits)
  val SBready   = Bool
  val SBrdata   = Bits(dataWidth bits)  // read data, driven by the addressed slave

  override def asMaster(): Unit = {
    out(SBvalid, SBaddress, SBwdata, SBsize)
    in(SBready, SBrdata)
  }
}
```

For a lightweight design it is defined that peripherals respond one clock cycle after the transaction request. A peripheral should finish its tasks within one clock cycle, otherwise the CPU is stalled. Additionally, the identification of incorrectly behaving peripherals becomes easier.
4.3 SoC Platform The core is embedded in a SoC platform composed around the SimpleBus. Fig. 4 shows the top-level view of the µRV32 architecture. The memory and the peripherals are mapped on the top level of the SoC, in the address mapping logic. First, the core-local interrupt controller (CLINT) peripheral provides the timer interrupts based on the 64 bit registers mtime and mtimecmp. mtime is a read-only register that increments with the platform's clock frequency. If the value of mtime is greater than or equal to mtimecmp, the timer interrupt is triggered until cleared by the core. In the Shutdown peripheral a defined transition into a halting state can be triggered for the core. The halting state ends program execution and halts the platform in the defined state until reset. Listing 2 shows how the components are interconnected and wired in SpinalHDL. This feature of SpinalHDL allows for fewer errors in the interconnection of modules.

Listing 2: Interconnecting µRV32 SoC components in SpinalHDL
```scala
// ...
// Instantiate components of SoC
val cpu = new RV32Core()
val ram = new Memory(Bits(32 bits), 4104, initHexfile)
val gpio_led = new GPIOLED()
val shutdown_periph = new Shutdown()
val uartPeriph = new SBuart()
val rvCLIC = new RVCLIC()
// ...
// Interconnect components via memory bus interface
cpu.io.sb <> ram.io.sb
cpu.io.sb <> gpio_led.io.sb
cpu.io.sb <> shutdown_periph.io.sb
cpu.io.sb <> uartPeriph.io.sb
cpu.io.sb <> rvCLIC.io.sb
// ...
```

![Figure 4: Block Diagram of the top level architecture of the µRV32 SoC platform](image-url)

In the Memory peripheral the instruction memory and the data memory are contained. The LED peripheral is used to map up to eight LEDs of a development board into the address space of the SoC platform. The Universal Asynchronous Receiver and Transmitter (UART) peripheral provides the platform with external communication abilities. UART is commonly used for serial communication between platforms and devices. For the SW running on the SoC platform, the print statements can be redirected to the UART peripheral. The SoC platform memory is initialized with a SW program binary at synthesis time.

5 EXPERIMENTS In this section we present a comparison of the performance of our µRV32 with the RISC-V VP. Our comparison includes the execution of µRV32 in an RTL simulator (SpinalHDL with Verilator) and on an FPGA development board. These modes of execution are compared to the µRV32 VP. The FPGA-based emulation runs on a Lattice Semiconductor HX8K development board with a board frequency of 12 MHz. The VP and the RTL simulation are executed on an Intel i7 10510U CPU @ 1.80 GHz on Ubuntu 20.04 LTS. Additionally, we collect statistics on the FPGA design process, that is, the time for synthesis and place & route, the area utilization on the targeted FPGA and the maximum clock frequency at which the design can be used. In the following, we provide the evaluation setup (Section 5.1), the obtained results (Section 5.2) and the FPGA synthesis statistics (Section 5.3).

5.1 Performance Evaluation Setup Fig. 5 shows the setup of the experiments. The benchmarks are used to initialize the SoC as well as the VP. For the VP the runtime gets traced with the `time` command from Linux. For the RTL simulation the testbench is instrumented to output the real processing time needed for the simulation. The RTL simulation is executed through the SpinalHDL Verilator backend.
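As an aside to the UART-based print redirection described in Section 4.3, which the FreeRTOS benchmarks below rely on for their output, the following bare-metal C sketch shows one common way such a redirection can be implemented. The register addresses and the busy-polling protocol are illustrative assumptions, not the actual µRV32 UART register map.

```c
#include <stdint.h>

/* Illustrative placeholder addresses; the real map is set in the SoC top level. */
#define UART_TX_DATA  (*(volatile uint32_t *)0x81000000u)
#define UART_TX_BUSY  (*(volatile uint32_t *)0x81000004u)

/* Low-level character output: wait until the transmitter is idle,
   then write the character into the memory-mapped TX register. */
static void uart_putchar(char c) {
    while (UART_TX_BUSY) {
        /* spin until the UART has shifted out the previous byte */
    }
    UART_TX_DATA = (uint32_t)(uint8_t)c;
}

/* Print redirection: route a string character by character to the UART,
   which is how printf-style output typically reaches the host terminal. */
void uart_puts(const char *s) {
    while (*s) {
        uart_putchar(*s++);
    }
}
```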
At the FPGA-based emulation level, the start of execution (falling edge of the reset signal) and the end of execution (rising edge of the halt signal) are measured with a logic analyzer. For the experiments we use five benchmarks:
1. **Fibonacci** calculates the numbers of the Fibonacci sequence up to a defined sequence length of 6000 numbers.
2. **Greatest Common Divisor (GCD)** calculates the greatest common divisor \(gcd(a, b)\) of two numbers \(a\) and \(b\) (for our experiments we calculate \(gcd(50000, 1)\)).
3. **Bubblesort** sorts an array with 300 elements.
4. **FreeRTOS-queues** is a FreeRTOS example of two senders putting data into a queue and a receiver pulling the data from the queue. The example is set up to terminate after 10 iterations.
5. **FreeRTOS-tasks** is a FreeRTOS example of two scheduled tasks sending data via the UART interface. The example is set up to terminate after 20 iterations.
5.2 Performance Evaluation Results Table 1 shows the results of the benchmarks. The table is divided into three parts: the left part lists each executed benchmark; the middle part shows the number of instructions executed (column: #instr-exec.), the lines of code in C (column: LoC in C) and the lines of code of the assembly file (column: LoC in ASM) for each benchmark; the right part shows the execution times of each benchmark in seconds. The FPGA column shows the execution time of the FPGA emulation of the µRV32 platform. The character '-' denotes that the benchmark could not be executed on the FPGA because the SW binary is too large for the memory on the development board. The RTL-Sim. column shows the execution time of the RTL simulation with SpinalHDL and Verilator. The last two columns show the execution times of the VP simulation without optimizations (column: VP normal) and with optimizations (column: VP opt), namely DMI and TQ (cf. Section 3.2). From Table 1 it can be observed that the VP simulation and the FPGA emulation are significantly faster than the RTL simulation. When comparing the FPGA emulation with the RTL simulation, a factor of improvement between \(\times 134\) and \(\times 217\) is observed. Comparing the VP simulations with the RTL simulation, the unoptimized VP is \(\times 25\) to \(\times 96\) faster and the optimized VP is \(\times 75\) to \(\times 325\) faster. The unoptimized VP shows slightly higher execution times than the FPGA-based emulation (between \(\times 1.5\) and \(\times 2.3\)). In all cases the optimized VP simulation has the fastest execution time.
### Table 1: Results of the benchmark experiments

<table>
<thead>
<tr> <th>Benchmark</th> <th>#instr-exec.</th> <th>LoC in C</th> <th>LoC in ASM</th> <th>FPGA</th> <th>RTL-Sim.</th> <th>VP normal</th> <th>VP opt</th> </tr>
</thead>
<tbody>
<tr> <td>Fibonacci</td> <td>240,118</td> <td>24</td> <td>122</td> <td>0.09 s</td> <td>12.07 s</td> <td>0.16 s</td> <td>0.08 s</td> </tr>
<tr> <td>GCD</td> <td>500,075</td> <td>31</td> <td>105</td> <td>0.19 s</td> <td>32.24 s</td> <td>0.43 s</td> <td>0.14 s</td> </tr>
<tr> <td>Bubblesort</td> <td>1,518,041</td> <td>45</td> <td>194</td> <td>0.36 s</td> <td>78.18 s</td> <td>0.81 s</td> <td>0.24 s</td> </tr>
<tr> <td>FreeRTOS-queues</td> <td>416,897</td> <td>220</td> <td>11,048</td> <td>-</td> <td>31.67 s</td> <td>0.64 s</td> <td>0.22 s</td> </tr>
<tr> <td>FreeRTOS-tasks</td> <td>3,078,543</td> <td>93</td> <td>10,988</td> <td>-</td> <td>40.55 s</td> <td>1.63 s</td> <td>0.54 s</td> </tr>
</tbody>
</table>

### Table 2: FPGA timing, area, synthesis and place & route statistics

<table>
<thead>
<tr> <th>Description</th> <th>Value</th> </tr>
</thead>
<tbody>
<tr> <td>$f_{\text{max}}$</td> <td>28.61 MHz</td> </tr>
<tr> <td>Logic Cells</td> <td>4297 / 7680 (55%)</td> </tr>
<tr> <td>BRAM Cells</td> <td>25 / 32 (78%)</td> </tr>
<tr> <td>IO Cells</td> <td>33 / 256 (12%)</td> </tr>
<tr> <td>Synthesis time</td> <td>11.6 s</td> </tr>
<tr> <td>Place &amp; Route time</td> <td>18.96 s</td> </tr>
</tbody>
</table>

### 5.3 FPGA Synthesis Statistics Table 2 shows the various statistics of the FPGA design flow. The μRV32 SoC can be operated at a maximum clock frequency $f_{\text{max}}$ of 28.61 MHz with a device utilization of 55%. The BRAM cell utilization (78% here) varies with the program used to initialize the memory. To synthesize the design into a netlist, Yosys took 11.6 s; the place & route of the netlist took NextPNR 18.96 s. This sums to circa 30 seconds for the FPGA toolchain to generate a bitstream that can be configured onto the FPGA.

### 6 DISCUSSION AND FUTURE WORK The combination of RTL and VP implementation provided by μRV32 delivers a strong foundation for investigating advanced cross-level methodologies and design flow techniques. While the VP allows for fast and early SW development and HW prototyping through its TLM simulations, the RTL description provides cycle-accurate simulation results and FPGA realization. At the same time, μRV32 is very accessible for education and research as the platform and complete tool flow are available open source and fully supported under Linux. To further extend and boost the capabilities of this cross-level platform we plan to:
- Investigate cross-level methodologies between the VP and the RTL descriptions for verification, simulation and modeling purposes. One direction is the integration of RTL peripherals into the SystemC TLM simulation using the C++ RTL models obtained through the Verilator tool to selectively obtain fast and cycle-accurate simulation results.
- Extend the μRV32 SoC platform to integrate additional peripherals at the RTL/VP level and extend the core to include support for more standard RISC-V instruction set extensions. Further, investigate a Domain Specific Language (DSL) to integrate custom instruction set extensions at the RTL and VP level.
- Consider formal methods and comprehensive simulation-based techniques to validate the platform and in particular the RISC-V core.
A cross-level co-simulation setting in combination with advanced test generation techniques, such as fuzzing and constrained random approaches, seems very promising for pursuing this direction. ### ACKNOWLEDGMENTS This work was supported in part by the German Federal Ministry of Education and Research (BMBF) within the project Scale4Edge under contract no. 16ME0127 and within the project VerSys under contract no. 01IW19001. ### REFERENCES
{"Source-Url": "http://www.informatik.uni-bremen.de/agra/doc/work/destion2021-final45.pdf", "len_cl100k_base": 5669, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22081, "total-output-tokens": 7260, "length": "2e12", "weborganizer": {"__label__adult": 0.0006990432739257812, "__label__art_design": 0.0008883476257324219, "__label__crime_law": 0.0005693435668945312, "__label__education_jobs": 0.0012645721435546875, "__label__entertainment": 0.00016641616821289062, "__label__fashion_beauty": 0.0003147125244140625, "__label__finance_business": 0.0003707408905029297, "__label__food_dining": 0.0005598068237304688, "__label__games": 0.0014581680297851562, "__label__hardware": 0.044219970703125, "__label__health": 0.0008039474487304688, "__label__history": 0.0007390975952148438, "__label__home_hobbies": 0.0002949237823486328, "__label__industrial": 0.0024585723876953125, "__label__literature": 0.00027370452880859375, "__label__politics": 0.0005178451538085938, "__label__religion": 0.001140594482421875, "__label__science_tech": 0.39794921875, "__label__social_life": 0.00010412931442260742, "__label__software": 0.00901031494140625, "__label__software_dev": 0.533203125, "__label__sports_fitness": 0.0005936622619628906, "__label__transportation": 0.0020351409912109375, "__label__travel": 0.00033164024353027344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 27224, 0.04998]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 27224, 0.18264]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 27224, 0.85519]], "google_gemma-3-12b-it_contains_pii": [[0, 5181, false], [5181, 10489, null], [10489, 11828, null], [11828, 15945, null], [15945, 20498, null], [20498, 27224, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5181, true], [5181, 10489, null], [10489, 11828, null], [11828, 15945, null], [15945, 20498, null], [20498, 27224, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 27224, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 27224, null]], "pdf_page_numbers": [[0, 5181, 1], [5181, 10489, 2], [10489, 11828, 3], [11828, 15945, 4], [15945, 20498, 5], [20498, 27224, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 27224, 0.07463]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
d0aaf16d6d097540d944df47132671e21fe5a1cd
Only Recommend to You: Towards Personalized Models for Question Recommendation in Community Question Answering* Xin LIAN, Xiangyu HU, Haiwei ZHANG, Xinyu CHEN, Xiaojie YUAN* Institute of Computer Science and Technology, Nankai University, Tianjin 300071, China Abstract Question Answering communities such as Yahoo! Answers have emerged as a popular medium for online information seeking and knowledge sharing. However, as these QA sites always have thousands of new questions posted daily, it’s time-consuming for users to find the questions that are of interest to them. Some QA sites conduct question recommendation by category filtering. Category filtering is efficient but not very effective. While smart use of non-textual features is crucial in many web services, there has been little research to develop systematic and formal approaches to process these features. To solve the problem, we present a machine learning approach to predict whether a user will be interested in an unsolved question after category filtering. Whether a question is recommended to a user depends on the user’s prior experience, expectations, and personal preferences. We develop personalized models, formalize the problem, and explore a variety of content, structure, and interaction features for this task using standard machine learning techniques. The experimental results show that our approach leads to a better performance than other baseline approaches and increases the F-measure by a factor ranging from 15 to 20%.. Keywords: Community Question Answering; Question Recommendation; Personalized Model; Machine Learning 1 Introduction Community Question Answering (CQA) has become a popular medium for online information seeking and knowledge sharing. In contrast to search engines, questions in the CQA are posted in the form of natural language and attract more specific users to answer. In the last few years, many CQA systems have been launched, including Yahoo! Answers\(^1\), BuyAns\(^2\), Live QnA\(^3\). Since their inception, CQA sites have rapidly gained popularity. Hundreds of millions of answers have --- \(^1\)http://answers.yahoo.com \(^2\)http://buyans.com \(^3\)http://qna.live.com *Project supported by the National Nature Science Foundation of China (No. 61170184). *Corresponding author. Email address: forwarding82@gmail.com (Xiaojie YUAN). already been posted for tens of millions of questions in Yahoo! Answers. Unfortunately, it’s time-consuming for users to find the questions that are of interest to them. Users often focus on two or three fields, but they have to decide which field to browse this time. As a result, large quantities of questions are in the state of no response and askers would have to wait for a long time before getting answers. Question recommendation is to help users find interesting questions and expedite the answering of new questions. Many previous approaches can be classified into the three categories. 1) Content-based rec- ommendation [1, 2]: The methods assume that the user has high interest in new question if he/she has answered many similar questions before. Their target is to find similar questions. The methods are based on similarity algorithms, which have a lot of theory basic and classical models. Nevertheless, the methods focus on text contents, weakening the structure information of forums. 2) Collaborative Recommendation[6, 7]: The methods assume that the similar users may interest in the same questions which are often used to recommend in the field of movie, music, books and so on. 
They cluster people into groups with similar interest. Computing similarities between people and movies/music/books allows to recommendation or not. However, users may interest in two or three fields, the complexity of clustering and low precision are big challenges. 3) Authority-based recommendation[8, 9]: The methods’ target to find domain experts by user networks. Experts indeed improve the quality of the answers. The disadvantage is the number of experts is limited. They are not able to deal with tens of millions of questions. In addition, they ignore the content of the posts and rank users globally. The low participation rate of users in CQA service is the crucial problem which limits its development potential. In a word, most researches are based on text statistics, not making full use of the features of CQA. Especially, the precision is low when the users’ history data is little. To tackle this problem, we present a machine learning approach after category filtering. The category information of questions is proven to be positive information for question answering recommendation[5, 10, 11]. Sina iAsk and Yahoo! Answers recommend the unsolved questions to the answerer based on category analysis. This paper makes use of category filtering to reduce the number of candidates for question recommendation. Some researchers have solved the problems in the CQA by adapting machine learning techniques, including user satisfaction[12, 13],questions popularity[14], predicting the best answer[15]. They got higher precision and less dependency on the data size. We explore structure features of a forum and give a detailed analysis. Then we choose an appropriate classification algorithm. The experimental results show our approach leads to a better performance than other baseline approaches and increases the F-measure by a factor ranging from 15 to 20%. The rest of this paper is organized as follows. Section 2 reviews some prior work related to our approach. Section 3 details the proposed framework including models, features and classifiers. Section 4 reports on the experimental evaluation. At last, we conclude the paper and discuss about the future work in Section 5. 2 Related Work Content-based recommendation can be divided into question search and expert search. Question search is to return questions semantically equivalent or close to a given question. Duan et al.[2] proposed to conduct question search by identifying question topic and question focus. Expert search is to estimate the probability p(u|q) of a user being an expert for a given question based on the previous question answering of the user. M. Qu et al.[1] adopted the Probabilistic Latent Semantic Analysis (PLSA) model for question recommendation. PLSA model is known for its ability of capturing underlying topics. Authority-based recommendation is to discover users’ authorities by user networks, which is also called expert finding. Jurczyk et al.[8] and Zhang et al.[9] evaluated link algorithms PageRank and HITS to rank users based on their authority scores. The difference is that Zhang et al. is applied to a small data set. Some researches[4, 3] integrated the contents of posts and users networks to rank users. Some researchers have solved some problems in the CQA by adopting machine learning techniques, which is closely related to our work. Liu et al.[12, 13] proposed towards personalized models for predicting satisfaction in CQA. 
Features were organized around the basic entities in a question answering community: questions, answers, question-answer pairs, users and categories. Closest to our work, Sun et al.[14] designed a ‘majority-based perception algorithm’ and explored only two aspects of features: features about the question and features about the asker to predict question popularity. Our work differs from it in that we predict question popularity for an individual user, not for all users. We also explored more features and gave a rationale for them. 3 Question Recommendation In this section, we present our approach to the problem of question recommendation by means of machine learning. It consists of four steps: (a) propose a new scheme, which is illustrated in Figure 1; (b) give the problem definition and the basic assumption; (c) learn features; (d) explore several families of classification algorithms. 3.1 Scheme Users can play four different roles in a community question answering site: asker, answerer, voter and searcher. This paper focuses on the roles of asker and answerer. Askers post questions. Answerers answer questions. A user can be both an asker and an answerer. However, the community rules forbid askers from answering their own questions, so for a given question a user is either an asker or an answerer. For conciseness, we use these roles to describe users in a question and its answers. The scheme of our approach is shown in Figure 1. The solid lines show the system behavior, the dotted lines show users’ behavior. It consists of three main components: Information Storage, Category Filtering and Personalized Classification Model. Information Storage stores questions, answers and the operation history of each user, including posting a question, answering a question, and refusing a recommendation. For a recommended user, the Category Filtering component selects candidate questions according to the category of the questions that the user has just answered. The Personalized Classification Model component chooses a machine learning training algorithm to build a personalized model with the features of the user’s history information. The model assigns each candidate question to the recommendation category or not and adjusts itself with the feedback from the user. 3.2 Problem statement The problem is described as a two-class classification problem: recommendation or non-recommendation, without distinguishing a “possible recommendation” class. For a pair of user $u$ and question $q$, the problem is defined as: $$c(u, q) \rightarrow \{\text{recommendation}, \text{non-recommendation}\}$$ The best way of evaluating recommendation is evaluation by the recommended users themselves. However, there is no such evaluation information in the initial state of a question recommendation system, and it is hard to find those answerers to cooperate with our research. Therefore, we make a basic assumption. We assume that users are interested in the questions that they answered, while they are uninterested in the previous and next questions near the questions that they answered. The assumption is based on an observation. Questions are often displayed as a list within a category, with about 20 questions per page of the question list. Users scan two or three questions at a time and then choose interesting questions to answer. We cannot confirm the exact number of questions that users scan at once, but the questions nearest to the questions answered by the users must have been seen. If users do not choose to answer them, it shows that they are uninterested or unable to answer.
Such questions shouldn’t be recommended to the users. **Definition 1** A user in a CQA is considered to be interested in a question if the user scans and answers the question. Otherwise, the user is considered to be uninterested if the user doesn’t answer after scanning it. The question which nearest to the question answered belongs to the uninterested situation if it isn’t answered by the user. 3.3 Learning features There are four basic entities in a community question answering. They are questions, answers, askers, answerers. The paper is towards personalized. In a personalized model, the answerer features are all the same. Therefore, the portion of answerers can be omitted. We will use Yahoo! Answers as the example to describe our approach although our approach can be applied to the questions from other CQA sites as well. We derived a set of 18 different features for questions, answers and askers entities. These features are listed as follows, each followed by a brief explanation of its underlying rationale. They have obvious personalized characteristics. - **Category**: CQA sites build corresponding categories and subcategories according to users’ interest fields, although the number and the name of categories are not the same between the CQA sites. The behavior of a user in different categories may be different as well. - **QuestionLength**: Answerers see questions title first when they scan questions list. Statistics show that most answerers prefer to longer question title. - **QuestionType**: QuestionType indicates the information type that askers need, which can be divided into who, where, when, what, which, why, how and yes/no. - **DescriptionLength**: The lengths of question descriptions are very different. Statistics show that some answerers prefer to no description or elaborate description (hundreds of words), while some answerers prefer to simple description (tens of words). - **QuestionStars**: In Yahoo! Answers, if users regard a question as interesting, they can add a star to the question no matter whether they answer it or not. The number of stars reflects the popularity of a question. - **AnswersNumber**: AnswersNumber is the number of answers before an answerer answers. The reason why we choose the number of answers before an answerer answers is to simulate the situation when the answers receive the question. - **VotesNumber**: If users support or oppose to an existed answer, they can give a positive vote (thumbs up) or a negative vote (thumbs down). VotesNumber is the number of votes on the answers before an answerer answers. It reflects the popularity of a question and the quality of the existed answers. - **AskerProfilePhoto**: Yahoo! Answers supplies some system profile photo. Statistics show that users who use a personalized profile photo post or answer questions actively. - **AskerBestRate**: The best rate is the rate of the user’s answers being regarded as the best answer for users’ answers. It indicates askers’ authority. In General, the more specific askers attract the more answerers. - **AskPoint and AskerPointRate**: In Yahoo! Answers, users get points by answering, commenting, voting and so on except posting questions. AskPoint is the total points from registration. AskerPointRate is the average points every day. Therefore, more points indicate more active in answering. - **AskerQuestionsNumber and AskerAnswersNumber**: Askers who post/answer more questions are more experienced. They may give more accurate question description, more attractive question title. 
- **AskerQuestionsStars**: AskerQuestionsStars is the total number of question stars for an asker’s all posted questions. - **Textual Features**: We derive word n-gram (unigram and bigram) features from the text of the question. We ignore the stop words and useless words in the unigram features. Textual feature selection is based on Information Gain (IG). 3.4 Classification algorithms We conducted a set of experiments with some families of classification algorithms: Decision trees, Support Vector Machines, Boosting, Naive Bayes and Logistic regression, all using the implementations in the Weka\cite{16} framework. - **Decision Trees**: A decision tree is a decision support tool that uses a tree-like graph or model of decisions. Decision trees model is built according to the training data set. We apply the Weka implementation of J48. - **SVM**: An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are predicted to belong to a category based on which side of the gap they fall on. We apply the Weka implementation of SMO. - **Naive Bayes**: A naive Bayes classifier is a simple probabilistic classifier based on applying Bayes’ theorem with strong (naive) independence assumptions. It assumes that the presence (or absence) of a particular feature of a class is unrelated to the presence (or absence) of any other feature. We apply the Weka implementation of NaiveBayes. - **Boosting-based**: Boosting is a machine learning meta-algorithm for performing supervised learning. Most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are typically weighted in some way that is usually related to the weak learners’ accuracy. We apply the Weka implementation of AdaBoost. - **Logistic regression**: In statistics, logistic regression (sometimes called the logistic model or logit model) is a type of regression analysis used for predicting the outcome of a binary dependent variable (e.g. "yes" vs. "no") based on one or more predictor variables. We apply the Weka implementation of SimpleLogistic. It incorporates attribute selection by fitting simple regression functions in LogitBoost. 4 Experimental Evaluation We now describe the measures used for the evaluation, the dataset and the experimental results. We want to recommend current question to a user who is interested in the question. In other words, we predict whether the user will be interested for a given user and current question. 4.1 Dataset and evaluation metrics Our data was based on a snapshot of Yahoo! Answers, crawled in the early 2012. There are 26 top-level categories at Yahoo! Answers, such as “Arts & Humanities”, “Education & Reference”. We randomly selected 100 users whose activities are public in each top-level category, 2600 users in total. Then we collected the questions that have been answered by the 100 users in the latest two months and last/next questions that are assumed to be uninterested in by the users in the definition 1. Statistics on the data sets is shown in Table 1. Even though ours is formally a two-class classification problem, we primarily focus on the recommended or positive class. The reason for this is that we have higher certainty about the true positive likelihood of our recommended labels compared to the non-recommended more properly to be stated as unknown cases. 
We made use of three measures for evaluating the experimental results: Precision, Recall and F1. Precision is the fraction of the predicted interesting questions that were indeed answered by the user. Recall is the fraction of all answered questions that were correctly recommended by the system. F1 is the harmonic mean of the Precision and Recall measures, computed as $\frac{2PR}{P+R}$. 4.2 Methods compared We now describe the baselines, the variants of our own method and the compared methods from other papers. The reason why we did not choose further methods for comparison is that they require manual annotation. The paper [4] proposed three methods. The first one is based on users’ profiles, the second one is based on latent topics, and the last one is based on the clustering of users. The former two methods achieved higher performance, so we choose these two methods for comparison and set the parameters according to [4]. That work adopted probabilities to rank the recommended users and recommended a question to the top-k users. The Precision of the top-k users is the percentage of the top-k candidate answerers retrieved that are correct. In our dataset, the number of users who can be recommended a question is at most 5; therefore, the Recall of the top-k users is the same as the definition in Section 4.1. The experiments show that the performance in the top-5 setting outperforms that of top-10, so we compare with the two methods in the top-5 setting. - **QRM_C4.5**: Our system implementing a decision tree using the C4.5 algorithm. - **QRM_SVM**: Our system implemented using the SVM classifier. - **QRM_Boosting**: Our system implementing the AdaBoost algorithm. - **QRM_NB**: Our system implementing the Naive Bayes classifier. - **QRM_SL**: Our system implementing the SimpleLogistic classifier. - **Profile**: A system computing user expertise with a Profile-based model. - **Thread**: A system computing user expertise with a Thread-based model. 4.3 Experimental results First, we report the main classification results of the paper. Second, we choose the best variant to compare with Profile and Thread while varying the amount of users’ history data. Finally, to gain a better understanding of the important features for this domain, we report the top 10 non-textual features and textual features with the highest Information Gain. To derive text features, we use Lucene[17] to preprocess the question text, including tokenization, stop word filtering and stemming. Table 2 shows the recommendation accuracy for the different implementations of QRM, in particular comparing the choice of classifier algorithm and feature sets (whether to use the textual features). In this comparison, QRM_SL results in the best performance of all the classification variants, while QRM_NB results in the worst performance. QRM_NB assumes that the presence (or absence) of a particular feature of a class is unrelated to the presence (or absence) of any other feature. Its low performance shows that dependencies between features exist. QRM_SLogistic builds a logistic regression model using LogitBoost. LogitBoost minimizes the logistic loss and places less emphasis on examples that are very badly classified, which makes it more appropriate when there is noise in the labels. Our features are numeric, for which SimpleLogistic is well suited. Hence, in the next experiment, we choose QRM_SLogistic to compare with the other methods.
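To make the metric definitions from Section 4.1 concrete, the following small C helper computes Precision, Recall and F1 from raw counts of true positives, false positives and false negatives. It is an illustrative sketch only and not part of the evaluation code; the example counts are arbitrary.

```c
#include <stdio.h>

/* Precision = TP / (TP + FP), Recall = TP / (TP + FN),
   F1 = harmonic mean of Precision and Recall = 2PR / (P + R). */
static void prf1(int tp, int fp, int fn, double *p, double *r, double *f1) {
    *p  = (tp + fp) > 0 ? (double)tp / (tp + fp) : 0.0;
    *r  = (tp + fn) > 0 ? (double)tp / (tp + fn) : 0.0;
    *f1 = (*p + *r) > 0.0 ? 2.0 * (*p) * (*r) / (*p + *r) : 0.0;
}

int main(void) {
    double p, r, f1;
    /* Arbitrary example counts for the "recommendation" (positive) class. */
    prf1(65, 35, 27, &p, &r, &f1);
    printf("P=%.2f R=%.2f F1=%.2f\n", p, r, f1);
    return 0;
}
```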
Figure 2 reports the Precision, Recall and F1 for Profile, Thread, QRM_SL and QRM_SL+Text with a varying number of previously answered questions. The abscissa shows the number of answered questions used for training. We then use each user’s latest 10 questions (5 answered and 5 unanswered) for testing. The QRM_SL and QRM_SL+Text methods outperform the Profile and Thread contrast methods, especially with fewer than 30 previously answered questions. For QRM_SL+Text, textual information becomes helpful for users with more than 30 previously answered questions. The contrast methods show a clear upward trend with the number of answered questions, because those methods are based on probability statistics, which depend on the quantity of data. Nevertheless, 30 questions in training are sufficient to achieve an F1 of 0.61 for our methods. Therefore, our methods are more effective with small quantities of data and increase the F-measure by a factor ranging from 15 to 20%. 5 Conclusions This paper describes a personalized model to conduct question recommendation in CQA. We formalized the problem, explored a variety of content, structure, and interaction features for this task using standard machine learning techniques and gave a brief explanation of these features’ underlying rationale. Experimental results on real data from Yahoo! Answers show that the proposed approaches can effectively recommend questions. In particular, our methods outperform methods based on probability statistics when only small quantities of data are available. We increase the F-measure by a factor ranging from 15 to 20%.

Table 2: Comparison of QRM with different classifiers

<table>
<thead>
<tr> <th>Classifier</th> <th>P</th> <th>R</th> </tr>
</thead>
<tbody>
<tr> <td>QRM_C4.5</td> <td>0.65</td> <td>0.66</td> </tr>
<tr> <td>QRM_NB</td> <td>0.64</td> <td>0.59</td> </tr>
<tr> <td>QRM_SVM</td> <td>0.62</td> <td>0.72</td> </tr>
<tr> <td>QRM_SLogistic</td> <td>0.65</td> <td>0.73</td> </tr>
<tr> <td>QRM_Boosting</td> <td>0.65</td> <td>0.7</td> </tr>
</tbody>
</table>

Fig. 2: Precision, Recall and F1 of the compared methods with a varying number of previously answered questions

References
{"Source-Url": "http://www.joics.com/publishedpapers/2012_9_16_4987_4995.pdf", "len_cl100k_base": 4865, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 24080, "total-output-tokens": 6278, "length": "2e12", "weborganizer": {"__label__adult": 0.0003914833068847656, "__label__art_design": 0.0006203651428222656, "__label__crime_law": 0.0005025863647460938, "__label__education_jobs": 0.012969970703125, "__label__entertainment": 0.0003418922424316406, "__label__fashion_beauty": 0.0003445148468017578, "__label__finance_business": 0.0011720657348632812, "__label__food_dining": 0.0005631446838378906, "__label__games": 0.0017671585083007812, "__label__hardware": 0.0009469985961914062, "__label__health": 0.0009169578552246094, "__label__history": 0.0006055831909179688, "__label__home_hobbies": 0.00020241737365722656, "__label__industrial": 0.0003788471221923828, "__label__literature": 0.0013484954833984375, "__label__politics": 0.0005145072937011719, "__label__religion": 0.0005927085876464844, "__label__science_tech": 0.2176513671875, "__label__social_life": 0.0007243156433105469, "__label__software": 0.21337890625, "__label__software_dev": 0.54296875, "__label__sports_fitness": 0.00037479400634765625, "__label__transportation": 0.00047707557678222656, "__label__travel": 0.0003685951232910156}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25839, 0.03376]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25839, 0.19203]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25839, 0.91382]], "google_gemma-3-12b-it_contains_pii": [[0, 2364, false], [2364, 6162, null], [6162, 8173, null], [8173, 11424, null], [11424, 14467, null], [14467, 17668, null], [17668, 20495, null], [20495, 23209, null], [23209, 25839, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2364, true], [2364, 6162, null], [6162, 8173, null], [8173, 11424, null], [11424, 14467, null], [14467, 17668, null], [17668, 20495, null], [20495, 23209, null], [23209, 25839, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25839, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25839, null]], "pdf_page_numbers": [[0, 2364, 1], [2364, 6162, 2], [6162, 8173, 3], [8173, 11424, 4], [11424, 14467, 5], [14467, 17668, 6], [17668, 20495, 7], [20495, 23209, 8], [23209, 25839, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25839, 0.05634]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
5b3b91468e31778e9e1febd2535d3a0fef2ee9cf
Secure Coding. Practical steps to defend your web apps. Copyright SANS Institute Author Retains Full Rights This paper is from the SANS Software Security site. Reposting is not permitted without express written permission. Interested in learning more? Check out the list of upcoming events offering "Defending Web Applications Security Essentials (DEV522)" at http://software-security.sans.orghttp://software-security.sans.org/events/ Application security (AppSec) is maturing for most organizations, according to the 475 respondents who took the SANS 2016 State of Application Security survey. In it, respondents recognize the need for AppSec programs and are working to improve them, despite a lack of the necessary skills, lack of funding and management buy-in, and silos between departments hampering their AppSec programs. Despite these mostly organizational inhibitors, the majority say their programs are maturing or mature: 38% say their AppSec programs are “Maturing,” while 22% say their programs are “Mature” and 4% report programs that are “Very Mature.” The majority (67%) have also partially integrated AppSec into their overall security, risk management and incident response (IR) programs, while another 17% have achieved full integration. They are also making stronger demands on third-party vendors: 40% of the 2016 survey respondents have documented approaches and policies to which third-party software vendors must adhere, while in 2015, only 28% had any comprehensive vendor risk management program and the majority relied on the word of the vendors.¹ Respondents identified training as the most useful AppSec process, even ahead of vulnerability scanning. Much of that training may be going to developers. Unlike last year, when 22% of respondents indicated that the development team was responsible for security testing, now 30% of respondents assign responsibility for security testing to the development team. Results also show that organizations are defining AppSec testing roles and responsibilities across their security, development, business, architecture and QA teams. This may explain why only 23% said their applications were the source of actual breaches that resulted in attacks on others or loss of sensitive data. Of those, public-facing web applications were the largest items involved in breaches and experienced the most widespread breaches, which aligns with respondents’ ranking of different applications by risk. Accordingly, most AppSec resources are allocated to public-facing web applications. Overall, the survey results reveal that it is critical for an overall enterprise security program to coordinate efforts among developers, architects and system administrators—particularly since many software vulnerabilities are rooted in configuration issues or third-party components, not just in code written by the development team. AppSec is not a problem of a particular industry. Today’s companies all rely on data and software to process data. As a result, AppSec affects all sectors and sizes of organizations, and our respondents represent a wide array of businesses of different sizes. The respondents for our survey were split about evenly between small and medium size companies (<1,000 employees), large companies (1001–10,000 employees) and very large enterprises and governments (> 10,001 employees). Even smaller companies often invest heavily in custom applications to achieve a competitive advantage. 
AppSec protects these systems and ensures not only that proprietary data is secure from theft, but that decisions are made based on correct and reliable data. **Industry Type** The financial services, government and application development verticals were the most common industries chosen by participants. As noted in the 2015 survey, application development companies feel pressure from customers to provide security assurance for their products. See Figure 1. The “Other” category ranks second highest among the industries represented. It includes a variety of respondents, such as consulting and professional services firms, as well as media-related industries, engineering and construction, transportation and pharmaceuticals, that reflect the ubiquitous nature of software development and the need for AppSec. ![Figure 1. Top Industries Represented](image-url) Roles Security administrators and analysts made up 30% of respondents, while 21% represented senior-level security managers and 12% were security architects, as illustrated in Figure 2. This survey base is consistent with the SANS membership, which is made up of administrators, engineers and managers focused on security and risk management. Responsibility for AppSec Although security professionals represented the largest group in this survey, they are not necessarily the ones who are managing risk associated with their applications. For example, responses reveal a large and distributed group of roles that are responsible for testing AppSec, developing and executing the corrective action plan, performing final acceptance and signing off on test results. See Figure 3. Who is responsible for running the application security testing for your organization or work group? Who is responsible for final acceptance of the testing results and any corrective actions resulting from that testing? Select all that apply to your organization. ![Figure 3. Responsibility for AppSec Testing, Acceptance and Correction](image-url) As expected, for most respondents, internal teams take the lead for testing, with the development team taking the lead for the corrective action plan. Business owners take the lead for final acceptance. Unlike last year, when 22% of respondents indicated that the development team is responsible for security testing, now 30% of respondents assign responsibility for security testing to the development team. This may reflect a difference in responding organizations, who is considered a member of the development team, or a trend toward developing more security competencies on the development team. Such a trend follows what we saw in last year’s survey, where developers indicated they were improving their secure DevOps practices and finding secure development training to be highly effective in reducing their risk.² Use Independent Testers Treat quality assurance and security bugs as having equal importance. Use an independent team of testers who are, necessarily, separate from the developers who write the original code. A different set of eyes is more likely to find bugs because they don’t already know how the application is supposed to work. AppSec is still a developing area and is not as mature as many infrastructure and system security programs. The largest response group (38%) considers its AppSec program to be “maturing,” while only 26% of respondents consider their programs to be “mature” or “very mature,” as shown in Figure 4. Any corporate risk assessment should include an AppSec security component to be meaningful. 
For instance, more mature organizations use models, such as the Capability Maturity Model Integration for Development (CMMI-DEV), as the guide for their application development programs. However, many organizations have a limited focus on security-related best practices. To that end, the CMMI Institute released a guide for improving processes relating to the development and delivery of secure applications. Organizations invested in CMM-DEV should review the application guide, “Security by Design with CMMI for Development,” Version 1.3, which provides guidance on improving the existing processes with security components. --- Most Mature Sectors Only 3% of respondents have no AppSec program at all and no plans to enact one, which indicates the importance of AppSec. In particular, in the financial industry, and for larger companies that are subject to industry and government regulations, AppSec is becoming a compliance issue and receiving C-level attention as a result. Table 1 provides an informal look at how mature respondents believe their AppSec programs are by the most represented industries. <table> <thead> <tr> <th>Industry</th> <th>Very Mature</th> <th>Mature</th> <th>Maturing</th> <th>Immature</th> <th>Nonexistent (w/AppSec Plans)</th> <th>Nonexistent (No AppSec Plans)</th> </tr> </thead> <tbody> <tr> <td>Financial Services/Banking</td> <td>1%</td> <td>28%</td> <td>47%</td> <td>19%</td> <td>1%</td> <td>3%</td> </tr> <tr> <td>Government</td> <td>4%</td> <td>14%</td> <td>38%</td> <td>24%</td> <td>12%</td> <td>4%</td> </tr> <tr> <td>Application Development Firm</td> <td>10%</td> <td>29%</td> <td>24%</td> <td>29%</td> <td>10%</td> <td>0%</td> </tr> <tr> <td>High Tech</td> <td>8%</td> <td>50%</td> <td>19%</td> <td>15%</td> <td>4%</td> <td>4%</td> </tr> <tr> <td>Health Care</td> <td>4%</td> <td>9%</td> <td>17%</td> <td>70%</td> <td>0%</td> <td>0%</td> </tr> <tr> <td>Telecom or ISP</td> <td>13%</td> <td>22%</td> <td>39%</td> <td>13%</td> <td>4%</td> <td>4%</td> </tr> <tr> <td>Education</td> <td>0%</td> <td>17%</td> <td>11%</td> <td>50%</td> <td>6%</td> <td>17%</td> </tr> <tr> <td>Retail or E-commerce</td> <td>0%</td> <td>11%</td> <td>50%</td> <td>28%</td> <td>6%</td> <td>0%</td> </tr> </tbody> </table> In viewing these results, it is important to note that sample sizes for each industry varied, potentially affecting results. These results illustrate a trend that is not necessarily statistically significant. However, it is clear that the relative maturity of implementation of AppSec programs is higher in some industries. The high-tech industry, financial and banking organizations, and telecom, for example, appear to have higher levels for program maturity, as evidenced by the higher totals of the top maturity levels (77%, 76% and 74%, respectively. Maturity for these industries is essential, given the number of applications they likely develop. A second tier, including retail and application development firms, are maturing. Again, this is not surprising, given today’s digital world. Perhaps surprisingly, though, education leads the list of verticals with immature or nonexistent AppSec programs, with 73% across those options. Most enlightening is that 17% of education respondents neither have an AppSec program nor plans to institute one. This lack of concern for application security is alarming when we consider the number of public-facing web applications used by educational institutions for everything from registration to purchasing textbooks. 
Integration One of the best measures of AppSec maturity is how integrated these processes are with security and IR operations. Despite their concerns about silo mentalities, 67% of respondents have partially integrated AppSec into these operations, and 65% are partially satisfied with this stage of their integration. Another 17% have integrated fully, and 13% are satisfied with this full level of integration. See Figure 5. A fully integrated AppSec program can reap benefits in overall security posture and IR capabilities. An AppSec program spans internally developed applications and applications procured from outside vendors. Integrating such a program provides valuable input for the overall enterprise security program, including IR. For example, for a purchased application, a predeployment AppSec review will identify configuration requirements to ensure that the application is used securely. The review will also identify log management/review requirements and establish a baseline for expected application behavior. In case of an incident, this information can be valuable in helping responders identify the incident and analyze a possible compromise of the application. Respondents report worrying most about public-facing web applications, as well as their legacy applications. These applications are also those most frequently breached, according to the 23% of respondents who say that applications were the source of actual breach, data loss and attacks on others. See Figure 6. ![Figure 6. Applications Leading to Breaches](image) What applications or components were involved or were the cause of these breaches, and how widespread was their impact? Leave blank those that don’t apply. Many web applications are directly exposed to external attacks and, while infrastructure systems such as web application firewalls exist, they are often considered inadequate for deterring a sophisticated attacker. Interestingly, we are also seeing breaches into applications hosted in the cloud, which is an area we should be watching more. Cloud-based web applications are often more exposed than web applications hosted in traditional enterprise networks. In cloud environments, implementing network controls such as firewalls, web application firewalls, intrusion detection systems and similar controls can be difficult. In many cases, implementing these controls requires buying additional expensive services from the cloud provider. Risky Languages As they were in last year’s survey, respondents are most concerned about applications developed in Java and .NET, the predominant languages used in modern enterprise web applications. The focus on these languages is likely due to their popularity in these environments, not a particular weakness in these languages. JavaScript has been an up and coming language in many large web applications on the client side. With technologies such as Ajax and browsers using newer JavaScript APIs as part of HTML5, web applications are taking advantage of JavaScript by pushing more business logic and data to the client. In particular, on websites designed for mobile devices, JavaScript is used heavily to provide users with an “app-like” user experience. However, this trend does make applications more vulnerable by exposing internal data and APIs to external users. Testing tools need to mature enough to adequately support this new breed of applications. 
More recently, JavaScript has also become popular as an option for server-side tools, with frameworks such as AngularJS and Node.js being used to deliver complex applications. The security implications of these frameworks have not yet been fully explored. As with client-side JavaScript, testing of these applications is difficult to automate in the same way testing for traditional web applications is automated. Resources Aligned to Risk When it comes to risk and investment to protect against that risk, web applications are directly followed by legacy applications, in particular legacy applications for which the source code is available. Because they are difficult to patch and upgrade, legacy applications are often considered to be at high risk, even if they are not exposed to the public. Figure 7 illustrates which types of applications are consuming the most security resources. .NET Improving .NET has added incrementally improved security controls in each version. Regularly review any legacy applications written in .NET to take advantage of these additional controls. For example, ASP.NET 5 added a completely new authorization API. The old API used specific, hard-coded role or even usernames to provide access control, which has been difficult to maintain for larger applications. The new authorization API allows for more flexible policies that can be defined with specific requirements and privileges. In the fast-moving world of security, organizations often review and amend secure coding guidelines as new attack vectors are uncovered. The result is that older applications need to be reviewed from time to time to apply new protective measures to the code. This can be a rather time-consuming and expensive undertaking that usually does not add any new features or improve performance. Quite the opposite, the revisions may reduce performance if, for example, newer and stronger cryptographic algorithms are added.
Survey results, however, show that organizations recognize the problem and are dedicating a high level of resources to securing legacy applications.

The lack of AppSec skills, tools and methods was ranked among the top three challenges to implementing AppSec by 38% of respondents, followed by lack of funding or management buy-in (37%), silos between security, development and business units (33%), and identifying all applications in the portfolio (32%), as shown in Figure 8.

What are your top three challenges in implementing application security for systems in production at your organization? *Indicate the top three, in no particular order.*

![Figure 8. Top Challenges](image-url)

To close this fundamental gap (the lack of AppSec skills, tools and methods), training emerges as an important enabler and a foundational process for conducting testing and implementing better development practices. Organizations appear to have the right idea about how to address this concern: respondents overwhelmingly pointed to training developers on AppSec as among the three most useful AppSec processes and tools, with 48% choosing that option. See Figure 9.

Select in no particular order the three most useful application security processes and tools your organization uses.

![Figure 9. Top AppSec Processes and Controls in Use](image-url)

**Funding and Budget**

Lack of funding or management buy-in is the second biggest challenge to AppSec programs, as illustrated in Figure 8. In the survey, 18% of respondents spend less than 1% of their IT budgets on AppSec, 11% spend about 1%, and 23% spend between 2% and 5%. See Figure 10.

Comparing the percentage of the overall IT budget with results from last year's survey does not show a clear trend. In both surveys, the largest portion of respondents didn't know what share of their budget was devoted to AppSec, although that percentage decreased in the 2016 survey. In 2015, 37% of respondents reported that up to 5% of their budget went to AppSec, compared with 51% this year. However, 2015 saw a larger percentage (29%) of organizations devoting more than 6% of their budgets to AppSec, compared with 25% in 2016. See Figure 11.

![Figure 10. Portion of Budget Devoted to AppSec](chart)

It is not surprising that AppSec spending varies widely across the diverse set of organizations represented in this survey. The size of an AppSec program and the budget dedicated to it depend heavily on the amount of internal and external development being undertaken. Another important factor is whether the cost of AppSec is rolled into purchase agreements or broken out as a line item in purchase contracts. The overwhelming majority of organizations (61%) expect AppSec spending to increase in the future. However, about a fifth of respondents didn't provide an answer, which may be due, in part, to those respondents not having budget authority.

**Testing**

The software development life cycle (SDLC) begins with the planning and development of applications and upgrades, and doesn't end until the application has been retired and removed from the environment. Fortunately, respondents appear to understand this: all but 14% test their applications' security. Test schedules are diverse: 60% indicate that they test applications continuously, with 27% using continuous assessment in their Agile development processes, and 53% of respondents test applications when they are initially launched into production. This means some of those doing continuous testing may not be testing at initial launch.
However, it is possible that many of these organizations use a faster update cycle and instead test applications when they are updated, patched or otherwise changed, an option 41% of respondents selected. Figure 12 illustrates the testing cycles followed by respondents' organizations.

When do you assess or test the security of your business-critical applications? *Select those that most apply.*

- Continuously as part of our continuous delivery/Agile process
- Before systems are initially launched into production
- Monthly
- Annually
- When applications are updated, patched or otherwise changed
- When we need to address compliance or internal audit cycles
- When we sense or know there's a problem with the applications
- Other

*Figure 12. Testing Cycles and Practices*

Organizations still rely heavily on various forms of runtime testing, typically performed in the final stages of development or after the application has been deployed. This is also reflected in most of the testing being performed by the security department: internal teams are responsible for testing applications, according to 62% of respondents. Note that this survey was primarily focused on nondeveloper organizations, which would explain why, for these organizations, the IT team typically performs vulnerability scans and penetration tests.

**What They're Finding**

With all the ways they're testing their applications, organizations are finding fewer flaws than we had expected, which may be because this survey focused more on applications in production than on applications in development. The largest group (57%) said they find one to 25 vulnerabilities per month, while 12% find 26 to 50 vulnerabilities per month through their testing efforts. Of the vulnerabilities discovered, 54% of respondents said that only 1% to 10% were critical and in need of immediate patching or countermeasures (such as virtual patching or runtime application self-protection, RASP). See Figure 13.

How many vulnerabilities are you discovering per month in your applications?

<table>
<thead>
<tr>
<th>Number of Vulnerabilities</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>None</td>
<td>26.9%</td>
</tr>
<tr>
<td>1–25</td>
<td>4.1%</td>
</tr>
<tr>
<td>26–50</td>
<td>12.0%</td>
</tr>
<tr>
<td>51–100</td>
<td>6.5%</td>
</tr>
<tr>
<td>101–250</td>
<td>5.7%</td>
</tr>
<tr>
<td>251–500</td>
<td>4.7%</td>
</tr>
<tr>
<td>501–1000</td>
<td>6.5%</td>
</tr>
<tr>
<td>More than 1,000</td>
<td>10.9%</td>
</tr>
</tbody>
</table>

Of these, how many do you rank as critical and in need of immediate remediation?

<table>
<thead>
<tr>
<th>Criticality Level</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Can't tell</td>
<td>4.6%</td>
</tr>
<tr>
<td>1–10% are critical</td>
<td>6.4%</td>
</tr>
<tr>
<td>11–25% are critical</td>
<td>20.5%</td>
</tr>
<tr>
<td>26–50% are critical</td>
<td>15.9%</td>
</tr>
<tr>
<td>51–75% are critical</td>
<td>54.2%</td>
</tr>
<tr>
<td>76–100% are critical</td>
<td>0.8%</td>
</tr>
</tbody>
</table>

*Figure 13. Vulnerabilities Found and Their Criticality*

In the survey, the largest group (24%) said 50–74% of the critical vulnerabilities they found were related to code bugs rather than to misconfigurations, while 21% indicated that only 10–24% of the critical vulnerabilities they found were the result of code-based bugs, as shown in Figure 14.

![Figure 14. Vulnerabilities as a Result of Code Errors Versus Misconfiguration or Other Vulnerabilities](image)
Recently, SSL configuration issues have received a lot of attention, and many web server installations have had to be reviewed to harden their SSL configuration. An application often cannot easily verify that SSL is configured correctly; in fact, the SSL configuration is usually not deployed by the developers themselves. Although an application may check that it is being accessed over SSL, and could block access without SSL, the cipher or SSL version in use is usually not something the application can control.

Encryption of data at rest, on the other hand, is often handled by the application. For example, password hashing requirements have increased over the last few years. While a simple salted MD5 or SHA1 hash may have been considered sufficient in the past, advances in brute-forcing techniques and computing power require modern web applications to use stronger hashing algorithms or to apply the same algorithm many times.

Other common vulnerabilities, for example SQL injection, are not mitigated by configuration choices. The impact of such a vulnerability can, however, be reduced by connecting to the database with an account that has limited privileges. Using an administrator account to connect a web application to a database may itself be considered a vulnerability, even though exploiting it would require a SQL injection or business logic flaw.
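To make these two application-level points concrete (iterated password hashing instead of a single salted MD5/SHA1 hash, and parameterized queries run under a least-privilege account), here is a minimal Python sketch. The iteration count, table schema and account assumptions are illustrative only, not recommendations taken from the survey.

```python
# Minimal sketch of two application-level mitigations discussed above:
# (1) iterated password hashing (PBKDF2) rather than a single salted hash,
# (2) parameterized queries, assumed to run under a low-privilege account.
# Iteration count, schema and account names are illustrative assumptions.

import hashlib
import hmac
import os
import sqlite3

ITERATIONS = 200_000  # assumed work factor; tune for your own hardware

def hash_password(password: str) -> tuple:
    """Return (salt, derived_key) using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password: str, salt: bytes, expected_key: bytes) -> bool:
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(key, expected_key)  # constant-time comparison

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input never becomes part of the SQL text,
    # removing the classic SQL injection vector. The connection is assumed
    # to use an account limited to SELECT on this table.
    return conn.execute(
        "SELECT username, salt, pw_hash FROM users WHERE username = ?",
        (username,),
    ).fetchone()

if __name__ == "__main__":
    salt, key = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, key)
    assert not verify_password("guess", salt, key)
```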
**Remediation**

Respondents unfortunately register a low level of satisfaction with their patching and repair processes. Less than 30% report a 75%–99% level of satisfaction with the speed at which their vulnerabilities are repaired, while only 11% are 100% satisfied. The speed at which patches are applied is comparable to last year's survey, with 26% of vulnerabilities being patched within two to seven days and another 26% within eight to 30 days, as illustrated in Figure 15.

![Figure 15. Time to Patch a Vulnerability](image-url)

Vulnerabilities are repaired in a variety of ways: 58% of respondents say they do thorough updates to the entire environment, while 51% work to resolve the root cause through secure SDLC practices. "Quick and dirty" software patching was cited by the third largest group (50%), and fixing third-party libraries and configuration issues took fourth place (48%). See Figure 16.

Patching the operating system and fixing configurations are very common methods for addressing flaws caused by configuration issues and third-party libraries, so it is no surprise that these are two of the top four methods used to resolve vulnerabilities. Implementing secure SDLC practices often takes time: developers have to be educated, and existing code has to be reviewed for similar flaws, which tends to be time consuming. A quick fix can make sense if it prevents exploitation of the flaw until a more permanent fix can be applied after the issue has been sufficiently researched.

**Vendor Accountability**

In the 2016 survey, 40% of respondents have documented approaches and policies that third-party software vendors must adhere to, whereas in 2015 only 28% had any comprehensive vendor risk-management program. It has become common practice, particularly among larger customers, to add security performance benchmarks to contract language. Application development companies are asked to provide long-term support in the form of security updates, and the total cost to create software may depend on the cost of these long-term support agreements. To estimate and reduce these costs correctly, application development companies need to invest more up front to limit their exposure to security vulnerabilities.

**It's in the Contract**

If you are with an application development company, review your contracts to determine what your obligations are with respect to long-term AppSec, and be sure to add the related expenses to your price quotes. Security should be part of your software development process, and you may find it beneficial to engage the support of vendors that are experts in AppSec. If you are with a company seeking to purchase an application or retain the services of an application development company, be sure to include contract language that places responsibility for securing the application on the vendor.

Results show that it takes a village to protect applications. Security teams, developers, business units, architects and quality assurance personnel are all part of the ecosystem that protects applications. Together, all parties are maturing their AppSec programs and are aware that they need to mature further. Skills shortages will continue to be a problem as new technologies emerge; historically, they have been a problem for almost all InfoSec disciplines, and organizations will need to continue to rely on training and education to develop their skill sets.

Successful AppSec programs are tightly integrated with development life-cycle and procurement processes. Currently, most AppSec programs are still new, and growing them will require sufficient resources. To make the most of limited AppSec budgets, it is critical for these programs to overcome silos so that communication among all stakeholders is promoted. Important ways to strengthen AppSec programs include:

- Use independent testers to check applications in production.
- Treat legacy applications, public-facing web applications and cloud-based applications as key applications that need frequent testing.
- Move toward continuous AppSec testing.
- Do penetration testing before releasing an application.
- Be aware of how your SSL implementations might affect your AppSec.
- Hold vendors accountable for AppSec through specific AppSec contract language.

Johannes Ullrich, dean of research at the SANS Technology Institute, is currently responsible for the SANS Internet Storm Center (ISC) and the GIAC Gold program. His research interests include IPv6, network traffic analysis and secure software development. In 2004, Network World named Johannes one of the 50 most powerful people in the networking industry, and SC Magazine named him one of the top five influential IT security thinkers for 2005. Prior to working for SANS, Johannes served as a lead support engineer for a web development company and as a research physicist.

Eric Johnson, the Application Security Curriculum product manager at SANS, is the lead author and instructor for DEV544 Secure Coding in .NET, as well as an instructor for DEV541 Secure Coding in Java/JEE. A senior security consultant at Cypress Data Defense, Eric has experience that includes web and mobile application penetration testing, secure code review, risk assessment, static source code analysis, security research and developing security tools. He currently holds the CISSP, GWAPT, GSSP-.NET and GSSP-Java certifications.
### Upcoming SANS App Sec Training <table> <thead> <tr> <th>Training</th> <th>Location</th> <th>Dates</th> <th>Platform</th> </tr> </thead> <tbody> <tr> <td>SANS 2020</td> <td>Orlando, FL</td> <td>Apr 03, 2020 - Apr 10, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Amsterdam May 2020</td> <td>Amsterdam, Netherlands</td> <td>May 11, 2020 - May 18, 2020</td> <td>CyberCon</td> </tr> <tr> <td>SANS Silicon Valley - Cupertino 2020</td> <td>Cupertino, CA</td> <td>Jun 22, 2020 - Jun 27, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Copenhagen August 2020</td> <td>Copenhagen, Denmark</td> <td>Aug 24, 2020 - Aug 29, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS Network Security 2020</td> <td>Las Vegas, NV</td> <td>Sep 20, 2020 - Sep 27, 2020</td> <td>Live Event</td> </tr> <tr> <td>SANS OnDemand</td> <td>Online</td> <td>Anytime</td> <td>Self Paced</td> </tr> <tr> <td>SANS SelfStudy</td> <td>Books &amp; MP3s Only</td> <td>Anytime</td> <td>Self Paced</td> </tr> </tbody> </table>
{"Source-Url": "https://software-security.sans.org/resources/paper/reading-room/2016-state-application-security-skills-configurations-components", "len_cl100k_base": 6750, "olmocr-version": "0.1.53", "pdf-total-pages": 26, "total-fallback-pages": 0, "total-input-tokens": 48991, "total-output-tokens": 7975, "length": "2e12", "weborganizer": {"__label__adult": 0.00042319297790527344, "__label__art_design": 0.0002815723419189453, "__label__crime_law": 0.002422332763671875, "__label__education_jobs": 0.0012655258178710938, "__label__entertainment": 8.434057235717773e-05, "__label__fashion_beauty": 0.00018727779388427737, "__label__finance_business": 0.0018587112426757812, "__label__food_dining": 0.0003094673156738281, "__label__games": 0.0008802413940429688, "__label__hardware": 0.0013666152954101562, "__label__health": 0.0006666183471679688, "__label__history": 0.00015223026275634766, "__label__home_hobbies": 0.00010663270950317384, "__label__industrial": 0.0005397796630859375, "__label__literature": 0.00016808509826660156, "__label__politics": 0.0003573894500732422, "__label__religion": 0.00027823448181152344, "__label__science_tech": 0.04010009765625, "__label__social_life": 0.00012010335922241212, "__label__software": 0.031463623046875, "__label__software_dev": 0.916015625, "__label__sports_fitness": 0.0002646446228027344, "__label__transportation": 0.00041556358337402344, "__label__travel": 0.00018990039825439453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33768, 0.03045]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33768, 0.14395]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33768, 0.9495]], "google_gemma-3-12b-it_contains_pii": [[0, 438, false], [438, 438, null], [438, 3059, null], [3059, 4514, null], [4514, 4859, null], [4859, 5645, null], [5645, 6966, null], [6966, 8518, null], [8518, 11697, null], [11697, 12885, null], [12885, 14148, null], [14148, 18068, null], [18068, 18734, null], [18734, 19281, null], [19281, 19935, null], [19935, 20854, null], [20854, 21503, null], [21503, 22919, null], [22919, 25081, null], [25081, 26891, null], [26891, 27430, null], [27430, 28386, null], [28386, 29890, null], [29890, 31352, null], [31352, 32456, null], [32456, 33768, null]], "google_gemma-3-12b-it_is_public_document": [[0, 438, false], [438, 438, null], [438, 3059, null], [3059, 4514, null], [4514, 4859, null], [4859, 5645, null], [5645, 6966, null], [6966, 8518, null], [8518, 11697, null], [11697, 12885, null], [12885, 14148, null], [14148, 18068, null], [18068, 18734, null], [18734, 19281, null], [19281, 19935, null], [19935, 20854, null], [20854, 21503, null], [21503, 22919, null], [22919, 25081, null], [25081, 26891, null], [26891, 27430, null], [27430, 28386, null], [28386, 29890, null], [29890, 31352, null], [31352, 32456, null], [32456, 33768, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 
5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33768, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33768, null]], "pdf_page_numbers": [[0, 438, 1], [438, 438, 2], [438, 3059, 3], [3059, 4514, 4], [4514, 4859, 5], [4859, 5645, 6], [5645, 6966, 7], [6966, 8518, 8], [8518, 11697, 9], [11697, 12885, 10], [12885, 14148, 11], [14148, 18068, 12], [18068, 18734, 13], [18734, 19281, 14], [19281, 19935, 15], [19935, 20854, 16], [20854, 21503, 17], [21503, 22919, 18], [22919, 25081, 19], [25081, 26891, 20], [26891, 27430, 21], [27430, 28386, 22], [28386, 29890, 23], [29890, 31352, 24], [31352, 32456, 25], [32456, 33768, 26]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33768, 0.2378]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
5b4583c8f1304e0dce3fd5b5fe0cc6ca26345dd4
[REMOVED]
{"Source-Url": "http://www.researchgate.net/profile/Hernan_Astudillo2/publication/221045918_The_Tutelkan_Reference_Process_A_Reusable_Process_Model_for_Enabling_SPI_in_Small_Settings/links/004635162c2b7595f5000000.pdf", "len_cl100k_base": 5976, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 37343, "total-output-tokens": 7504, "length": "2e12", "weborganizer": {"__label__adult": 0.0002655982971191406, "__label__art_design": 0.0003418922424316406, "__label__crime_law": 0.00026154518127441406, "__label__education_jobs": 0.0013885498046875, "__label__entertainment": 4.9233436584472656e-05, "__label__fashion_beauty": 0.00012505054473876953, "__label__finance_business": 0.000919342041015625, "__label__food_dining": 0.00024187564849853516, "__label__games": 0.0003573894500732422, "__label__hardware": 0.00040435791015625, "__label__health": 0.00023162364959716797, "__label__history": 0.0001852512359619141, "__label__home_hobbies": 7.128715515136719e-05, "__label__industrial": 0.00030803680419921875, "__label__literature": 0.0002105236053466797, "__label__politics": 0.00017511844635009766, "__label__religion": 0.00026154518127441406, "__label__science_tech": 0.00569915771484375, "__label__social_life": 8.445978164672852e-05, "__label__software": 0.0083770751953125, "__label__software_dev": 0.9794921875, "__label__sports_fitness": 0.0001971721649169922, "__label__transportation": 0.0002913475036621094, "__label__travel": 0.00016129016876220703}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31033, 0.02426]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31033, 0.17373]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31033, 0.89925]], "google_gemma-3-12b-it_contains_pii": [[0, 2815, false], [2815, 6057, null], [6057, 8762, null], [8762, 11544, null], [11544, 14377, null], [14377, 15751, null], [15751, 18725, null], [18725, 21424, null], [21424, 21897, null], [21897, 24086, null], [24086, 27745, null], [27745, 31033, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2815, true], [2815, 6057, null], [6057, 8762, null], [8762, 11544, null], [11544, 14377, null], [14377, 15751, null], [15751, 18725, null], [18725, 21424, null], [21424, 21897, null], [21897, 24086, null], [24086, 27745, null], [27745, 31033, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31033, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31033, null]], "pdf_page_numbers": [[0, 2815, 1], [2815, 6057, 2], [6057, 8762, 3], [8762, 11544, 4], [11544, 14377, 5], [14377, 15751, 6], [15751, 18725, 7], [18725, 21424, 8], [21424, 21897, 9], [21897, 24086, 10], [24086, 27745, 11], 
[27745, 31033, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31033, 0.13836]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
24a518cb2b260178cdb4f0ffecafee1eb20b2d9f
Validation of reactive embedded systems against specification requirements

Joanna Strug, Stanisław Deniziak, Krzysztof Sapiecha
Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
(Corresponding author: pesapiec@cyf-kr.edu.pl)

Abstract

In this paper a method for the automatic generation of test scenarios for the verification of specification requirements (temporal and functional) of reactive embedded systems is presented.

1. Introduction

The aim of design validation is to check whether or not the specification requirements (functional and temporal) imposed on a system are met [1,2]. Most recently proposed design-validation techniques use formal verification methods, such as model checking [1,3] and theorem proving [4]. These methods typically use automata-based models [4] of a system and temporal logic (TL) [5] to express the required temporal properties. However, the temporal properties that can be expressed this way are limited to safety and liveness [6,3]. Some extensions of TL can capture timing properties more precisely. Timed CTL [1,2] introduces time-bounded versions of the temporal operators. Real-time logic (RTL) [6] includes special predicates that relate events occurring in a system to the times at which they occur. The duration calculus [7] adds operators for reasoning over intervals. On the basis of these extensions it is possible to verify certain design properties, including temporal requirements. In [8], two proof methodologies are proposed, corresponding to two styles of specifying real-time properties. A system is modeled as a real-time transition system, and timing properties are expressed either in a time-bounded logic or by explicit reference to the current time through a special clock variable. A deductive proof is then conducted to show consistency with the specification.

Formal verification methods are limited to small and medium-size designs, or are restricted to particular subproblems. For large systems, simulation-based validation techniques are still the most popular [9]. The main problem here is to develop a set of input stimuli giving high validation accuracy. Efficient methods for the automatic generation of test scenarios to validate a system against functional requirements have already been developed [10,11]. However, no comparably satisfactory methods exist for temporal requirements, and there are no efficient methods for validating both types of specification requirements together.

The aim of this paper is to present a method for the automatic generation of test scenarios for the validation of embedded systems [12] against temporal and functional requirements. Test scenarios are derived from the system requirements and are then applied to a model or a prototype of the system. Each test scenario consists of a verification sequence (a sequence of stimuli to be applied to the system inputs) and the expected responses, which are compared with those generated by the system during simulation. The main features of the proposed method are described in Sections 2 and 3. Section 4 contains a short comparison, further considerations and conclusions.

2. Embedded system model

It is assumed that a designer starts by gathering functional and temporal requirements (temporal constraints) for a system. These requirements are usually described in textual form, but it is assumed that each requirement has a unique identifier. Manual translation to a more formal specification (e.g.
SCR [13]) is then performed, and a suitable model of the functional requirements is automatically developed (as described in [10]).

A model of an embedded system $S$ is defined as a pair $S = (T, G)$, where $T$ is the set of tasks$^{1}$ that should be executed by the system and $G = (V, E)$ is a directed graph representing its functional requirements. Each functional requirement, or each of its separated parts (if any), and each task have unique identifiers, denoted $RId$ and $TId$ respectively. The execution time of a task is fixed and data-independent. $V$ is a finite set of nodes; the nodes of $V$ correspond to stable states of the system, and the values of the state variables determine the state of $S$. A distinguished node $v_0 \in V$ represents the initial state of the system. $E$ is a set of edges, each representing a transition between a given pair of nodes. Edges are labeled with stimuli, responses (if any is generated), and requirement and task identifiers. Graph $G$ may be cyclic or acyclic, depending on the system. Multiple edges between the same pair of nodes are also allowed, in order to represent different causes of a transition between the same states.

$^{1}$ Tasks are extracted from a task graph [14,15].

A Safety Injection System (SIS) for a nuclear reactor [10] serves as the running example for our method. The functional requirements for the system are given in Table 1. Each requirement is supplemented with the identifiers of the tasks executed to meet the given requirement or its part. On this basis a model of the system is developed (Figure 1). The state variables and their admissible values are: $WP$ ($P$ – permitted water pressure, $TL$ – water pressure below the threshold $LOW$), $Overridden$ ($T$ – if $Block$ has been asserted, $F$ – if $Reset$ has been asserted), $TrefCnt$ (counts timing reference events; may take the values 0, 1 and 2) and $SJ$ ($Off$ – if the valve is closed, $On$ – if the valve is opened).

Table 1. Functional requirements for SIS

<table>
<thead>
<tr>
<th>RId</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>R1</td>
<td>The system shall assert SafetyInjection when WaterPres falls below LOW (opening a valve, T1).</td>
</tr>
<tr>
<td>R2</td>
<td>(a) The system shall be blocked (blocking, T3) in response to Block being asserted while Reset is not asserted and WaterPres is below LOW, and shall remain blocked until either (c) Reset is asserted or (b) WaterPres crosses LOW from a larger to a smaller value (unblocking, T4, and setting TrefCnt to zero, T6).</td>
</tr>
<tr>
<td>R3</td>
<td>Once SafetyInjection is asserted, it shall remain asserted until the system becomes blocked or WaterPres becomes greater than or equal to LOW (closing a valve, T2).</td>
</tr>
<tr>
<td>R4</td>
<td>When the system is blocked and WaterPres is less than LOW, the system shall (a) start counting (increasing TrefCnt, T5) and (b) automatically unblock itself (T4 and T6) after the third timing reference event is sensed on input Tref.</td>
</tr>
</tbody>
</table>

It is typical of reactive systems that they interact continuously with the environment in which they operate. Hence, constraints imposed on the system by the environment (external requirements) must be considered. These constraints include input signal frequencies, the time separation between signal occurrences on different inputs or between inputs and outputs, etc. [14].
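Before turning to the timing constraints, the functional model just defined can be made concrete with a minimal Python sketch. The node numbers, labels and edges below are illustrative assumptions loosely based on the SIS example, not the exact graph of Figure 1.

```python
# Minimal sketch of the system model S = (T, G). Nodes are stable states,
# and each edge carries a stimulus, an optional response, a requirement
# identifier (RId) and a task identifier (TId). The concrete edges below
# are illustrative assumptions, not the exact SIS graph of Figure 1.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Edge:
    src: int                 # source node (stable state)
    dst: int                 # target node
    stimulus: str            # input event that triggers the transition
    response: Optional[str]  # output generated, if any
    rid: str                 # requirement identifier
    tid: str                 # task identifier

TASKS = {"T1", "T2", "T3", "T4", "T5", "T6"}

EDGES = [
    Edge(1, 2, "WaterPres < LOW",  "SafetyInjection = On",  "R1",  "T1"),
    Edge(2, 1, "WaterPres >= LOW", "SafetyInjection = Off", "R3",  "T2"),
    Edge(2, 3, "Block = On",       "SafetyInjection = Off", "R2a", "T3"),
]

def outgoing(node: int):
    """Return the transitions available from a given stable state."""
    return [e for e in EDGES if e.src == node]

if __name__ == "__main__":
    for e in outgoing(2):
        print(f"{e.src} -> {e.dst}: {e.stimulus} / {e.response} [{e.rid}, {e.tid}]")
```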
There may also exist timing constraints expressing a desired time relation between the system and its environment, or between different tasks (some tasks or devices may require specific timing). In order to represent these constraints (internal requirements), minimal and maximal delays may be introduced. They define the amount of time allowed for the execution of particular task(s). The minimal delay determines the earliest moment at which the execution of the specified task(s) may be completed, whereas the maximal delay determines the time by which it must be completed. A temporal constraint is violated if the execution of the task(s) is completed too early or too late. A unique Constraint Identifier (CId) is associated with each temporal requirement.

The temporal requirements imposed on SIS are given in Table 2, where @A denotes A as the initial event for the execution of tasks, ' and '' indicate paths associated with different tasks, and () and {} denote a constraint associated with a particular path and with marked subsets of nodes, respectively. The requirement described in the second row of Table 2 is associated with a group of paths; the remaining requirements are associated with particular tasks.

Table 2. Temporal requirements for SIS

<table>
<thead>
<tr>
<th>CId</th>
<th>t_min</th>
<th>t_max</th>
<th>Description</th>
<th>Notation</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0</td>
<td>1</td>
<td>Time required for opening a valve (SJ=On) when the water pressure falls below the allowed threshold (@WaterPres &lt; LOW).</td>
<td>(1,2)</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>0.5</td>
<td>Time required for the transition to the proper state (WP=P) when the water pressure rises above the allowed threshold (@WaterPres &gt;= LOW).</td>
<td>{2,3,4,5} =&gt; {1}</td>
</tr>
<tr>
<td>3</td>
<td>0</td>
<td>2</td>
<td>Time required for manually unblocking the system and opening the valve (@Reset=On, SJ=On) when the water pressure is lower than the allowed threshold (WP=TL).</td>
<td>(3,2)', (4,2), (5,2)</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
<td>1.5</td>
<td>Time required for closing a valve (SJ=Off) when Block is asserted (@Block=On) and the water pressure is lower than the threshold (WP=TL).</td>
<td>(2,3)</td>
</tr>
<tr>
<td>5</td>
<td>0</td>
<td>3.0</td>
<td>Time required for automatically unblocking the system and opening a valve (SJ=On) when the system has been blocked and three timing references have been sensed on input Tref.</td>
<td>(3,2)''</td>
</tr>
</tbody>
</table>

3. Verification sequences

The solution applied here is based on the concept of critical paths. A path $S_{ij}$ from node $v_i$ to node $v_j$ in graph $G$ is defined as a sequence of edges $\langle e_{i,i+1}, e_{i+1,i+2}, \ldots, e_{j-1,j} \rangle$, where $e_{k,k+1} \in E$ denotes an edge between nodes $v_k, v_{k+1} \in V$. Each path with which a temporal constraint is associated is called a critical path [16,17]. Generating verification sequences for all critical paths would result in exhaustive verification of all temporal constraints, so reductions are necessary. In our approach a reduced set of critical paths is selected and then evaluated to check whether the paths also cover all functional requirements (the paths should include edges labeled with every RId). The set is then updated with one-edge paths for any missing RId, if necessary. Each critical path determines a subset of tasks that should be executed within the time given by a temporal constraint. A constraint may be imposed on a path representing a subset of tasks given in the specification.
This situation allows multiple paths (between different pairs of nodes) to exist, but all of them represent the same subset of tasks. An example of such a constraint is presented in Figure 2: for task T1, three critical paths ($\langle e_{1,2} \rangle$, $\langle e_{3,4} \rangle$ and $\langle e_{5,4} \rangle$) are determined. A constraint may also be imposed on a transition between given states of the system (referred to as the source and target nodes, respectively). In that case all paths between these nodes are critical and may represent different subsets of tasks. Such a situation is shown in Figure 3, where paths $\langle e_{2,3,4} \rangle$, $\langle e_{2,5,6,4} \rangle$ and $\langle e_{2,4} \rangle$ are all critical.

Design validation based on exhaustive verification sequences is always valid. On the contrary, design validation based on reduced verification sequences might lead to optimistic conclusions. The goal of our work is to generate a reduced but still comprehensive set of test scenarios for a system. To this end, the following assumptions are made:

1. each temporal constraint requires at least one verification sequence to be verified, but all tasks associated with any constraint have to be checked;
2. the execution time of each task belonging to $T$ is fixed and does not depend upon the way the task is started.

The second assumption does not hold for general-purpose systems, but it usually holds for embedded ones. It is not true, however, for tasks whose execution time is data dependent. In that case the validation results are only approximate, but they can be improved by assuming the WCET (Worst Case Execution Time) for maximal delays and/or the BCET (Best Case Execution Time) for minimal delays.

On the basis of these assumptions, the number of paths to be generated and verified can be considerably limited. However, for some systems this might be too optimistic. The temporal correctness of the execution of tasks is checked rather than that of a particular critical path. Nevertheless, the generated set contains at least one verification sequence covering each temporal constraint.

The selection of critical paths to be generated and combined is based on a comparison of the subsets of tasks associated with these paths. Let two critical paths $P$ and $P^*$ be given, with associated task sets $T_P$ and $T_{P^*}$. Path $P$ covers $P^*$ if $T_{P^*} \subseteq T_P$. Figure 4 presents a draft of the main procedure of the test scenario generation algorithm.

```python
# Fig. 4 (idiomatic form). Test scenario generation; the helper routines
# correspond to the steps described in the text.

def covers(tasks_p, tasks_q):
    # Path P covers P* if the task set of P* is a subset of that of P.
    return tasks_q <= tasks_p

def test_scenarios_generation(constraints, G):
    paths = []
    for c in constraints:                       # each temporal constraint CId
        determine_source_and_target_nodes(c, G)
    for c in constraints:
        if c.imposed_on_tasks:                  # constraint tied to a subset of tasks
            src, dst = choose_random_pair_of_nodes(c, G)
            paths.append(generate_path(src, dst, G))
        else:                                   # constraint tied to source/target nodes
            paths.extend(generate_paths(c.source, c.target, G))
    scenarios = combine_critical_paths(paths)   # reject covered paths, build ST
    if not all_requirements_covered(scenarios):
        update_scenarios_tree(scenarios)        # add one-edge paths for missing RId
    save_test_scenarios(scenarios)
```

Fig. 4. An algorithm for the generation of test scenarios

First, the source and target nodes for possible (not yet generated) paths are determined. Next, for each constraint, paths are selected and generated according to the following rules:

1. If a constraint is imposed on a subset of tasks, then verification of any path containing these tasks is sufficient (such paths cover each other). The choice of the path to be generated is not of primary importance and may be random; for example, path $\langle e_{1,2} \rangle$ in Figure 2 may be chosen. The remaining paths associated with the constraint are rejected.
Reductions performed at this step are the most effective, because the number of paths may be significantly limited without generating them.

2. If only the source and target nodes are specified, the paths are generated, and the subsets of tasks associated with them are determined and compared (covered paths are rejected). The minimal subset of paths associated with a given constraint consists of paths representing the execution of different subsets of tasks. In Figure 3, path $\langle e_{2,4} \rangle$ representing task $T3$ and path $\langle e_{2,3}, e_{3,4} \rangle$ representing tasks $T1$ and $T2$ belong to the minimal set for the constraint. Path $\langle e_{2,5}, e_{5,4} \rangle$ may be dropped as a covered one.

The execution of this step produces a reduced set of critical paths. It is the smallest set that includes critical paths representing all different subsets of tasks. Two path generation algorithms are used: the first searches for all possible paths between the specified nodes, and the second makes it possible to determine the edges belonging to a path when the tasks to be executed are specified. Both algorithms use similar techniques. During the generation of critical paths a Paths Tree ($PT$) is built and accepted nodes are added to it; the acceptance functions prevent already visited nodes from being explored again.

Combination of the generated paths allows for further reductions. Minimal coverage of the generated paths is reached in a way similar to [10]: a Scenarios Tree ($ST$) is built and paths are added to it. In the next step the set is evaluated to determine whether all functional requirements are covered by the paths in this set; this relies on checking whether all $RId$ are represented by labels of edges in $ST$. If not all $RId$ have been found, a procedure similar to that in [10] is started, which explores the state space of $G$ and adds one-edge paths labeled with the missing $RId$ to $ST$. The algorithm of test scenario generation ends after saving the stimuli and responses labeling the edges of $ST$.

Table 3 gives the final result of applying the algorithm to SIS ($(PId)$ denotes a critical path; the Path Identifier $(PId)$ is introduced to distinguish paths generated for the $CId$ time constraint). At the beginning ten critical paths were found. Four of them were rejected during the generation process and another one during the combination of the remaining paths. Because these paths did not cover requirement R2c, one extra edge was added to satisfy this requirement. Finally, a set of four test scenarios was produced. The experimentally calculated verification quality $Q_v$² [17] for the verification sequences from this set equals 1, which means that all errors, temporal as well as functional, randomly injected into the model were correctly detected.

² Verification quality ($Q_v$) is defined as $Q_v = 1 - C_0/C$, where $C_0$ is the number of optimistic verification conclusions (GO instead of NOGO) and $C$ is the total number of verifications [17].
Table 3. Reduced set of test scenarios for SIS

<table>
<thead>
<tr>
<th>No</th>
<th>Test scenarios</th>
<th>(PId)S_{s,t}(CId)</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>WaterPres &lt; LOW / SafetyInjection = On<br>WaterPres &gt;= LOW / SafetyInjection = Off</td>
<td>1S1,2(1), 2S2,1(2)</td>
</tr>
<tr>
<td>2</td>
<td>WaterPres &lt; LOW / SafetyInjection = On<br>Block = On / SafetyInjection = Off<br>Tref /<br>WaterPres &gt;= LOW /</td>
<td>1S1,2(1), 1S2,3(4), 3S4,1(2)</td>
</tr>
<tr>
<td>3</td>
<td>WaterPres &lt; LOW / SafetyInjection = On<br>Block = On / SafetyInjection = Off<br>Reset = On / SafetyInjection = On</td>
<td>1S1,2(1), 1S2,3(4), 1S3,2(3)</td>
</tr>
<tr>
<td>4</td>
<td>WaterPres &lt; LOW / SafetyInjection = On<br>Block = On / SafetyInjection = Off<br>Tref /</td>
<td>1S1,2(1), 1S2,3(4), 1S5,2(5)</td>
</tr>
</tbody>
</table>

The exhaustive set of test scenarios used for the experimental evaluation of the reduced one consists of eight scenarios. The total length of all verification sequences belonging to the exhaustive set equals 31 stimuli, whereas the total length of the verification sequences in the reduced set equals only 14 stimuli.

4. Conclusions

Currently, an embedded system designer may choose one of the following approaches to the verification of specification requirements: time-budget-based [14], formal [1-4,8] and simulation-based verification [10,11]. Some knowledge about the time budgets for the execution of tasks can help the designer keep the correctness of the system under control throughout the whole design flow; however, it does not guarantee the absence of design errors, and calculating truly accurate budgets is usually not easy.

Formal verification techniques require the specification requirements of the system to be described in the form of logical expressions (formulas). It is assumed that the PRES+ model [1,2] is generated from an implementation of a system and that it exactly reflects the time relations in the real system. Such a model may represent data and control flow, as well as concurrency, which is an advantage with respect to other approaches. However, starting the verification requires access to exact execution times for the tasks, and thus it can be conducted only very late in the design flow. Test scenario generation for simulation-based verification does not require any timing information and can be performed very early in the design process. Test scenarios can be reused for validation of the system (or its model) at multiple levels of design description and for multiple design alternatives.

In this paper a simulation-based method for the validation of embedded systems against specification requirements has been presented. Test scenarios obtained with the help of the method can be used for the verification of both functional and temporal requirements. The method is easy to use in practice, and the verification sequences are short. Automating the generation of test scenarios makes the method fast and flexible. Our solution is inspired by the method presented in [10], which addresses only the problem of functional validation.
We extended this method with the ability to verify temporal requirements. Distinguishing tasks gives us insight into the internal behavior of the system and helps in the appropriate selection of paths to be verified. Although the method should usually provide good validation results, some limitations must be kept in mind. The reductions performed to obtain the sets of paths and test scenarios assume the rejection of covered paths. In some situations (when a covered path represents fewer tasks than the covering one) this may lead to an undetected violation of a temporal constraint, because the covering path can compensate for the time. It must also be taken into consideration that if the execution time of each task is not constant, then the verification sequences are only approximate.
{"Source-Url": "http://journals.umcs.pl/ai/article/download/2971/2167", "len_cl100k_base": 4858, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 25528, "total-output-tokens": 5796, "length": "2e12", "weborganizer": {"__label__adult": 0.0005793571472167969, "__label__art_design": 0.0008358955383300781, "__label__crime_law": 0.0007190704345703125, "__label__education_jobs": 0.0010900497436523438, "__label__entertainment": 0.0001302957534790039, "__label__fashion_beauty": 0.00029087066650390625, "__label__finance_business": 0.00047898292541503906, "__label__food_dining": 0.0005412101745605469, "__label__games": 0.0011682510375976562, "__label__hardware": 0.00970458984375, "__label__health": 0.0009479522705078124, "__label__history": 0.0004227161407470703, "__label__home_hobbies": 0.00022530555725097656, "__label__industrial": 0.0015783309936523438, "__label__literature": 0.0003180503845214844, "__label__politics": 0.00047659873962402344, "__label__religion": 0.0008034706115722656, "__label__science_tech": 0.298828125, "__label__social_life": 0.00010925531387329102, "__label__software": 0.0068511962890625, "__label__software_dev": 0.6708984375, "__label__sports_fitness": 0.0005583763122558594, "__label__transportation": 0.0020503997802734375, "__label__travel": 0.0002808570861816406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23842, 0.01884]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23842, 0.5793]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23842, 0.90714]], "google_gemma-3-12b-it_contains_pii": [[0, 2080, false], [2080, 4947, null], [4947, 6780, null], [6780, 9423, null], [9423, 10726, null], [10726, 13021, null], [13021, 16129, null], [16129, 19487, null], [19487, 22411, null], [22411, 23842, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2080, true], [2080, 4947, null], [4947, 6780, null], [6780, 9423, null], [9423, 10726, null], [10726, 13021, null], [13021, 16129, null], [16129, 19487, null], [19487, 22411, null], [22411, 23842, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23842, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23842, null]], "pdf_page_numbers": [[0, 2080, 1], [2080, 4947, 2], [4947, 6780, 3], [6780, 9423, 4], [9423, 10726, 5], [10726, 13021, 6], [13021, 16129, 7], [16129, 19487, 8], [19487, 22411, 9], [22411, 23842, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23842, 0.225]]}
olmocr_science_pdfs
2024-12-11
2024-12-11
924f762ec8197896e6a9c70128cc39d19145354e
An essential goal of virtual machine introspection (VMI) is security policy enforcement in the presence of an untrustworthy OS. One obstacle to this goal is the difficulty of accurately extracting semantic meaning from the hypervisor's hardware-level view of a guest OS.

Virtual machine introspection techniques allow an external security monitor to observe software behavior inside a virtual machine (VM), including the guest OS. For example, we can use VMI to list the programs running inside a VM, comparable to ps on Unix systems or Windows Task Manager. Obtaining a process list from outside a VM is appealing from a security perspective because security administrators can identify illicit programs on a system even if the OS kernel is compromised. There are also nonsecurity benefits to listing processes outside the VM, such as standardization of administrative utilities across multiple guest OSs.

A simple VMI-based process list would identify the memory addresses of process descriptors and typecast them (in C parlance) to interpret their content. VMI developers must find kernel data structures, such as process descriptors, by searching publicly available symbols for the addresses of those structures. Any guest OS abstraction can be introspected, including open file descriptors, network sockets, and interprocess communication abstractions. For instance, storage system prototypes have used VMI to track whether disk writes are data or metadata, writing metadata changes to disk more aggressively than data. In this article, we focus on in-memory data structures and CPU register state.

VMI is appealing because it can move OS security monitoring out of the OS. Widely used OS kernels are generally very large and afford little fault or security isolation among components; they are written in languages such as C or C++ that offer little protection against exploitable programmer errors; and they have complex, hard-to-secure APIs. Thus, if any OS kernel component has an exploitable bug, all OS-level security measures are easily disabled. In our process listing example, a rootkit module could tamper with the kernel's mechanism for listing the set of running processes, often to hide other malware running on the system. Not only could an effective rootkit hide malware from a process listing utility or antivirus system inside the OS, it could also avoid detection and removal. A VMI monitor, in contrast, can view all guest OS memory and identify rootkits.

The fundamental challenge underlying VMI is how to reliably infer what's happening in the guest OS. In our simple example, the VMI monitor has direct access only to hardware-level state, such as CPU registers and memory contents, and must make inferences about high-level abstractions, such as process descriptors and open files. This mismatch is called the semantic gap. In this article, we summarize the major known techniques to bridge the semantic gap and discuss attacks on and defenses against these techniques.
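To make the process-listing example above concrete, here is a minimal Python sketch that "typecasts" raw guest memory into a process descriptor. The structure layout, offsets, field sizes and addresses are illustrative assumptions, not those of any real kernel.

```python
# Minimal sketch of the process-list example: interpret ("typecast") raw
# guest memory as a process descriptor. The structure layout, offsets and
# the descriptor addresses are illustrative assumptions, not a real kernel's.

import struct

# Assumed layout of a hypothetical process descriptor:
#   offset 0x00: 4-byte little-endian PID
#   offset 0x08: 16-byte NUL-padded command name
#   offset 0x20: 8-byte pointer to the next descriptor (0 = end of list)
PID_OFFSET, COMM_OFFSET, NEXT_OFFSET = 0x00, 0x08, 0x20

def walk_process_list(memory: bytes, head: int):
    """Yield (pid, name) pairs by following next pointers from `head`."""
    addr = head
    while addr:
        pid = struct.unpack_from("<I", memory, addr + PID_OFFSET)[0]
        raw_name = memory[addr + COMM_OFFSET : addr + COMM_OFFSET + 16]
        name = raw_name.split(b"\x00", 1)[0].decode(errors="replace")
        yield pid, name
        addr = struct.unpack_from("<Q", memory, addr + NEXT_OFFSET)[0]

if __name__ == "__main__":
    # Build a tiny fake "guest memory" image with two descriptors for the demo.
    mem = bytearray(0x200)
    struct.pack_into("<I", mem, 0x100 + PID_OFFSET, 1)
    mem[0x100 + COMM_OFFSET : 0x100 + COMM_OFFSET + 4] = b"init"
    struct.pack_into("<Q", mem, 0x100 + NEXT_OFFSET, 0x140)
    struct.pack_into("<I", mem, 0x140 + PID_OFFSET, 42)
    mem[0x140 + COMM_OFFSET : 0x140 + COMM_OFFSET + 2] = b"sh"
    for pid, name in walk_process_list(bytes(mem), 0x100):
        print(pid, name)
```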
**Assumptions**

Because bridging the semantic gap is such a challenging problem, most techniques introduce assumptions that limit the threat model. In our process listing example, the VMI monitor uses knowledge obtained out of band, such as debugging symbols and structure definitions, and must assume that the potentially compromised guest OS is using these symbols and structures as expected. In some cases, we can detect deviation from assumed OS behavior, but many assumptions are hard to check. For instance, it's difficult to verify that the binary name listed in a process descriptor is an accurate description of what's running in the process. Each fragile assumption is a potential vector for adversaries to confuse and evade the VMI monitor.

As a result, most VMI techniques assume the guest OS is benign: initially not malicious, but potentially compromised after boot. VMI designs currently assume generous limits on the degree to which a compromised guest OS can actively work to confuse a VMI monitor, and these limits aren't always stated explicitly. Nonetheless, VMI tools designed under this threat model can still have practical value, as OSs can be benign in practice.

**Basic VMI System Design**

The first consideration in VMI design is where to place the monitor, which directly influences how the monitor accesses guest memory and CPU state. Figure 1 illustrates options for VMI monitor placement, including in the hypervisor (with possible hardware assistance), in the guest OS, in a sibling VM, or outside the hypervisor (possible only with a Type 2, or hosted, hypervisor; not shown). To access hardware information, such as CPU register contents, an in-hypervisor monitor can directly read hypervisor-internal data structures. When the monitor is moved out of the hypervisor, the hypervisor must export this hardware information to the monitor via an additional interface.

Placing the introspection tool in a sibling VM is particularly popular for several reasons. First, a sibling VM can have a read-only or copy-on-write mapping of the guest's memory, creating a high-bandwidth channel for traversing data structures. Second, this design requires minimal changes to the hypervisor and protects it from bugs in the VMI monitor. Finally, the VMI monitor developer can work in a familiar environment, with a comparable OS kernel and helper functions.

**Figure 1. Monitor placement options in virtual machine introspection (VMI): in a sibling virtual machine (VM), the hypervisor, the guest OS, or the hardware. In-guest and hardware solutions require assistance from the hypervisor.**

**Trading Risk for Performance with Asynchrony**

The second question is when to introspect. In our example, suppose we want to know each time a process is created or destroyed. A synchronous mechanism requires one or more triggering events, such as changing the process descriptor list or scheduling a process. When a triggering event occurs, the hypervisor pauses the VM, and the VMI tool introspects the process descriptor list. In contrast, an asynchronous mechanism introspects memory concurrently with guest execution, generally at a configurable interval. A typical introspection pass that checks data structure invariants takes from milliseconds to minutes; pausing the VM for this length of time in a synchronous design is unacceptable. Asynchrony limits CPU overhead, generally to a few percent, by adjusting the frequency of checks.

Asynchrony's primary disadvantage is that it must handle transient OS states, whereas a carefully placed synchronous triggering event can avoid them. While executing inside a critical section, an OS might temporarily violate its own invariants; a correct OS will, of course, restore the invariants before exiting the critical section. If an introspection monitor searches memory during a kernel critical section, it might therefore observe benign violations of these invariants. Current approaches to this problem include looking for repeated violations of an invariant (leaving the system vulnerable to race conditions with attackers) or introspecting only when the OS can't be in critical sections, for example, by preempting each CPU while it is out of the guest kernel.
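The following is a minimal sketch of the asynchronous approach and the repeated-violation heuristic just described. The read_guest_memory and check_invariants placeholders stand in for a real hypervisor interface and real data structure checks; the interval and the "two consecutive scans" rule are illustrative assumptions.

```python
# Minimal sketch of asynchronous introspection with a repeated-violation
# heuristic. read_guest_memory() and check_invariants() are placeholders
# for a real hypervisor/VMI interface; the interval and the "two strikes"
# rule are illustrative assumptions.

import time

def read_guest_memory() -> bytes:
    """Placeholder for a hypervisor call that snapshots guest RAM."""
    raise NotImplementedError("provided by the hypervisor / VMI library")

def check_invariants(snapshot: bytes) -> list:
    """Placeholder: return the names of violated data structure invariants."""
    raise NotImplementedError("depends on the signatures being checked")

def asynchronous_monitor(interval_s: float = 5.0) -> None:
    """Scan periodically; report an invariant only if it fails on two
    consecutive scans, to tolerate benign, transient violations that occur
    inside kernel critical sections."""
    previously_violated = set()
    while True:
        snapshot = read_guest_memory()          # concurrent with guest execution
        violated = set(check_invariants(snapshot))
        for name in sorted(violated & previously_violated):
            print(f"ALERT: invariant {name} violated on consecutive scans")
        previously_violated = violated
        time.sleep(interval_s)                  # scan frequency bounds CPU overhead
```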
**Hardware Acceleration**

Several VMI prototypes have used hardware to accelerate or offload introspection. One major approach is snapshotting, wherein a device (say, on the PCI bus) takes a snapshot of RAM and offloads it to another machine for asynchronous introspection. More recent systems have used snooping on the system memory bus as a lightweight triggering mechanism. On commodity hardware, page protections are the primary technique for monitoring access to many memory locations; their coarse granularity leads to many needless checks triggered by memory accesses merely adjacent to the monitored structure. Unlike page protections, snooping systems can monitor writes at the finer granularity of cache lines, reducing needless checks. Initial snooping systems used customized hardware, although a recent design leveraged best-effort hardware transactional memory on commodity chips to implement snooping, at the cost of one dedicated core.4 Snooping can be synchronous or asynchronous.

**Prevention versus Detection**

Some introspection tools prevent security policy violations, such as execution of unauthorized code, whereas others detect a compromise only after the fact. Clearly, prevention is the more desirable goal, but it requires a mechanism to identify and interpose on the low-level operations that might violate a system security policy. Certain goals map naturally onto hardware mechanisms, such as page protections on kernel code. Other goals, such as upholding kernel data structure invariants, remain open questions.

All current prevention systems employ some form of memory protection to synchronously interpose on sensitive data writes. As a result, current VMI tools only detect violations of the more challenging properties, generally using periodic introspections. Periodic checks are a good fit for malware that leaves persistent modifications, but they can miss transient modifications. A straw-man approach to preventing violations of data structure invariants might trigger synchronous introspection on all writes to all security-relevant objects, which would be prohibitively expensive. Moreover, because some invariants span multiple writes, the straw-man approach would likely yield false negatives without deeper analysis of the code's behavior. Prevention techniques based on memory bus snooping might be more efficient, but this is an open research question.

**Bridges across the Semantic Gap**

To cross the semantic gap, a VMI system must extract high-level abstractions from the running guest system. We describe the three primary techniques to bridge the semantic gap (learning and reconstruction, code implanting, and process outgrafting) and their underlying trust assumptions in Table 1. One assumption common to all these techniques is that the executable kernel code doesn't change between introspection tool creation and guest OS monitoring. This requires a measure of kernel integrity protection, discussed in more detail in "SoK: Introspections on Trust and the Semantic Gap."3

**Learning and Reconstruction**

The first technique reconstructs data structures from memory contents. Data structure reconstruction can be divided into learning and searching phases.
The learning phase creates data structure signatures, using techniques including expert knowledge, source analysis, and dynamic analysis. A signature identifies and defines data structure instances. The search phase uses the signatures to identify and interpret data structures. A search can be either a linear scan of kernel memory or a traversal of data structure pointers, starting with public symbols. It is arguable which approach is more efficient: pointer traversal must cope with cyclic or invalid pointers, but it might need to traverse less total memory than a full scan. However, the linear scan of kernel memory is robust in the presence of “disconnected” structures or other attempts to obfuscate pointers. Both techniques can observe transient states when searching concurrently with OS operation. There are three major techniques for learning data structure signatures.

### Table 1. VMI techniques, their underlying trust assumptions, and monitor placement.

<table>
  <thead>
    <tr> <th>Technique</th> <th>Assumptions</th> <th>Monitor placement</th> </tr>
  </thead>
  <tbody>
    <tr> <td>Automated learning and reconstruction</td> <td>Benign copy of OS for training; OS will behave similarly during learning phase and monitoring; security-sensitive invariants can be automatically learned; and attacks will persist long enough for periodic scans</td> <td>Sibling VM, hypervisor, or hardware</td> </tr>
    <tr> <td>Code implanting (hypervisor protects monitor inside guest OS)</td> <td>Malicious guest schedules monitoring tool and reports information accurately</td> <td>Guest with hypervisor protection</td> </tr>
    <tr> <td>Process outgrafting (reuse monitoring tools from sibling virtual machine [VM] with shared kernel memory)</td> <td>Live, benign copy of OS behaves identically to monitored OS</td> <td>Sibling VM</td> </tr>
  </tbody>
</table>

Handcrafted signatures. Introspection and forensic analysis tools initially used handcrafted signatures, based on expert knowledge of the internal workings of an OS. Handcrafted signatures have an inherent limitation: each change to an OS kernel requires an expert to update the tools. For instance, a new version of the Linux kernel is released every two to three months; bug-fix updates can be as frequent as every few weeks. Each of these releases can change a data structure layout or invariant. Similarly, different compilers or versions of the same compiler can change the layout of a data structure in memory, frustrating handwritten tools. Automated techniques have become popular to keep pace with these release schedules and the variety of OS kernels and compilers.

Source code analysis. Automated reconstruction tools might rely on source code analysis or debugging information to extract data structure definitions and leverage source invariants to reduce false positives during the search phase. A basic application of source analysis identifies all kernel object types, and then traverses the graph of pointers, starting from global symbols. A key challenge in creating this graph of data structures is that not all pointers in a data structure point to valid data. For example, the Linux dcache uses deferred memory reclamation of a directory entry structure, called a dentry, to avoid synchronization with readers. When a dentry is on a to-be-freed list, it might point to memory that has already been freed and reallocated for another purpose; an implicit invariant is that these pointers will no longer be followed once the dentry is on this list.
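As a concrete illustration of the search phase, here is a minimal C sketch of a linear scan over a simulated memory snapshot using a value-invariant signature. The structure layout, field invariants, and snapshot contents are fabricated for illustration; real signatures are derived from the learning techniques just described and operate on actual guest kernel layouts.

```c
/*
 * Minimal sketch of the "search" phase using a value-invariant signature
 * over a raw memory snapshot (illustrative; offsets and invariants are
 * made up, not real Linux layouts). A signature of this kind classifies a
 * candidate address as a process descriptor if a handful of fields satisfy
 * invariants that hold for every live instance.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_task {          /* assumed layout learned in the training phase */
    uint32_t state;          /* invariant: one of a small set of values      */
    uint32_t pid;            /* invariant: 0 < pid < 4194304                 */
    char     comm[16];       /* invariant: NUL-terminated string             */
};

static int looks_like_task(const uint8_t *p) {
    struct fake_task t;
    memcpy(&t, p, sizeof t);
    if (t.state > 4) return 0;
    if (t.pid == 0 || t.pid >= 4194304) return 0;
    if (memchr(t.comm, '\0', sizeof t.comm) == NULL) return 0;
    return 1;
}

int main(void) {
    /* A stand-in for a snapshot of guest kernel memory. */
    static uint8_t snapshot[4096];
    struct fake_task demo = { .state = 1, .pid = 1234, .comm = "sshd" };
    memcpy(snapshot + 512, &demo, sizeof demo);

    for (size_t off = 0; off + sizeof(struct fake_task) <= sizeof snapshot; off += 8)
        if (looks_like_task(snapshot + off))
            printf("candidate process descriptor at offset %zu\n", off);
    return 0;
}
```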
Unfortunately, these implicit invariants can thwart simple pointer traversal. An alternative is to use the structure of this pointer graph as a signature. For instance, the pointers among task_struct structures in Linux form a different graph from pointers connecting inode structures. Dynamic learning. Rather than identifying code invariants from kernel source code, we can observe a running OS instance to learn data structure invariants. Analogous to supervised machine learning, the VMI tool trains on a trusted OS instance, and then classifies the data structures of potentially untrusted OS instances. During the training phase, these systems often control the stimuli by running programs that will manipulate a data structure of interest or incorporating debugging symbols to discern more quickly which memory regions might include a structure of interest. Some dynamic systems have also developed robust signatures, which are immune to malicious changes to live data structure instances. The primary utility of robust signatures is detecting when a rootkit attempts to hide persistent data by modifying data structures in ways that the kernel doesn’t expect. However, these attempts are fruitful only if they don’t crash the OS kernel. Thus, robust signatures leverage invariants an attacker can’t safely violate. Code Implanting A simpler approach to bridging the semantic gap is to inject code into the guest OS that reports semantic information back to the hypervisor. For instance, Syringe implants functions into the kernel, which can be called from the VM. A challenge to implanting code is ensuring that the implanted code isn’t tampered with and actually executes, and that the guest OS components it uses report correct information. Most of these implanting techniques ultimately rely on the guest kernel to faithfully represent information, such as the process list, to the injected code. Process Outgrafting To overcome the challenges with running a trusted process inside an untrusted VM, process outgrafting relocates a monitoring process from the monitored VM to a second, trusted VM. The trusted VM has some visibility into the monitored VM’s kernel memory, allowing VMI tools to access any kernel data structures without direct interference from an adversary in the monitored VM. The Virtual Machine Space Traveler generalizes this approach by running a trusted, clean copy of the OS with a roughly copy-on-write view of the monitored guest. Monitoring applications, such as ps, simply execute in a complete OS environment on the monitoring VM; each executed system call actually reads state from the monitored VM. This approach bridges the semantic gap by repurposing existing OS code. However, it has open problems, such as reconciling divergences in the guest kernel’s copy-on-write views. Attacks, Defense, and Trust Here, we explain the three major classes of attacks against VMI—kernel object hooking (KOH), dynamic kernel object manipulation (DKOM), and direct kernel structure manipulation (DKSM)—known defenses against those attacks, and how these attacks relate to trust placed in the guest OS. These issues are summarized in Table 2 and illustrated in Figure 2. Kernel Object Hooking KOH attack modifies function pointers (hooks) located in the kernel text or data sections, such as those used to implement an extensible virtual file system model. As Figures 2a and 2b illustrate, attackers might replace the iterate function call pointer to filter malware from monitoring software. 
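The following toy user-space C program illustrates the shape of the hook replacement just described: a function pointer stored in a writable operations structure is redirected to a wrapper that filters one entry out of a listing. All names and types are invented for illustration; a real rootkit performs the analogous swap on kernel structures such as the file operations used for /proc.

```c
/*
 * Toy illustration of a data-section hook (KOH): a function pointer inside a
 * writable operations structure is redirected to a wrapper that filters one
 * entry out of a directory listing.
 */
#include <stdio.h>
#include <string.h>

struct dir_ops {
    void (*iterate)(void (*emit)(const char *name));
};

static void real_iterate(void (*emit)(const char *name)) {
    emit("init"); emit("sshd"); emit("evil_proc");
}

/* The rootkit's replacement: call the original, but drop its own entry. */
static void (*saved_iterate)(void (*)(const char *));
static void (*current_emit)(const char *);
static void filtering_emit(const char *name) {
    if (strcmp(name, "evil_proc") != 0) current_emit(name);
}
static void hooked_iterate(void (*emit)(const char *name)) {
    current_emit = emit;
    saved_iterate(filtering_emit);
}

static void print_name(const char *name) { printf("  %s\n", name); }

int main(void) {
    struct dir_ops proc_ops = { .iterate = real_iterate };

    printf("before hook:\n");
    proc_ops.iterate(print_name);

    /* KOH: overwrite the hook in writable (data/heap) memory. */
    saved_iterate = proc_ops.iterate;
    proc_ops.iterate = hooked_iterate;

    printf("after hook:\n");
    proc_ops.iterate(print_name);
    return 0;
}
```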
Defenses against KOH attacks generally depend on whether the hook is located in the kernel’s text or data segment.

Text section hooks. The primary text section hooks are the system call table and interrupt descriptor table. For instance, attackers could interpose on all file open system calls simply by replacing the function pointer to the `sys_open()` function in the system call table. To prevent malware from overwriting these hooks, most kernels now place them in the read-only text segment. In a VMI system, the hypervisor can prevent malware from changing read-only page permissions.

Data section hooks. Kernel data section hooks are more difficult to protect than text section hooks, because they place function pointers in objects to facilitate extensibility. For instance, the Adore-ng rootkit replaces the directory listing function of the `/proc` directory (see Figure 2b), hiding itself from the output.10 The fundamental challenge is that, although these hooks generally do not change during the object’s lifetime, they are often located on the same page or even in the same cache line with fields that must change, thwarting defenses based on simple page protections. To defend against such attacks, function pointers must be protected from modification once initialized. Because of the high cost of moderating all writes to these data structures, most defenses either move the hooks to different locations that can be write protected11 or augment hooks in the kernel with checks against a whitelist of trusted functions.12

Trust. Preventing text section modification is a prerequisite for current VMI techniques. Defenses against KOH on data hooks effectively assume that kernel modules are benign, in order to provide meaningful protections without solving the significantly harder problem of kernel control flow integrity in the presence of untrusted modules.

**Dynamic Kernel Object Manipulation**

DKOM attacks modify the kernel heap through a loaded module or an application accessing `/dev/mem` or `/proc/kcore` on Linux.13 DKOM attacks modify only data values and thus are distinct from attacks that modify the control flow through function hooks (KOH). DKOM attacks invalidate latent assumptions in unmodified kernel code. A classic DKOM example is hiding a malicious process. The Linux kernel tracks processes in two separate data structures: a linked list for process listing and a tree for scheduling (see Figure 2c). A rootkit can hide malicious processes by taking the process out of the linked list but leaving the malicious process in the scheduler tree. Interestingly, loading a module is sufficient to alter the behavior of unrelated, unmodified kernel code. DKOM attacks are hard to prevent because they are a needle in a haystack of expected kernel heap writes. As a result, most practical defenses attempt to identify data structure invariants by hand or through static or dynamic analysis, and then detect data structure invariant violations asynchronously. Because attackers can create objects from any memory, not just the kernel heap allocator, data structure detection is a salient issue for detecting DKOM attacks. DKOM defenses introduce additional trust in the guest beyond a KOH defense and make several assumptions that attackers can violate. Most DKOM defenses work by identifying security-related data structure invariants. Because it is difficult for defenders to have confidence that all security-relevant invariants have been identified, this approach will likely be best effort and reactive in nature.
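A minimal sketch of the cross-view comparison suggested by the hidden-process example: the monitor derives the set of process IDs from two independent views and flags any PID visible in one but not the other. The PID values and the way the two views are obtained are fabricated here; a real tool would build both views by traversing the guest’s process list and scheduler structures in memory.

```c
/*
 * Toy cross-view check for the DKOM process-hiding example: compare the set
 * of PIDs reachable from the (simulated) process list with the set reachable
 * from the (simulated) scheduler structure. A PID visible to the scheduler
 * but missing from the list is flagged.
 */
#include <stdio.h>

static int contains(const int *set, int n, int pid) {
    for (int i = 0; i < n; i++)
        if (set[i] == pid) return 1;
    return 0;
}

int main(void) {
    /* View 1: PIDs on the all-tasks list (what a process lister would see). */
    int list_view[]  = { 1, 42, 313 };
    /* View 2: PIDs known to the scheduler; 666 was unlinked from the list. */
    int sched_view[] = { 1, 42, 313, 666 };

    int nl = sizeof list_view / sizeof list_view[0];
    int ns = sizeof sched_view / sizeof sched_view[0];

    for (int i = 0; i < ns; i++)
        if (!contains(list_view, nl, sched_view[i]))
            printf("possible hidden process: pid %d\n", sched_view[i]);
    return 0;
}
```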
Another problematic assumption is that all kernel data structures’ security-sensitive fields have invariants that can be checked easily in a single memory snapshot or scan. For instance, a VMI-based approach to detect network sockets could be thwarted by a rootkit that copies packets directly from the heap of an application to the outgoing network driver. In this example, the inconsistency between outgoing packets and open sockets spans a sequence of operations, which can’t be captured with one snapshot. DKOM defenses cement trust that the guest kernel is benign. These defenses train data structure classifiers on a clean kernel instance or derive the classifiers from source code, which is assumed to demonstrate only desirable behavior during the training phase. The interesting contrast between KOH and DKOM is that DKOM defenses can detect invalid data modification in the presence of an untrustworthy module, whereas common KOH defenses rely on module whitelisting. Thus, if a DKOM defense intends to tolerate untrusted modules, it must build on a KOH defense that’s robust to untrusted modules as well, which might require substantially stronger control flow integrity protection. Finally, these detection systems explicitly assume malware will leave persistent, detectable modifications and implicitly assume malware can’t win races with the detector. DKOM detectors rely on invariant violations being present in the view of memory they analyze—either a snapshot or a concurrent search. Because DKOM detectors run in increments of seconds, short-lived malware could evade detection. If a rootkit can reliably predict when a DKOM detector will view kernel memory, it can temporarily repair data structure invariants—racing with the detector. To our knowledge, no work has successfully exploited this race condition, but this issue deserves further investigation.

### Table 2. VMI attacks, defenses, and underlying trust assumptions.

<table>
  <thead>
    <tr> <th>Attack</th> <th>Defense</th> <th>Trust assumption</th> </tr>
  </thead>
  <tbody>
    <tr> <td>Kernel object hooking (KOH; code and hooks)</td> <td>Memory-protect hooks from text section modification, or whitelist loadable modules</td> <td>Pristine initial OS copy and administrator’s ability to discern trustworthy kernel modules</td> </tr>
    <tr> <td>Dynamic kernel object manipulation (heap)</td> <td>Identify data structure invariants, or detect violations by scanning memory snapshots</td> <td>Guest kernel exhibits only desirable behavior during training, or source is trustworthy; all security-relevant data structure invariants can be identified a priori; all malware will leave persistent modifications that violate an invariant; all invariants can be checked in a single search; and attackers can’t win races with the monitor</td> </tr>
    <tr> <td>Direct kernel structure manipulation</td> <td>Prevent bootstrapping through KOH or return-oriented programming</td> <td>OS is benign and behaves identically during training and classification</td> </tr>
  </tbody>
</table>

**Direct Kernel Structure Manipulation**

Direct kernel structure manipulation (DKSM) attacks change the interpretation of a data structure between training a VMI tool and classifying memory regions into data structures. Figure 2d illustrates a simple DKSM attack by a malicious kernel, which selectively swaps two data structure fields to hide the presence of malware from a VMI tool based on standard headers.
Because most VMI tools assume a benign kernel, successful DKSM attacks hinge on changing kernel control flow without changing kernel text. Two previously proposed bootstrapping mechanisms are KOH attacks and return-oriented programming—both of which have known countermeasures. DKSM is an oddity in the literature because it’s effectively precluded by a generous threat model. However, a realistic threat model might allow an adversarial OS to demonstrate different behavior during the data structure training and classification phases, akin to “split personality” malware that behaves differently when it detects that it is under analysis. Under a stronger threat model, a malicious OS could actively mislead VMI tools to violate a security policy.

**Figure 2. Overview of kernel process listing. (a) Pseudocode to list running process IDs by reading the /proc directory. (b) Virtual file system–level pseudocode for reading a directory, which calls low-level file system calls, such as proc_pid_read_dir, in Figure 2a. A kernel object hooking (KOH) attack replaces the iterate function pointer in the file handle for /proc. (c) A dynamic kernel object manipulation (DKOM) attack selectively violates data structure invariants, such as the assumption that all processes are on a list (for listing) and a tree (for scheduling). (d) Pseudocode example of a direct kernel structure manipulation (DKSM) attack, where process initialization changes the interpretation of process descriptor fields for a program name to confuse a tool searching for known malware.**

**The Semantic Gap Is Really Two Problems**

In the VMI literature, the semantic gap problem evolved to refer to two distinct issues: the largely solved engineering challenges of generating introspection tools, possibly without source code, and a malicious or compromised OS’s ability to exploit fragile assumptions underlying many introspection designs to evade a security measure. We suggest a clearer nomenclature for the two subproblems: the weak and strong semantic gap problems, respectively. The weak semantic gap is a solved engineering problem. The strong semantic gap problem is, to our knowledge, unsolved, and a solution would also prevent or detect DKSM attacks launched by malicious guest OSs. Our paper, “SoK: Introspections on Trust and the Semantic Gap,” provides a more complete treatment of these issues.3

**Toward an Untrusted OS**

Some techniques from related research might help bridge the strong semantic gap.

**Paraverification**

Many VMI systems have an implicit design goal of working with an unmodified OS, which leads designers to trust the guest OS in order to simplify the problem. A useful stepping-stone might be to modify the OS to aid in its own introspection. InkTag introduced the idea of paraverification, in which a guest OS provides a hypervisor with evidence that it is servicing an application’s request correctly. The hypervisor can easily check the evidence offered by the guest OS without trusting the guest OS. For instance, if a trusted application requests a memory mapping of a file, the application would also report the request to the hypervisor. The OS then submits evidence to the hypervisor that changes to hardware-level page tables are an appropriate response to the memory-mapping request, which the hypervisor then verifies. Although InkTag’s goals differ from VMI’s, the idea of forcing an untrusted OS to aid in its own introspection could be fruitful if the techniques were simple enough to adopt.
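A small sketch of the paraverification idea follows, under the assumption (ours, for illustration) that the check can be reduced to comparing a declared memory-mapping request against a proposed page-table update. The structures and the `verify_update()` rule below are invented; InkTag’s real interface and checks are considerably richer.

```c
/*
 * Sketch of paraverification: the untrusted OS declares what it is about to
 * do (map one page of a file for an application), and the hypervisor later
 * checks that the page-table update it is asked to install matches that
 * declaration. All structures and checks are invented for illustration.
 */
#include <stdint.h>
#include <stdio.h>

struct declared_mmap {           /* filled in from the application's request */
    uint64_t app_id;
    uint64_t vaddr;               /* virtual page the app asked to map        */
    uint64_t file_frame;          /* physical frame holding the file's data   */
    int      writable;
};

struct pte_update {               /* what the guest OS asks the hypervisor to install */
    uint64_t app_id;
    uint64_t vaddr;
    uint64_t frame;
    int      writable;
};

static int verify_update(const struct declared_mmap *d, const struct pte_update *u) {
    if (u->app_id != d->app_id)      return 0;
    if (u->vaddr  != d->vaddr)       return 0;
    if (u->frame  != d->file_frame)  return 0;   /* must map the declared data  */
    if (u->writable && !d->writable) return 0;   /* no silent privilege upgrade */
    return 1;
}

int main(void) {
    struct declared_mmap req = { .app_id = 7, .vaddr = 0x400000, .file_frame = 0x1234, .writable = 0 };
    struct pte_update ok     = { .app_id = 7, .vaddr = 0x400000, .frame = 0x1234, .writable = 0 };
    struct pte_update bad    = { .app_id = 7, .vaddr = 0x400000, .frame = 0x9999, .writable = 1 };

    printf("honest update accepted:    %s\n", verify_update(&req, &ok)  ? "yes" : "no");
    printf("malicious update accepted: %s\n", verify_update(&req, &bad) ? "yes" : "no");
    return 0;
}
```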
**Mutual Distrust in Hardware**

Intel has recently taken an interesting direction, developing a mutual distrust model for hardware memory protection called Software Guard Extensions (SGX). SGX lets an OS or hypervisor manage an application’s virtual-to-physical memory mappings, but the lower-level software can’t access memory contents. In the context of introspection or the strong semantic gap, hardware like SGX could be useful for creating a finer-grained protection domain for code implanted in the guest OS.

**Reconstruction from Untrusted Sources**

Current tools that automatically learn data structure signatures assume the OS will behave similarly during training and classification. Among the assumptions in current VMI tools, this one has the best chance of being incrementally removed. For example, one approach might train VMI classifiers on the live OS and continue incremental training as the guest OS runs. Similarly, continuous monitoring might detect inconsistencies between the VMI’s training and classification stages.

Virtual machine introspection is a relatively mature research topic that has made substantial advances in the 12 years since the semantic gap problem was posed. However, efforts in this space should focus on removing trust from the guest OS to strengthen overall system security.

**Acknowledgments**

We thank Virgil Gligor, Bill Jannen, and the anonymous reviewers for their insightful comments on earlier drafts. This research was supported in part by NSF grants CNS-1149229, CNS-1161541, CNS-1228839, CNS-1318572, CNS-1223239, and CCF-0937833; the US ARMY award W911NF-13-1-0142; the Office of the Vice President for Research at Stony Brook University; and gifts from Northrop Grumman Corporation, Parc/Xerox, Microsoft Research, and CA.

**References**

Bhushan Jain is a PhD candidate in computer science at Stony Brook University. His research interests include virtualization security, memory isolation, and system security. Jain received a B.Tech in computer engineering from College of Engineering Pune. Contact him at bpjain@cs.stonybrook.edu.

Mirza Basim Baig is an MS candidate in computer science at Stony Brook University. His research interests include data mining, machine learning, and graph theory. Basim Baig received a BS in computer science from Lahore University of Management Sciences, School of Science and Engineering (LUMS-SSE). Contact him at mbaig@cs.stonybrook.edu.

Dongli Zhang is a PhD candidate in computer science at Stony Brook University. His research interests include system security, virtualization, and cloud computing. Zhang received an MS in computer science from Stony Brook University. Contact him at dozhang@cs.stonybrook.edu.

Donald E. Porter is an assistant professor of computer science at Stony Brook University. His research interests include system security, operating systems, and virtualization. Porter received a PhD in computer science from The University of Texas at Austin. Contact him at porter@cs.stonybrook.edu.

Radu Sion is an associate professor of computer science at Stony Brook University. His main interests lie in systems, cybersecurity, and efficient and large-scale computing. Sion received a PhD in computer science from Purdue University. Contact him at sion@cs.stonybrook.edu.
Additional Notes and Derivations

Physical Constraints on Serial Computers (Page 4)

The speed of light is \( c = 3 \times 10^8 \text{ m/s} \) and the code given must execute 3 (one for each component of \( x, y, \) and \( z \)) trillion memory transfers each second. Thus the transfer flux out of memory is \( n = 3 \times 10^{12} \text{ transfers/s} \). From elementary physics

\[ \text{distance} = \text{rate} \times \text{time} \]

Thus if \( r \) is the average distance from a single memory location to the CPU, then in one second on a serial machine we travel a total distance of \( r \times n \). Since this total distance can be at most the distance light travels in that second, the distance relation above gives the following expression for \( r \)

\[ r = \frac{tc}{n} = \frac{1 \times 3 \times 10^8}{3 \times 10^{12}} = 10^{-4} \text{ m} \]

As suggested in the book, placing our CPU in the center of a square grid with side length \( s \), the average distance to each memory location is \( r = s/2 \). From the above this gives a linear dimension of \( s = 2 \times 10^{-4} \text{ m} \). Since we are assuming that all three trillion memory modules are inside this square, the number of memory modules along any given linear dimension would be

\[ n_m = \sqrt{3 \times 10^{12}} = \sqrt{3} \times 10^6 \]

Based on the previous length estimate of a side of \( s = 2 \times 10^{-4} \text{ m} \) we see that the physical length of each memory module \( m_t \) must satisfy

\[ n_m m_t = s \quad \text{or} \quad \sqrt{3} \times 10^6 m_t = 2 \times 10^{-4} \]

Solving for \( m_t \) we obtain \( m_t = \frac{2}{\sqrt{3}} \times 10^{-10} \text{ m} \approx 1 \text{ angstrom} \). Clearly an impossible situation.

* wax@alum.mit.edu

Problem Solutions

Chapter 1 (Introduction)

Chapter 1 had no problems.

Chapter 2 (An Overview of Parallel Computing)

Exercise 1

Part (a) In store and forward routing each node must store the entire message before it gets passed on to the next node in the transmission. Thus assuming that one packet can be transmitted per timestep, it will require $O(n)$ timesteps to transmit this message to each node. For $k$ intermediate nodes we have $k + 1$ “edges” in our connection graph, giving a total transmission time of $O((k + 1)n) = O(nk)$.

Part (b) Using cut-through routing each intermediate node can send any packet of a message to the next host as it is received. Thus assuming host A is sending to host B, the first timestep will have one packet from A to the first intermediate node between A and B. In two timesteps packets will have propagated to the first two intermediate nodes. Thus in $k + 1$ timesteps we will have packets arriving at B. If $k < n$ we require $O(n)$ timesteps to transmit our message. If $k > n$ we require $O(k)$ timesteps to transmit our message.

Exercise 2

Shared memory programming has three basic primitives:

- Variables can be accessed by all processors
- There exists a means to prevent improper access of shared resources (via binary semaphores or some other means)
- There exists a means for synchronizing the processes (via barriers).

To solve this problem consider the odd-even sorting algorithm, which can sort in $O(n)$ steps; see [1] for more details. In this sorting algorithm, during odd-numbered steps the odd-numbered processors compare their number with that of their next higher numbered even processor and exchange if the two numbers are out of sequence. During even-numbered steps the even-numbered processors compare their number with that of their next higher odd-numbered processor.
A pseudo-code implementation (with care to avoid deadlock in critical regions) is given by the following

**Odd-Even Sort**\((a,n)\)

1. for \(i \leftarrow 1\) to \(n\)
2. do
3. if \(i \mod 2 = 1\)
4. then
5. ▷ \(i\) is an odd-timestep
6. if \(p \mod 2 = 1\)
7. then
8. ▷ processor \(p\) is an odd-processor
9. Lock array elements \(a[p]\) and \(a[p+1]\)
10. Sort elements \(a[p]\) and \(a[p+1]\)
11. Insert back into global array \(a\) in sorted order
12. Unlock array elements \(a[p]\) and \(a[p+1]\)
13. else
14. ▷ Do nothing
15.
16.
17.
18. else
19. ▷ \(i\) is an even-timestep
20. if \(p \mod 2 = 0\)
21. then
22. ▷ processor \(p\) is an even-processor
23. Lock array elements \(a[p]\) and \(a[p+1]\)
24. Sort elements \(a[p]\) and \(a[p+1]\)
25. Insert back into global array \(a\) in sorted order
26. Unlock array elements \(a[p]\) and \(a[p+1]\)
27. else
28. ▷ Do nothing

**Exercise 3**

If we ran the given pseudocode on a large number of processors, depending on the scheduling of the processes some process may never execute the critical section/region. In other words some subset of the processors may lock the binary semaphore \(s\) for all time, not allowing access to the complementary subset of processors. The effect is to exclude some of the processors from the critical section.

Exercise 4

Part (a): WWX: Finish me!!!

Chapter 3 (Greetings!)

Exercise 1

See the code prob_3.6.1.c. When this code is run on only one processor no output is produced.

Exercise 2

See the code prob_3.6.2.c. When using wildcards in the receives I didn’t get any noticeable difference in output, which is expected since the code is issuing its MPI_Recv calls in a particular order and thus blocks until it receives each message before printing.

Exercise 3

Please see the code prob_3.6.3.c. I experimented with the following modifications to the calls to MPI_Send and MPI_Recv.

- Change the destination to 1 in all sending processes in order to test incorrectly matched MPI_Send and MPI_Recv calls. This results in the program hanging forever since MPI_Recv blocks and is never able to complete.
- Execute MPI_Send with an incorrect string length by removing the required +1 from the MPI_Send call. The result of this modification was that the program still worked, but the executed printf call will print characters until it encounters the first terminating null located randomly in memory.
- Specifying an incorrect MPI data type in the MPI_Send call only. For instance specifying INT rather than CHAR causes the code to crash.
- Specifying an incorrect receive size of 10 rather than the correct value of 100 resulted in the code crashing.
- Specifying an incorrect MPI data type in the MPI_Recv call. For instance specifying INT rather than CHAR resulted in a program that seemed to execute correctly.
- Specifying an incorrect tag field in the MPI_Recv call results in the program hanging since it waits forever for message passing to complete.

Exercise 4

See the code prob_3.6.4.c. On my system the process \( p - 1 \) could print to the screen. Printing on any processor other than 0 is not required by an MPI implementation however.

Programming Assignment 1

See the code prob_3.7.1.c. Calculating who to send a message to is simple and is given by

```c
dest = (my_rank + 1) % p;
```

as suggested in the text. Calculating who to receive a message from is done with code like the following

```c
recv = ( (my_rank == 0) ? (p-1) : (my_rank - 1) );
```
where we have been careful to specify that process 0 receives from the last process, \(p-1\). Each process must send its message first and then receive. In the other order each process hangs waiting for messages that never arrive. In the source code coming from this problem we see coded another message sending strategy where the even processors send first and then receive, while the odd processors receive first and then send. This message scheduling works as well. When run on one processor, processor 0 sends and receives a message from itself.

Chapter 4 (An Application: Numerical Integration)

Exercise 1

See the code prob_4.6.1.c. When run on one processor the code gives the correct result of 1/3, since in that case the local integration is equivalent to the global integration.

Exercise 2

See the code prob_4.6.2.c. The routine \texttt{Get\_data} should be called before each processor computes its local integration domain. No modifications besides including \texttt{Get\_data} are required to implement this program.

Programming Assignment 1

See the code `prob_4.7.1.c`. The most complicated part of this problem is the specification of the set of functions from which a user can choose to integrate. This was done by specifying an array of function pointers (a `fn_array`) with the command

```c
float (*fn_array[])(float) = {f1,f2,f3};
```

The user then only has to input an integer specifying the function to be integrated. Much more complicated menuing systems could be considered.

Programming Assignment 2

Part (a): See the code `prob_4.7.2.a.c`, where a serial version of Simpson’s rule is implemented.

Part (b): See the code `prob_4.7.2.b.c`, where a parallel version of Simpson’s rule is implemented.

Chapter 5 (Collective Communication)

Exercise 1

WWX: Finish!!!

Exercise 2

When this section of code is executed each processor executes its corresponding block of commands. As such, each processor begins by executing an `MPI_Bcast` statement. Since the root argument for all of these `MPI_Bcast` calls is 0, all processors update their variable argument based on that which is sent from processor 0. As coded, processor 0 is “sending” the variable `x`, processor 1 is “receiving” the variable `x`, and processor 2 is “receiving” the variable `z`. After the first `MPI_Bcast` call we have

- Process 0 with no change to the variable `x` giving `x = 0`
- Process 1 with an updated variable `x` giving `x = 0`
- Process 2 with an updated variable `z` giving `z = 0`

After each process has finished its first call to `MPI_Bcast`, processes 0 and 2 must execute an `MPI_Send` and an `MPI_Recv` respectively, while processor 1 must execute a collective communication `MPI_Bcast`. Since the `MPI_Bcast` acts as a synchronization point in the subsequent processing, process 1 must wait until the other processes call `MPI_Bcast` themselves. Thus the `MPI_Send` and `MPI_Recv` on processes 0 and 2 cause the variable `x` on process 2 to become the value of the variable `y` on process 0, or the numerical value of 1. After this exchange, each processor calls (or has called in the case of processor 1) an `MPI_Bcast` routine. We can analyze this exchange in the same way as for the first example of the global communication primitive `MPI_Bcast`. Since the root argument for all of these `MPI_Bcast` calls is 1, all processors update their variable argument based on that which is sent from processor 1.
As coded, processor 1 is “sending” the variable `y`, processor 0 is “receiving” the variable `z`, and processor 2 is “receiving” the variable `y`. Thus after the completion of this `MPI_Bcast` call we have caused the following updates: - Process 0 has `z` updated with the value of `y` in process 1 giving `z = 4` - Process 1 has the variable `y` unchanged giving `y = 4` - Process 2 has `y` updated with the value of `y` in process 1 giving `y = 4` Keeping track of all the variable values after all communication calls we have the state of the system of <table> <thead> <tr> <th>Process 0</th> <th>Process 1</th> <th>Process 2</th> </tr> </thead> <tbody> <tr> <td><code>x = 0</code></td> <td><code>x = 0</code></td> <td><code>x = 1</code></td> </tr> <tr> <td><code>y = 1</code></td> <td><code>y = 4</code></td> <td><code>y = 4</code></td> </tr> <tr> <td><code>z = 4</code></td> <td><code>z = 5</code></td> <td><code>z = 0</code></td> </tr> </tbody> </table> **Exercise 4** On process 0, the sequential `MPI_Send` calls access the following data structures/elements in this order: - `x`, second row of matrix `B`, `x`, fourth column of matrix `B`, first column of matrix `B` Similarly on process 1, the sequential `MPI_Recv` calls access the following data structures/elements on process 1 in this order: - `x`, second row of matrix `B`, `x`, second column of matrix `B`, first column of matrix `B`. Exercise 1 Part (a): See the code prob_7.11.1.a.c. Rather than create a communicator associated with the processors in the first column of a virtual grid of processors the program prob_7.11.1.a.c creates a communicator associated with an input row index (zero based) of the virtual grid of processors. The modification to perform the requested column based communicator (using MPI_Comm_group, MPI_Group_incl, etc.) is straightforward. Part (b): See the code prob_7.11.1.b.c. There we use MPI_Comm_split to create n communicators, broadcast a value of 1 along each column, and then use MPI_Reduce to compute the global sum. Part (c): I would think that the processors would be identical since in both MPI calls our implicit assumption is that the global processors 0, n, 2n, 3n, ... would be associated with the first column. Exercise 2 Part (a): In a call to MPI_Comm_create we would have to first construct a unique integer representing the new communicators context. This would entail looking at each process in the group to be created and determining an integer that is unique among all of the existing contexts already held by the processors included in this new communicator. This would entail global communication among processors. Part (b): In a call to MPI_Comm_split we would use the input argument split_key to construct the associated communicator array. In addition, to the implementation of MPI_Comm_create our implementation of MPI_Comm_split would then have to find a unique integer to represent the communicator’s context. This could be performed as above. Exercise 3 In the modified basic algorithm given in the book we distributed our matrices in a block checkerboard fashion along the processors. In this problem we are to modify this basic algorithm (where each processor stores only a single element from each matrix) to the situation where each processor will store a block of rows from each matrix, specifically if we assume that n (the size of our square matrix) is divisible by p (the number of processors) then each processor will store $n/p$ rows. 
A version of Fox’s algorithm can be adapted to this data distribution. This modified version of Fox’s algorithm requires storage $O(2\frac{n}{p}n)$ for each processor’s share of the global matrices’ rows. The 2 is for storage for both matrix $A$ and $B$. In addition, after a gather statement each processor will require an additional amount of storage given by $O(\frac{n}{p}n)$ to store the newly obtained columns. Finally, after multiplication each processor will have to store the $C$ matrix, requiring an additional $O(\frac{n}{p}n)$ storage. In total this modified version of Fox’s algorithm requires $O(4\frac{n^2}{p})$ storage. In a similar way, the block checkerboard basic algorithm requires $O(4\frac{n^2}{p^2})$ storage. From these two results we see a trade off between memory usage and required message passing. The block checkerboard algorithm requires less memory but at the cost of more message passing (the broadcast of a specific matrix at each timestep), while the modified Fox’s algorithm requires more storage but fewer actual sent messages (since messages are only needed in performing the column gather).

**Exercise 4**

The program discussed in this exercise would use a call like the following to construct the original Cartesian coordinate grid

```c
dim_sizes[0] = l;
dim_sizes[1] = m;
dim_sizes[2] = n;
wrap_around[0] = 0;
wrap_around[1] = 0;
wrap_around[2] = 0;
MPI_Cart_create(MPI_COMM_WORLD, 3, dim_sizes, wrap_around, 0, &grid_comm);
```

In the above we have not considered a periodic grid, and we have not allowed the underlying MPI implementation to reorder the global processors when creating this communicator.

**Part (a):** To create the desired communicator one would execute something like

```c
free_coords[0] = 1;
free_coords[1] = 0;
free_coords[2] = 1;
MPI_Cart_sub(grid_comm, free_coords, &part_a_comm);
```

**Part (b):** To create the desired communicator one would execute something like

```c
free_coords[0] = 0;
free_coords[1] = 0;
free_coords[2] = 1;
MPI_Cart_sub(grid_comm, free_coords, &part_b_comm);
```

**Part (c):** To create the desired communicator one would execute something like

```c
free_coords[0] = 0;
free_coords[1] = 0;
free_coords[2] = 0;
MPI_Cart_sub(grid_comm, free_coords, &part_c_comm);
```

I would not assume that the communicator defined on process 0 is the same as the communicator defined in part c (above).

**Exercise 5**

As suggested in the text we can implement a safe circular shift of data using only MPI_Send and MPI_Recv (one that will work even if there is no buffering) if we take care to issue our sends and receives in a certain manner. In case one is working on a system which does not provide buffering, we can have the even-ranked processors send first and then receive, while the odd-ranked processors receive first and then send.
A piece of code that demonstrates this is given below (where we are sending a string message from processor to processor)

```c
if( my_rank % 2 == 0 ){
  /* Use strlen+1 so that \0 gets transmitted */
  printf("Process %d sending: %s\n", my_rank, messageS);
  MPI_Send(messageS, strlen(messageS)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
  MPI_Recv(messageR, 100, MPI_CHAR, recv, tag, MPI_COMM_WORLD, &status);
  printf("Process %d received: %s\n", my_rank, messageR);
}else{
  MPI_Recv(messageR, 100, MPI_CHAR, recv, tag, MPI_COMM_WORLD, &status);
  printf("Process %d received: %s\n", my_rank, messageR);
  /* Use strlen+1 so that \0 gets transmitted */
  printf("Process %d sending: %s\n", my_rank, messageS);
  MPI_Send(messageS, strlen(messageS)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}
```

In general, I believe that many MPI systems do provide some amount of buffering so that the circular shift discussed here can be coded and will work if the \texttt{MPI\_Send}'s are issued \textit{first}. In fact \texttt{prob\_3.7.1.c} was first implemented in that manner and later recoded to be made safe.

\section*{Programming Assignment 1}

\section*{Programming Assignment 2}

\section*{Chapter 10 (Design and Coding of Parallel Programs)}

\section*{Exercise 2}

I would have each processor seed the random number generator with its process rank. This way the random numbers would be guaranteed to at least be different in each processor. Code like the following should work

\begin{verbatim}
int p;
MPI_Comm_rank(MPI_COMM_WORLD, &p);
srand48( (long int) p );
\end{verbatim}

where the cast is required by the input arguments of \texttt{srand48}.

\section*{Programming Assignment 3}

Because the MPI standard prohibits argument aliasing, if a program calls \texttt{MPI\_Alltoall} correctly it will need to have two matrix variables declared: one to represent the original matrix and the other to represent its transpose. In the implementation of this problem provided here we loop over the rows of \texttt{A} calling \texttt{MPI\_Alltoall} and receiving its results in a temporary variable. We then copy the data into the additional storage representing the transposed matrix. We could not copy this transpose data back into the original matrix \texttt{A} or we would overwrite needed elements on the rows yet to be seen.

In many matrix algorithms (with block distribution of the rows among the processors), when \texttt{p} does not evenly divide \texttt{n} the following simple modification permits the use of non-evenly-divisible matrix sizes by placing all “spill over” rows in processor \texttt{p-1} (the last processor). This can be accomplished with the following code snippet

\begin{verbatim}
int n_bar, n_rem, n_local;
n_bar   = n / p;  /* using C's integer (truncated) division */
n_rem   = n % p;  /* computes the remainder                 */
n_local = ( my_rank != p-1 ? n_bar : n_bar + n_rem );
\end{verbatim}

Then the computation in each processor proceeds by looping over the \texttt{n\_local} rows as normal.

\textbf{Chapter 11 (Performance)}

\textbf{Exercise 1}

Some searching algorithms can have superlinear speedup. This is because, in general, the more processors one has, the more of a given data structure can be searched. This in turn can produce very quick query times, which can (in some cases) result in superlinear speedup.

\textbf{References}
A. Additional implementation details

**Image and Word Features.** Following [1], we use a Faster R-CNN network [10] with ResNet-101 [5] as a backbone, trained on the Visual Genome dataset [8], and we extract a 2048-dimensional feature vector for each object. We use Byte Pair Encoding (BPE) [12], which effectively incorporates sub-word information and is beneficial for dealing with out-of-vocabulary words. We employ learnable positional encoding and initialize token embeddings from the pretrained weights of GPT-2.

**Architecture and Hyperparameters.** We have 3 layers in the encoder and 12 layers in the decoder, with 12 heads in each layer. The hidden size $D$ in each layer is 768. We load the GPT-2 (small) pretrained weights, which comprise 117M parameters, into the decoder. We use a learning rate of $1 \times 10^{-4}$ under the XE loss and $1 \times 10^{-5}$ during reinforcement learning. We train the models with the AdamW optimizer [9] and a batch size of 25. The beam size is equal to 5. The threshold $\tau$ is tuned on the validation set for different training data.

**Training Details.** We train all the models in two steps. We first train the models with cross-entropy (XE) loss and then finetune them using reinforcement learning. The cross-entropy loss $L_{XE}$ is the traditional autoregressive classification loss

$$L_{XE} = - \sum_{t=1}^{T} \log (p(w_{t} | w_{1:t-1}))$$ (1)

where $w_{1:T}$ represents the target ground truth sequence. For reinforcement learning, we employ a variant of Self-Critical Sequence Training [11]. Following [3], we sample $L$ sentences, $\hat{w}^1_{1:T}, \ldots, \hat{w}^L_{1:T}$, with beam search and use the mean reward from the $L$ sentences as the baseline $b$. The gradient is

$$\nabla_{\theta} L_{RL}(\theta) = - \frac{1}{L} \sum_{i=1}^{L} \left( r(\hat{w}^i_{1:T}) - b \right) \nabla_{\theta} \log p(\hat{w}^i_{1:T})$$ (2)

where $r(\cdot)$ represents the CIDEr-D reward.

<table>
  <thead>
    <tr> <th>Models</th> <th>B-1</th> <th>B-2</th> <th>B-3</th> <th>B-4</th> </tr>
  </thead>
  <tbody>
    <tr> <td>Direct Translation</td> <td>26.5</td> <td>11.6</td> <td>4.5</td> <td>1.9</td> </tr>
    <tr> <td>ElJundi et al.</td> <td>33.2</td> <td>19.3</td> <td>10.5</td> <td>5.7</td> </tr>
    <tr> <td>VisualGPT</td> <td>52.6</td> <td>28.5</td> <td>20.8</td> <td>11.2</td> </tr>
  </tbody>
</table>

Table 1. Arabic Image Captioning. Direct translation means directly translating the English captions into Arabic.

B. Image Captioning in Low-resource Languages Evaluation

Image captioning in low-resource languages suffers from not having sufficient image–caption pairs to train a good-quality model. Currently, only a few major languages, such as English and Chinese, are well studied in the image captioning domain, while many low-resource languages have not been covered. Developing good multi-modal technologies for these low-resource languages opens considerable economic opportunities and would benefit a large number of people around the world. In this work, we evaluate our model on the Arabic image captioning task, which is much less covered in the literature than English. There are very few good-quality image–caption pairs since the annotations are expensive to acquire. One possible solution is to translate English captions into Arabic, but this requires a good translation system, and the translated captions need to remain well grounded in the image contents, which is challenging for modern translation systems, especially for low-resource languages.
We further evaluate our model on ElJundi et al.’s Arabic image captioning dataset [4], which is built on Flickr8K [6] and contains 8K images. We follow their evaluation setting and train our VisualGPT on it. To adapt VisualGPT to the Arabic vocabulary, we instead use an Arabic pre-trained GPT-2 [2]. The experimental results are shown in Table 1. They show that our VisualGPT can easily outperform the baseline models.

C. Train VisualGPT with more COCO and Conceptual Caption Datasets

Figure 1 shows other results obtained by training networks on the 5%, 10%, 20%, 50% and 100% (82,783 images) MS COCO data. Figure 2 shows the performance with the data scaling up to 2.5% of Conceptual Captions (82,958 images), a scale similar to the whole COCO dataset. For MS COCO, VisualGPT outperforms the other baseline models when we sample $\leq 20\%$ of the training data. For Conceptual Captions, VisualGPT consistently outperforms all the baselines when we sample $\leq 2.5\%$ of the training images. These experiments highlight our model’s effectiveness in low-data regimes. On the other hand, we should also note that $M^2$ Transformer surpasses VisualGPT when trained on 50% and 100% of the COCO data. However, when we train with a similar number of Conceptual Captions images, VisualGPT consistently outperforms all the baselines. This leads us to consider why VisualGPT behaves differently on these two datasets. The difference between these two datasets is that Conceptual Captions contains more diverse vocabulary and image contents. In contrast, COCO captions only cover 80 common image objects. Therefore, each word appears much more frequently in COCO than in Conceptual Captions, and COCO’s vocabulary diversity is much lower. We hypothesize that when each word is covered by only a few captions, caption generation benefits greatly from GPT’s inherent linguistic knowledge, which helps the model adapt quickly to the new domain. When there is a lot of in-domain data, however, current image-captioning models can already generalize well, and that data may partially contradict GPT’s original knowledge.

D. Attention over Different types of words

We use the spaCy parser to detect the part of speech of words in captions and calculate the mean value of the visual attention score. The result is presented in Fig. 3. We found that parts of speech that tend to refer to visual content, such as nouns (0.71), verbs (0.71), and adjectives (0.72), have high visual attention scores, whereas more purely linguistic parts of speech such as pronouns (0.53), punctuation (0.58), and determiners (0.61) receive lower attention.

E. More Qualitative Examples

In Figure 4, we provide more examples of visual attention. Blue indicates high visual scores and red indicates low visual scores. We can observe that VisualGPT assigns higher scores to words like “steam engine”, “elephants”, “horse”, “lush” and “cabinets”, and it assigns low visual scores to determiners and prepositions like “to” and “at”. We also show some examples of captions generated by our VisualGPT and several strong baseline models, including Transformer (3 layers) [13], $M^2$ Transformer (3 layers) [3] and AoA Transformer [7], in Tables 2, 3, and 4. Overall, we can observe that our VisualGPT is able to describe the image content more accurately than the baseline models.
<table> <thead> <tr> <th>Image</th> <th>Generated Captions</th> <th>Ground Truth</th> </tr> </thead> </table> | ![Image](image1.png) | **Transformer**: a woman riding skis on skis **M² Transformer**: a couple of skiers are standing near the snow **AoA Transformer**: a man with skis in the snow **VisualGPT (ours)**: a group of people walk on a snowy mountain | GT1: the people are walking through snow in a wooded area GT2: two people wearing skis traveling through the snow GT3: a man is walking down a path covered in snow GT4: a couple is skiing through the snowy woods GT5: a couple of people that are in a snowy field | | ![Image](image2.png) | **Transformer**: a street that has some street in it **M² Transformer**: a traffic light over a street light under a traffic light **AoA Transformer**: a street with people on a city street **VisualGPT (ours)**: a street with tall signs and traffic signs | GT1: a yellow traffic light above a street next to houses GT2: a street scene of an intersection with a street light GT3: a stop light hanging over an intersection in a residential area GT4: a traffic signal at an intersection is suspended on wire GT5: a street intersection with a traffic light over it | | ![Image](image3.png) | **Transformer**: some pizza are sitting on a plate **M² Transformer**: a plate with food and a knife on it **AoA Transformer**: a plate of pizza on a table **VisualGPT (ours)**: a plate of bread are served on a table | GT1: a batch of bread slices sitting on a plate GT2: a plate with some pieces of bread on it GT3: sliced french bread is on a plate that is lying on a table GT4: bread that is sitting on a plate that is on a table GT5: a white plate with lots topped with garlic bread | | ![Image](image4.png) | **Transformer**: two tennis player playing tennis on the ball **M² Transformer**: a tennis player about to hit a ball **AoA Transformer**: a baseball players on a game playing a game **VisualGPT (ours)**: a tennis player hits a ball with a racket | GT1: a man holding a racquet on top of a tennis court GT2: a man with a tennis racket reaches for a ball GT3: a man with a tennis racket is running on a court GT4: a young man is playing a game of tennis GT5: a tennis player in a blue shirt runs toward a ball | | ![Image](image5.png) | **Transformer**: a group of birds that are standing in the grass **M² Transformer**: a flock of birds perched in a tree branch **AoA Transformer**: several giraffe are standing next to each trees **VisualGPT (ours)**: a bird standing in the middle of a pond | GT1: a bird is perched a top a branch over a river GT2: a bird sits on a branch above a stream GT3: a bird on top of a tree branch over water GT4: a picture of an outside region that appears incredible GT5: a bird on a fallen branch in a body of water | Table 2. Caption generated by our VisualGPT, Transformer, M² Transformer and AoA Transformer on 0.1% MS COCO data split. 
| Image | Generated Captions | Ground Truth |
|---|---|---|
| ![Boats](image1) | **Transformer**: several boats are sitting in the middle of a lake<br>**M² Transformer**: a boat filled with boats floating in the water<br>**AoA Transformer**: an empty boat that has water and water<br>**VisualGPT (ours)**: a canal filled with boats in the water | **GT1**: a blue boat docked on a green lush shore<br>**GT2**: a small marina with boats docked there<br>**GT3**: a group of boats sitting together with no one around<br>**GT4**: some boats parked in the water at a dock<br>**GT5**: boats sitting around the side of a lake by a tree |
| ![Pizza](image2) | **Transformer**: pizza slices and pizza in a plate covered pizza<br>**M² Transformer**: people sitting at a table eating pizza and other salad<br>**AoA Transformer**: two pizza eating a table with pizza on the table<br>**VisualGPT (ours)**: a group of pizza on an iron plate with toppings | **GT1**: a set of five pizzas sitting next to each other each with different toppings<br>**GT2**: a handful of prepared pizzas sit next to each other<br>**GT3**: five uncooked pizzas with a variety of different toppings<br>**GT4**: five unbaked pizzas that include various types of cheeses<br>**GT5**: five different pizzas are being prepared over a metal tray |
| ![Dogs](image3) | **Transformer**: a dog holding a frisbee in the water<br>**M² Transformer**: a dog holding a frisbee in a body of water<br>**AoA Transformer**: a dog walking during a frisbee in a stone day<br>**VisualGPT (ours)**: a dog walking through the water with a frisbee | **GT1**: two dogs are playing on the beach catching a frisbee<br>**GT2**: of two dogs only one may be the victor<br>**GT3**: a dog catching a frisbee by another dog on a beach<br>**GT4**: dog jumping up in the air to catch a frisbee in the summer time<br>**GT5**: a dog jumping up into the air to catch a frisbee |
| ![People](image4) | **Transformer**: a group of people taking a child in a in a building<br>**M² Transformer**: a group of people in an airport with their hands<br>**AoA Transformer**: a picture of a young group of people standing for men<br>**VisualGPT (ours)**: a group of people standing around a tv | **GT1**: a group of men standing around a room<br>**GT2**: some people are waiting in a long room<br>**GT3**: people are standing in a room looking at a television screen<br>**GT4**: a person sitting on a bench while the rest look somewhere else<br>**GT5**: a man in red winter clothes sits on a bench with people behind him gather in front of a tv |
| ![Elephants](image5) | **Transformer**: an elephant eating a elephant has a elephant<br>**M² Transformer**: elephant with its trunk with their elephant with its trunk<br>**AoA Transformer**: two elephants standing at a lot of trees<br>**VisualGPT (ours)**: three elephants standing next to some trees | **GT1**: two adult elephants are surrounding a baby elephant<br>**GT2**: a baby elephant kneeling in front of two bigger elephants<br>**GT3**: a baby elephant and it's parents eat fruit<br>**GT4**: elephants eat fruit a baby elephant rummaging in the food<br>**GT5**: a pair of adult elephants with a baby elephant eat from a pile of fruit |
Table 3. Caption generated by our VisualGPT, Transformer, M² Transformer and AoA Transformer on 0.5% MS COCO data split.

| Image | Generated Captions | Ground Truth |
|---|---|---|
| ![Image](image1.png) | **Transformer**: a man in a suit and a woman standing in a shop<br>**M² Transformer**: a man is standing in a shop with a people holding people<br>**AoA Transformer**: a man is working on a bus in a<br>**VisualGPT (ours)**: a group of people standing at an airport with their luggage | **GT1**: several people are purchasing tickets at a bus station<br>**GT2**: some people are checking in at the ticket counter somewhere in Asia<br>**GT3**: people waiting in line with luggage at a ticket counter<br>**GT4**: people are standing near an airport ticket kiosk<br>**GT5**: customers stand at a kiosk waiting for tickets |
| ![Image](image2.png) | **Transformer**: a bus that is parked in front of a building<br>**M² Transformer**: a couple of people walking down the side of a street<br>**AoA Transformer**: a bus is parked in a city street<br>**VisualGPT (ours)**: a while and blue bus is parked on the side of a city street | **GT1**: people standing outside of a blue and white bus<br>**GT2**: an image of a tour bus that is picking people up<br>**GT3**: several people standing around buses and most wearing orange vests<br>**GT4**: a public transit bus pulling up to pick up passengers<br>**GT5**: a city bus at a stop waiting to pick up passengers |
| ![Image](image3.png) | **Transformer**: a blue and white airplane flying through a sky<br>**M² Transformer**: an airplane flying in the air<br>**AoA Transformer**: a plane airplane flying down in the sky<br>**VisualGPT (ours)**: a plane is flying in the air over the trees | **GT1**: there's an airplane in the sky flying over some trees<br>**GT2**: a large plane is flying over a crowd of trees<br>**GT3**: a aeroplane soaring high in the sky above the trees<br>**GT4**: a passenger plane flies in the sky over a forest<br>**GT5**: an airplane is seen flying over several trees |
| ![Image](image4.png) | **Transformer**: a white toilet sitting in a white bathroom next to a sink<br>**M² Transformer**: a cat sitting in the toilet<br>**AoA Transformer**: a bathroom with a toilet and a sink<br>**VisualGPT (ours)**: a cat sitting on top of a bathroom sink | **GT1**: a cat climbing into a bathroom sink looking at someone<br>**GT2**: a cat looks up as it stands in the bathroom sink<br>**GT3**: a large cat stands inside of a clean bathroom sink<br>**GT4**: cat is caught stepping in to the bathroom sink<br>**GT5**: a cute kitty cat in the sink of a bathroom near a brush and other items |
| ![Image](image5.png) | **Transformer**: a little girl is eating a birthday cake<br>**M² Transformer**: a child and a child are sitting at a table with table with table<br>**AoA Transformer**: two children sitting at a table with a laptop computer<br>**VisualGPT (ours)**: a woman and a girl sitting at a table with a birthday cake | **GT1**: a woman and child stand next to a table with cake on it<br>**GT2**: a lady standing near the table with a baby is posing for the camera<br>**GT3**: a woman stands beside a baby in a high chair a table is set with a birthday cake and champagne<br>**GT4**: a woman setting up her house for a party<br>**GT5**: a person standing next to a child in a booster seat |

Table 4. Caption generated by our VisualGPT, Transformer, M² Transformer and AoA Transformer on 1% MS COCO data split.

Figure 4. More examples of visual attention for each word in generated captions.
High visual scores are in blue and low scores in red.

References
{"Source-Url": "https://openaccess.thecvf.com/content/CVPR2022/supplemental/Chen_VisualGPT_Data-Efficient_Adaptation_CVPR_2022_supplemental.pdf", "len_cl100k_base": 4275, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21486, "total-output-tokens": 5413, "length": "2e12", "weborganizer": {"__label__adult": 0.0005846023559570312, "__label__art_design": 0.00321197509765625, "__label__crime_law": 0.0006227493286132812, "__label__education_jobs": 0.0011281967163085938, "__label__entertainment": 0.00041604042053222656, "__label__fashion_beauty": 0.0003814697265625, "__label__finance_business": 0.0003898143768310547, "__label__food_dining": 0.0005517005920410156, "__label__games": 0.0008187294006347656, "__label__hardware": 0.0021610260009765625, "__label__health": 0.0008573532104492188, "__label__history": 0.0004489421844482422, "__label__home_hobbies": 0.00015413761138916016, "__label__industrial": 0.0008368492126464844, "__label__literature": 0.0010671615600585938, "__label__politics": 0.0004363059997558594, "__label__religion": 0.0007615089416503906, "__label__science_tech": 0.364990234375, "__label__social_life": 0.00015866756439208984, "__label__software": 0.033355712890625, "__label__software_dev": 0.58544921875, "__label__sports_fitness": 0.0003628730773925781, "__label__transportation": 0.000728607177734375, "__label__travel": 0.00029659271240234375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19146, 0.02284]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19146, 0.36907]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19146, 0.86293]], "google_gemma-3-12b-it_contains_pii": [[0, 3823, false], [3823, 6843, null], [6843, 9819, null], [9819, 13224, null], [13224, 16585, null], [16585, 19146, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3823, true], [3823, 6843, null], [6843, 9819, null], [9819, 13224, null], [13224, 16585, null], [16585, 19146, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19146, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19146, null]], "pdf_page_numbers": [[0, 3823, 1], [3823, 6843, 2], [6843, 9819, 3], [9819, 13224, 4], [13224, 16585, 5], [16585, 19146, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19146, 0.06322]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
59d33da9a6d6c5e63ad15bfe353fef5b48732721
Complementarity between Simulation and Formal Verification
Transformation of PROMELA Models into FDDEVS Models: Application to a Case Study

Aznam Yacoub, Maamar Hamri and Claudia Frydman
Aix Marseille Université, CNRS, ENSAM, Université de Toulon, LSIS UMR 7296, 13397, Marseille, France

Keywords: Formal Methods, Spin, PROMELA, Formal Verification, DEVS, FDDEVS, Simulation, Transformation.

Abstract: Discrete Event System Specification (DEVS) is a simple and comprehensive way to describe complex discrete-event systems hierarchically. A few years ago, Finite and Deterministic DEVS (FDDEVS) was introduced to support verification analysis of a subclass of DEVS problems, in the same way as formal methods. This paper presents guidelines to transform behavioral models used in formal methods, such as critical sections described here in PROMELA, into FDDEVS models, and shows the benefits of such a transformation.

1 INTRODUCTION

With the growing complexity of systems, designing stable and robust systems has become harder and harder. Nowadays, creating reliable software, hardware or systems without any bug requires a lot of strong knowledge and experience. For many years, however, two disciplines that make these tasks easier have emerged. On the one hand, Modeling and Simulation (M&S) allows working on a model and performing tests which are generally too expensive or impractical to do on the real system. In order to design the simulated system, M&S bases its theory on assumptions made about the real system; the quality of the simulation consequently depends on the quality of the theory about the system being studied (Zeigler, 1984). On the other hand, Verification and Validation (V&V) using formal methods allows guaranteeing the absence of problems in a system by mathematical verification: using a rigorous description of the system in a formal and expressive mathematical language (like propositional logic), these techniques ensure that the system meets its specifications by testing them as qualitative properties on the model of the real system. But modeling an entire system with these techniques is very hard, because of the complexity of the formalisms.

2 MOTIVATIONS

The work described in this paper is part of our desire to bring M&S and formal V&V closer together. Approaches developed in both disciplines could be complementary. Finding a general method to transform formal models into simulation models and vice versa would then allow us to take advantage of both formal verification and simulation. In this sense, we could use simulation to verify systems for which formal verification failed. On the one hand, Discrete Event Simulation (DES) provides a simpler way to verify, analyze and validate systems through a modular and hierarchical formalism: the Discrete Event System Specification (DEVS) introduced by Zeigler (Zeigler, 1976). DEVS allows representing a full range of systems which can be assimilated to discrete-event systems. One of the advantages of the DEVS framework, as a fundamental requirement of M&S theory, is the separation of modeling from simulation, which enables reusability, stand-alone testing and hierarchical construction. Furthermore, the expressiveness of the DEVS formalism makes modeling easier, and the identification of a specific experimental frame appropriate to a model makes it easier to uncover assumptions about the real system.
But that also means that simulation depends on specific scenarios and allows testing the system only in some circumstances, unlike formal methods, which guarantee the correctness of the system in all cases. On the other hand, V&V can encounter, especially with model-based formal methods, difficulties such as the state explosion problem: when the system grows, the size of its state space grows exponentially. Even though model-checking tools like SPIN are able to verify models with 10^{120} states thanks to the use of Binary Decision Diagrams (BDD) for the representation of the state space (Miller et al., 2010), these verification tools do not scale to bigger systems. In practice, formal verification likewise cannot handle systems with an uncountably infinite state space. For these cases, simulation could be a very interesting complementary approach to the verification tools, especially as FDDEVS supports both verification and simulation.

Because many different techniques are used in each of these disciplines, we focus here on only two formalisms in order to validate our approach: FDDEVS (Hwang and Zeigler, 2006a) and PROMELA (Holzmann, 2004). One must keep in mind that the approach we want to develop does not depend on this choice of formalisms. Finite and Deterministic Discrete Event-system Specification (FDDEVS) is a subclass of DEVS which is used to describe, model and simulate discrete event systems; discrete event systems (Zeigler, 1976) are those whose current state is evaluated only at specific points in time, called events. PROMELA, for its part, is especially used to describe, model and verify asynchronous and concurrent systems: models are translated into non-deterministic automata, properties to be verified are expressed in Linear Temporal Logic (LTL) and translated into Büchi automata, and the SPIN model-checker performs verification on these two final models; SPIN can also operate as a simulator, which allows a good comparison between this tool and our approach with the FDDEVS simulator. We will then introduce a way to transform PROMELA models into FDDEVS models through one example, and show why using a simulation approach could be beneficial for formal methods in some cases, before discussing the possible contributions of such a method for both domains.

3 VERIFICATION OF THE DEKKER'S ALGORITHM

In this paper, we exclusively handle our problem through one example which is representative of the classic problems addressed by V&V and model-based formal methods. The work introduced in this paper is thus based on the problem of mutual exclusion, and especially its resolution by the Dekker's algorithm.

3.1 The Dekker's Algorithm of the Mutual Exclusion Problem

The Dekker's algorithm of mutual exclusion was introduced in 1965 by Theodorus Dekker, according to Dijkstra (Dijkstra, 2002). It is the first and a relatively simple solution for a well-known problem in concurrent systems: mutual exclusion, which requires that two processes never access a shared critical resource at the same time.
The algorithm for a process \( p \) considers two boolean variables \( b_p, b_q \) and a flag \( k \). The two boolean variables indicate whether processes \( p \) and \( q \) want to access the critical resource or not. If both of them wish to reach the resource, the flag \( k \) acts as a referee and indicates which of them can immediately have the resource. The process which is forbidden to enter the critical section then turns its willingness flag to \( \mathit{false} \) and enters active waiting while the other process enters the critical section. At the end, the process which had the resource in this turn sets the flag \( k \) to the value of the other process, which guarantees the fairness property ensuring that processes are fairly executed.
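To make this informal description concrete, the sketch below implements the protocol with two ordinary Python threads; the shared flags mirror the wantp, wantq and turn variables of the PROMELA code in Program 1, while the critical-section body (a counter increment) and the iteration count are placeholders added for illustration.

```python
# Minimal sketch of Dekker's algorithm with two Python threads.
# want[0]/want[1] and turn mirror wantp, wantq and turn from Program 1;
# the critical-section body (a counter increment) is a placeholder.
import threading

want = [False, False]
turn = 0
counter = 0          # shared resource, touched only inside the critical section

def process(me: int, iterations: int = 1000) -> None:
    global turn, counter
    other = 1 - me
    for _ in range(iterations):
        want[me] = True
        while want[other]:            # the other process also wants the resource
            if turn == other:         # not our turn: back off and busy-wait
                want[me] = False
                while turn == other:
                    pass
                want[me] = True
        counter += 1                  # critical section
        turn = other                  # exit protocol: hand the priority over
        want[me] = False

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                        # 2000 expected if mutual exclusion held
```

Running the sketch only exercises one interleaving at a time; the formal guarantees discussed in the next section come from the SPIN and FDDEVS analyses, not from such a test run.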
3.2 Verification by Model-checking

Model-checking is a model-based formal method (Huth and Ryan, 2000) in which the considered system is described as a state transition system \( M \) used by the model-checker to verify whether \( M \models \phi \), where \( \phi \) is a set of properties expressed in a temporal logic. Among all model-checking languages, our work focuses on the PROMELA language introduced by Holzmann (Holzmann, 1997) (Holzmann, 2004). PROMELA was especially designed to verify dynamic concurrent systems, which are translated into non-deterministic automata. Properties to be verified are expressed in Linear Temporal Logic (LTL) before being translated into Büchi automata. The SPIN model-checker performs verification on these two final models. Moreover, SPIN can also operate as a simulator, which allows a good comparison between this tool and our approach with the FDDEVS simulator.

The PROMELA implementation of the Dekker's algorithm (given in Program 1) is very natural, thanks to the characteristics of the language. Processes are expressed as proctype blocks, and communication between them is done through the global variables wantp, wantq and turn, which respectively represent the variables \( b_p \), \( b_q \) and \( k \) (see Figure 1: Automata generated by the PROMELA implementation of the Dekker's algorithm). The boolean variables csp and csq indicate whether the processes \( p \) and \( q \) are respectively in the critical section or not. In this example, we also test the safety property (line 5): "The processes \( p \) and \( q \) never enter the critical section at the same time". The SPIN model-checker verifies the LTL property by first translating the property into a Büchi automaton, and then by computing the synchronous product between this automaton and the asynchronous product of the two automata that represent the processes \( p \) and \( q \) (Figure 1). The emptiness of the language accepted by the resulting automaton indicates whether the property is satisfied or not (Holzmann, 1997). The total state space of the final reachability graph includes 148 states and 279 transitions; in 131 cases, transitions led to a path already verified. It will be interesting to remember this when we compare this verification method with the new one introduced later.

Note that verification by model-checking has many advantages. Among them, translation from the informal algorithm is very intuitive. Moreover, SPIN is a mature tool with many efficient algorithms to reduce the total state space and increase the speed of the verification. The use of LTL is also a good thing, because the verification is then based on a simple logic formula. Furthermore, SPIN integrates a simulation tool which allows engineers to verify the trace of the execution of the program. In this sense, verification by model-checking seems to be an easy and safe way to ensure that a system has no bug relative to the given specifications. However, M&S provides another approach to problem modelling. DEVS and its subclass FDDEVS were designed (Hwang and Zeigler, 2006a) to formalize discrete-event systems in a very intuitive way. We show in the next section how to simulate and verify the Dekker's algorithm with the FDDEVS formalism.

Program 1: Implementation of the Dekker's algorithm in PROMELA

```promela
 1: bool wantp = false, wantq = false;
 2: byte turn = 1;
 3: bool csp = false, csq = false;
 4:
 5: ltl { [] (!(csp && csq)) }
 6:
 7: active proctype p() {
 8:   do
 9:   :: wantp = true;
10:      do
11:      :: !wantq -> break
12:      :: else ->
13:         if
14:         :: (turn == 1)
15:         :: (turn == 2) ->
16:            wantp = false;
17:            (turn == 1);
18:            wantp = true
19:         fi
20:      od;
21:      csp = true;
22:      csp = false;   /* leave the critical section */
23:      wantp = false;
24:      turn = 2
25:   od
26: }
27: /* ... the process q is symmetrical to the process p ... */
```

4 THE DEKKER'S ALGORITHM AS A FDDEVS

4.1 Simulation-based Verification

As we previously said, discrete-event simulation provides a more natural way for the modelling, verification and validation of discrete-event systems. Simulation is done under specific conditions, called an Experimental Frame (EF) (Zeigler, 1976). Simulation-based verification then consists in verifying that the outputs produced by the model for a specific EF (in other terms, for specific inputs) meet some system requirements or specifications. Simulation also allows verifying the behaviour of a system, meaning its real evolution, unlike formal methods, which only guarantee that the model meets requirements under all circumstances. In other words, simulation allows understanding how the system reacts when an unexpected event occurs. Simulation thus provides not only a way to verify that a system meets requirements in an EF, but also allows understanding how it evolves over time. This is why we believe that using simulation and formal verification jointly ensures that the system of interest meets its initial specification in all cases and that its behaviour (its real temporal evolution) conforms to what was expected.

4.2 Introduction to FDDEVS

Finite and Deterministic Discrete Event-system Specification (FDDEVS) is a formalism based on the DEVS formalism (Zeigler, 1976) and introduced in (Hwang and Zeigler, 2006a) to model and analyze discrete event systems in both simulation and verification.

4.3 PROMELA to FDDEVS Transformation Rules

As we said, FDDEVS allows analysing a problem in a simulation way, in the same manner as DEVS. It is thus interesting to compare the analysis of the Dekker's algorithm provided in the previous section with the results obtained with a simulation approach using FDDEVS. Note that, instead of modeling the problem from the informal Dekker's algorithm, we directly wanted to obtain the FDDEVS model from the PROMELA code. Firstly, we know that the PROMELA implementation of the Dekker's algorithm can be translated into a FDDEVS model. If we consider how the SPIN simulation works, we can decide that the execution of each line of the PROMELA code corresponds to an event in our FDDEVS model.
In fact, we consider only the change of the value of each of the variables wantp, wantq and turn as being done by an internal or an external event. Moreover, we saw in Section 3.2 that the sets of states of the automata representing each process in PROMELA are finite. The second and third restrictions of a FDDEVS can be decided arbitrarily in our case, because no explicit time restriction appears in the PROMELA verification.

Now that we know we can translate the PROMELA code into a FDDEVS model, we slightly change the algorithm for convenience: instead of the global variables wantp, wantq and turn, we consider three variables wantme, wantother and my_turn for each process. In the same way, we consider the csp and csq variables as local variables (and not as global variables anymore). Besides, the lines wantp = false; turn = 2 and wantq = false; turn = 1 are considered as atomic instructions. Then, we define each process as an atomic FDDEVS model
\[ P = \langle X, Y, S, s_0, \tau, \delta_x, \delta_y \rangle \]
where
- \( X = \{?W_m, ?W_n, ?T_c\} \), where \(?W_m\) denotes that the other process wants to enter the critical section, \(?W_n\) denotes that the other process does not want to enter the critical section anymore, and \(?T_c\) denotes a change of the value of the my_turn variable;
- \( Y = \{!W_m, !W_n, !T_c\} \), where \(!W_m\) is sent when the current process wants to enter the critical section, \(!W_n\) is sent when the current process does not want to enter the critical section anymore, and \(!T_c\) is sent when the current process leaves the critical section;
- \( S = \{(\mathit{wantme}, \mathit{wantother}, \mathit{my\_turn}) \in \{0,1\} \times \{0,1\} \times \{0,1\}\} \cup \{Cr\} \cup \{Wait\} \), where wantme indicates that the current process wants to enter the critical section, wantother that the other process wants to enter the critical section, and my_turn that the current process has priority for the critical section; the state "Cr" means the current process is in the critical section; the state "Wait" represents the active waiting of lines 14-18 of the PROMELA code;
- \( s_0 = (0, 0, 0) \) or \( s_0 = (0, 0, 1) \), depending on the value of the turn variable in the PROMELA code.

Now, in order to build the transition table of each FDDEVS atomic model and to define the transition functions, we apply the following rules:

1. Each modification of a global variable leads to a new state;
2. The initial state of each FDDEVS atomic model depends on the turn variable. If turn is equal to 1, the process P1 is in \( s_0 = (0, 0, 1) \) and P2 in \( s_0 = (0, 0, 0) \); otherwise, P1 is in \( s_0 = (0, 0, 0) \) and P2 in \( s_0 = (0, 0, 1) \);
3. When the value of a global variable is changed, the process which changes the value emits an output event before exiting its current state through the internal transition function; the other process changes its current state when it receives the input event;
4. If a state is changed by an input event, the internal schedule is preserved;
5. The lifespan of each state \( s \) is \( \tau(s) = 0 \), except for the states \((1, 1, 1)\) and \( \mathit{Wait} \), whose lifespan is infinite (because the loop condition only depends on the value of a global variable which is not updated in the loop).

With these rules, we obtain the FDDEVS model shown in Figure 2.

5 RESULTS AND DISCUSSION

5.1 Verification with the FDDEVS Framework

After designing the FDDEVS model, we implement it using Hwang's framework (Hwang and Zeigler, 2006a), which generates a reachability graph (Hwang and Zeigler, 2006b) of 13 vertices and 17 edges for the verification.
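As an informal illustration of this kind of check (not the Hwang framework itself), the following sketch explores a state graph by breadth-first search and reports whether a state violating a safety predicate is reachable; the toy transition relation at the bottom is a stand-in, not the generated Dekker model.

```python
# Generic breadth-first reachability check over an explicit state graph.
# The toy transition relation below is a stand-in, not the generated Dekker model.
from collections import deque

def find_bad_state(initial, successors, is_bad):
    """Return a reachable state satisfying is_bad, or None if the property holds."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            return state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy example: states are (in_cs_p, in_cs_q); this hand-written relation never
# lets both flags become True, so the mutual-exclusion check passes.
graph = {
    (False, False): [(True, False), (False, True)],
    (True, False): [(False, False)],
    (False, True): [(False, False)],
}
bad = find_bad_state(
    (False, False),
    lambda s: graph.get(s, []),
    lambda s: s[0] and s[1],
)
print("safety holds" if bad is None else f"safety violated in state {bad}")
```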
The property \( G\,\neg(\mathit{csp} \wedge \mathit{csq}) \) was verified by checking whether a state exists in the reachability graph in which both processes are in the Critical state. Moreover, the simulation with DEVS shows the importance of the execution order of the instructions. Indeed, the lifespan of each state directly influences the scenario of the model. We see, with the configuration where \( \tau(s) = 0 \) for each state, that process \( p \) directly enters the critical section, and the active-wait problem is never encountered. But if \( \tau(s) = \alpha \) with \( \alpha > 0 \), then the scenario given by the model is the one where both processes want to enter the critical section at the same time. The simulation scenario is thus encoded in the model given by the transformation. In fact, this problem comes from the precedence of the external transition over the internal transition, or of the internal transition over the external transition. In other words, if two events occur at the same time, the model gives the priority to the internal or the external transition according to a \( \delta_{\text{confluent}} \) function defined by \( \delta_{\text{confluent}} : S \times X \rightarrow S \), which leads to repeating only one possible execution. This problem could be solved by generating one model per state of the base FDDEVS atomic model, in which we change the \( \delta_{\text{confluent}} \) function to change the priority of the events. But for the Dekker's algorithm, the critical point is when both processes want to enter the critical section at the same time, so only two coupled models are needed to cover the verification of the entire problem.

Moreover, the transformation makes explicit something which is implicit in the PROMELA model: even if the execution order of the instructions is not really taken into account in the algorithm, it depends on the system, meaning the FDDEVS model better represents the reality of the operating system scheduler than the PROMELA model, although model-checking verifies all possible executions too. Furthermore, given the size of the reachability graph obtained by this method, we show that the transformation can be really economical for verifying some targeted scenarios. Then, instead of directly verifying the PROMELA model for all scenarios, designers and modelers could use the transformation to verify precise scenarios before using the model-checking tools.

5.2 Discussion around the "Wait" State and "Critical" State

There is another problem with the method introduced in this paper. It concerns the active wait loop given in lines 14-18, which we redesigned as a Wait state for convenience and simplification. In the same way, considering the lines following the exit of the critical section as atomic instructions was a great simplification. In fact, if we rigorously applied our method, the atomic model of the process would be incorrect for several reasons. Firstly, applying our method rigorously would force us to create an internal transition to the existing state \((0, 1, 0)\). But because \( \tau(0, 1, 0) = 0 \) by definition, the process would try again to go to \((1, 1, 0)\) at the end of the lifespan, which is not the behaviour of the algorithm.
Besides, because we cannot redefine the \( \tau \) function, we must then define our state space as a set of 4-tuples
\[ S = \{(\mathit{wantme}, \mathit{wantother}, \mathit{my\_turn}, \alpha) \mid \alpha \in A\} \cup S' \]
where \( S' = \{Cr\} \cup \{Wait\} \) and \( A \) is a finite set of real values, and redefine our \( \tau \) function as
\[ \forall s = (\cdot, \cdot, \cdot, \alpha) \in S,\ \tau(s) = \alpha \]
in order to solve this problem. This leads to differentiating states by their lifespan, but it is not a satisfying solution because it corresponds to a transformation based on semantics. We could also argue that the need to define a lifespan value for each state is also based on semantics. However, the method we previously introduced allows defining default values. For instance, if a loop condition only depends on a global variable, then we could decide that the lifespan of the corresponding state will be \(\infty\); otherwise, the lifespan will be equal to 0, as we previously defined. In the same way, the state Critical creates the same problem if we do not consider the instructions following the exit as atomic instructions.

6 CONCLUSION AND FUTURE WORKS

In this paper, we showed that we can translate a formal algorithm written in PROMELA into a FDDEVS model, which supports both verification and simulation. The transformation has the advantage of allowing verification of some interesting scenarios in a reduced state space, in comparison with the state space generated by the model-checker. Moreover, the resulting model is more representative of reality, in the sense that time is explicitly expressed. Taking this into account, transforming the PROMELA model into a FDDEVS model allows working on a complementary model during the design phase. A simulation with SPIN executes instructions step by step, allowing simulation of the randomness of the processor, but working on an explicit temporal model has the advantage of allowing explicit changes of the behaviour of the system over time. However, the semantic changes made to the initial PROMELA code, in order to produce a good equivalent FDDEVS model, raise the legitimate question of the equivalence of the models. These changes, based on semantics, were intended to make the transformation feasible, but we must show that they express the same system. Moreover, the method introduced in this paper also opens the question of the generalizability of this approach to other formalisms and other systems, and also of the automation of the transformation.

REFERENCES
{"Source-Url": "http://www.scitepress.org/Papers/2014/50379/50379.pdf", "len_cl100k_base": 5263, "olmocr-version": "0.1.50", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 21053, "total-output-tokens": 6116, "length": "2e12", "weborganizer": {"__label__adult": 0.0005102157592773438, "__label__art_design": 0.000392913818359375, "__label__crime_law": 0.0006351470947265625, "__label__education_jobs": 0.0007886886596679688, "__label__entertainment": 9.876489639282228e-05, "__label__fashion_beauty": 0.00022709369659423828, "__label__finance_business": 0.0003838539123535156, "__label__food_dining": 0.0005450248718261719, "__label__games": 0.0009236335754394532, "__label__hardware": 0.0014553070068359375, "__label__health": 0.0011043548583984375, "__label__history": 0.0003788471221923828, "__label__home_hobbies": 0.0001291036605834961, "__label__industrial": 0.0008974075317382812, "__label__literature": 0.00039768218994140625, "__label__politics": 0.0004944801330566406, "__label__religion": 0.00072479248046875, "__label__science_tech": 0.1055908203125, "__label__social_life": 0.00011849403381347656, "__label__software": 0.005443572998046875, "__label__software_dev": 0.876953125, "__label__sports_fitness": 0.0004723072052001953, "__label__transportation": 0.0010442733764648438, "__label__travel": 0.000274658203125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24562, 0.04716]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24562, 0.45572]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24562, 0.9007]], "google_gemma-3-12b-it_contains_pii": [[0, 3629, false], [3629, 8896, null], [8896, 12264, null], [12264, 16668, null], [16668, 21571, null], [21571, 24562, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3629, true], [3629, 8896, null], [8896, 12264, null], [12264, 16668, null], [16668, 21571, null], [21571, 24562, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24562, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24562, null]], "pdf_page_numbers": [[0, 3629, 1], [3629, 8896, 2], [8896, 12264, 3], [12264, 16668, 4], [16668, 21571, 5], [21571, 24562, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24562, 0.0]]}
olmocr_science_pdfs
2024-12-01
2024-12-01
a1a0e9e1e5ad4ae45bac1f2fc0e820019949b330
Towards an UML Profile for the Description of Software Architecture

Abdelkrim Amirat, Mourad Oussalah

To cite this version:
Abdelkrim Amirat, Mourad Oussalah. Towards an UML Profile for the Description of Software Architecture. International Conference on Applied Informatics (ICAI'09), Nov 2009, Bou Arréridj, Algeria. pp.226-232. hal-00483680

HAL Id: hal-00483680
https://hal.science/hal-00483680
Submitted on 16 May 2010

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Towards an UML Profile for the Description of Software Architecture

Abdelkrim Amirat¹,² and Mourad Oussalah¹
¹Laboratoire LINA, CNRS UMR 6241, Université de Nantes, France
²Centre Universitaire de Souk Ahras, Algérie
{abdelkrim.amirat ; mourad.oussalah}@univ-nantes.fr

Abstract

Existing ADLs (architecture description languages) have the advantage of formally specifying the architecture of component-based systems. But ADLs have not come into extensive use in industry, since ADL users must learn a distinct notation specific to architecture, and ADLs do not address all the stakes of a development process that is becoming more diversified every day. On the other hand, UML is a de facto standard general modeling language for software development, as UML provides a consistent notation and various supporting tools during the whole software development cycle. A number of research efforts on architecture modeling based on UML have been carried out. In particular, many research results have been introduced that specialize UML through its extension mechanism in order to explicitly represent core architecture concepts that UML does not fully support. In this paper, we examine the architecture modeling elements that can be represented in UML 2.0 and discuss how to extend and specialize UML 2.0 in order to make it more suitable for representing architectures.

Keywords: Software Architecture Modelling, UML 2.0, OCL, Profile and Metamodel.

1. Introduction

Software architecture has emerged as an important subdiscipline of software engineering. A key aspect of the design of any software system is its architecture, i.e. the fundamental organization of the system embodied in its components, their relationships to each other and to the environment, and the principles guiding its design and evolution [10]. Architecture can be modeled according to different viewpoints. From a run-time perspective, two viewpoints are frequently used in software architecture: the structural viewpoint and the behavioural viewpoint [10]. In this work we are interested in the structural viewpoint, which can be specified in terms of Components, Connectors and Configurations (the C3 model). Thereby, from this viewpoint, an architecture description should provide a formal model of the architecture in terms of components and connectors and how they are composed together.
The Unified Modeling Language (UML) [5] [6] [7] is a family of design notations that is rapidly becoming a de facto standard for representing the software artifacts obtained in the various activities (like requirement acquisition, requirement analysis, system design, or system deployment) of a software development process. For this reason, there have been attempts to use this language to represent the software architecture of systems as well. However, the language is not designed to represent syntactically and semantically the elements of software architecture [2]. The attempts to instantiate the constructors defined in the UML metamodel, or to extend UML by using stereotypes to represent these elements, have led to the same representations (boxes and lines) that have been widely criticized by the software architecture community. Consequently, the only solution is to extend the UML metamodel. However, the extension of the UML metamodel implies the modification of the language, which means a deviation from the standard. This has been one of the reasons given in the literature to extend UML with stereotypes or by specifying profiles for the area of interest [11]. A question that arises at this point is why not use Architecture Description Languages (ADLs) to describe the application's software architecture, therefore avoiding the change to the UML metamodel. Indeed, the currently available architectural description languages (ADLs) have not spread in industry, mainly because they are not generic enough, are not standardized and are poorly supported by tools. UML is a standard, but its current semantics fails to meet the criteria stated above: it is weak at describing interfaces, the abstractions it provides are not univocal, and it provides little support for modeling architecturally significant information [3]. Additionally, the ADLs are not integrated in any development process (like the Unified Software Development Process [4]), while UML is. Hence, representing the application architecture with UML allows the integration of this representation with the rest of the software artifacts.

In this paper, we propose a UML 2.0 profile for the explicit components, connectors and configurations defined in previous work [8] [9]. The remainder of the paper is organized as follows. In Section 2 we describe the main elements that appear in the description of the C3 architectural elements. Section 3 describes the UML extension profile as specified by the Object Management Group (OMG). In Section 4 we present several attempts to extend UML for representing software architecture. In Section 5 we characterize C3 elements as UML metaclasses by defining a UML profile. Finally, Section 6 presents conclusions and future lines of research.

2. Basic Architecture Elements of C3 Model

The C3 model supports the description of software architectures from a structural viewpoint. In C3, an architecture is described in terms of components, connectors, and their composition (configuration). Figure 1 depicts its main constituents.

Figure 1. Architectural Concepts

Components are described in terms of external interfaces and an internal behaviour. Their architectural role is to specify the computational elements of a software system. Interfaces are described in terms of ports and services. Ports are described in terms of connections between a component and its environment. Figure 2 defines the metamodel of the component concept in C3 from the structural point of view.

Figure 2. Component Meta Model in C3
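To make the structural reading of Figures 1 and 2 concrete, here is a small Python sketch of components exposing named ports with services; the class and attribute names are assumptions made for this illustration, not part of the C3 specification or of any C3 tooling.

```python
# Illustrative data structures for the structural part of the C3 model;
# names and fields are assumptions made for this sketch, not C3's official API.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Direction(Enum):
    PROVIDED = "provided"
    REQUIRED = "required"

@dataclass
class Port:
    name: str
    direction: Direction           # simplified: one direction per port
    services: List[str] = field(default_factory=list)

@dataclass
class Component:
    name: str
    ports: List[Port] = field(default_factory=list)

# Example: a client component with one required and one provided port.
client = Component("Client", [
    Port("query_out", Direction.REQUIRED, ["executeQuery"]),
    Port("result_in", Direction.PROVIDED, ["receiveResult"]),
])
print([(p.name, p.direction.value) for p in client.ports])
```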
Connectors are special-purpose components. They are described, like components, in terms of external interfaces and an internal behaviour. However, their architectural role is to connect components together: they specify interactions among components. The internal behaviour is described by the glue protocol. Interfaces are described in terms of roles and services. Attachments describe the different possible connections of roles with the external environment. Figure 3 depicts the main constituents of connectors.

Figure 3. Connector Meta Model in C3

In order to attach a port to a role, the interfaces of the two elements must be compatible, i.e. the type of the component must be defined in the interface of the connector. So, a provided port will be connected with a required role, and a required port will be connected with a provided role. Thereby, an attached port/role pair can transport values (which can be data, connections, or even architectural elements). From a black-box perspective, only the ports of components, the roles of connectors and the values passing through connections are observable. Components and connectors can be composed to construct configurations (composite elements), which themselves become components. Configurations can be decomposed and recomposed in different ways, or with different components, in order to construct different compositions. The visible parts of configurations are their interfaces, which are defined in terms of ports and services. Ports are described in terms of connections between the configuration and its internals on one side, and between the configuration and its environment on the other side. Figure 4 defines the metamodel of the configuration concept in C3 from the structural point of view.

Figure 4. Configuration Meta Model in C3 (classes Configuration, Component, Connector and Port, each carrying a name attribute)

3. UML 2.0 Profile

UML provides a number of extension mechanisms that allow designers to customize and extend the semantics of model elements:

- Constraints place additional semantic restrictions on model elements. The possibilities for constraints are numerous and include type constraints on class attribute values, constraints on the construction of associations between classes, and so on.
- Tagged values allow new attributes to be added to particular elements of the model. The stereotype defines a number of tagged values. Each tagged value is typed with a data type: number, string, boolean, or user-defined enumeration.
- Stereotypes allow groups of constraints and tagged values to be given descriptive names (with the name specified in double angle brackets) and applied to model elements, effectively creating a new yet restricted form of metaclass for constructing models. The semantic effect is as if the constraints and tagged values were attached directly to those elements.
- UML Profiles combine the concepts of stereotypes, tagged values, and constraints to provide a coherent and concise dialect of UML for a specific family of applications.

4. UML Extension Mechanisms

UML 2.0 has become an industry standard for the modeling, design and construction of software systems as well as more generalized business and scientific processes. In UML 2.0 there is no specific diagram for modeling architectures.
In fact, constructs for architecture description are not directly provided, but architecture description is supported and can be expressed as a combination of different views, e.g. the 4+1 views. UML 2.0 provides a major improvement in its support for architecture description, with a major enhancement of the Component Diagram and the introduction of a new diagram, the Composite Structure Diagram. So, in UML 2.0 components have been generalised and are considered as higher-level than classes. The definition of UML profiles for modelling software architecture is not new; [1] identifies three possible strategies for modeling software architectures using UML, suggested by UML's four-layer metamodelling architecture:

- using UML "as is";
- constraining the UML metamodel using UML's built-in extension mechanisms (e.g. a UML profile);
- extending the UML metamodel to directly support the needed architectural concepts.

Each strategy has certain potential advantages and disadvantages. This section presents a brief discussion and preliminary evaluation of the strategies. In order to reap the benefits of standardization, we require that any resulting notation adhere to the syntax and semantics of UML.

4.1 Using UML "As Is"

Using UML 2.0 "as is" is not a good choice of strategy [1]. The modeling capabilities provided by UML 2.0 "as is" do not fully satisfy the structural and behavioural requirements for describing software architectures, because UML 2.0 does not provide specialized constructs for modeling software architectures, in particular for modeling software architecture from a runtime perspective. For example, although they are different architectural elements with very different responsibilities, components and connectors must be modeled in UML 2.0 using the same mechanism. Hence, describing software architecture in UML 2.0 is an error-prone approach.

4.2 Constraining UML

This strategy uses profiles, also sometimes called lightweight built-in extension mechanisms. The most important profile element is the stereotype. Stereotyping is a pure extension mechanism. The model elements marked with a stereotype have the same structure (attributes, associations, operations) defined by the metamodel element that describes them, plus the constraints and tagged values added by the stereotype to that metamodel element. This is accomplished via the extension mechanisms described in Section 3. However, with stereotypes we cannot change the semantics of the metamodel elements (at most, we can refine them), change their structure, or create new elements of that metamodel. So, an architecture specified in this manner can still be manipulated by standard UML tools and remains understandable to UML users.

4.3 Augmenting UML

This strategy is a heavyweight extensibility mechanism as defined by the specification of the Meta Object Facility (MOF) [5][11]. In this strategy the goal is to extend the UML metamodel by explicitly adding new metaclasses and other meta-constructors. The potential benefit of such an extension is that it could fully capture every desired feature of every ADL and provide "native" support for software architectures in UML. However, the challenge of standardization is finding a language that is general enough to capture the needed concepts without adding too much complexity, whereas such a modification would result in a notation that is overly complex.
More importantly, the notation would not conform to the UML standard and could become incompatible with UML-compliant tools. In this work we have experimented with the second strategy. Indeed, the use of a UML profile as an extension mechanism provides the best compromise between remaining compliant with UML and specialising UML with precise semantics.

5. UML 2.0 Profile for C3

First of all, we identify the target metaclasses of the UML 2.0 metamodel which allow stereotyping the structural concepts as well as the behavioral ones. The C3 structural concepts component, connector and configuration are considered as types. Furthermore, those concepts are treated as entities having the same level of abstraction (first-class entities). Finally, the external vision of the component and configuration concepts is based on a set of ports, and the external vision of the connector concept is based on a set of roles. Although both the component and class concepts of UML 2.0 have the same expressive power, they are used as bases for stereotyping, respectively, the component and connector concepts of C3. The state machine concept of UML 2.0 is used as a base for stereotyping the behavioral aspects of the C3 elements. A C3 interface is described by a stereotype of the UML 2.0 interface, «C3Interface».

5.1 Components

The UML 2.0 component is the closest concept to the C3 component, so the former concept will be used as a base for stereotyping the latter. Invariant 1 ensures that such components only have interfaces through C3 ports and properties: there are no required or provided interfaces associated directly with a C3Component, and all ports associated with a C3Component are C3Ports with the port type. A C3 component is described by a stereotype of the UML 2.0 component, «C3Component», as depicted by Figures 5 and 6.

```
context Component inv: -- invariant 1
  self.isC3Component() implies
    self.provided->isEmpty() and self.required->isEmpty() and
    self.ownedPort->forAll(p | p.stereotype = C3Port and p.C3PortType = #port) and
    self.realisation->isEmpty() and
    self.stateMachine->size() = 1
```

Figure 5. OCL description for a component

Figure 6. Component Meta Class in UML 2.0 Meta Model

5.2 Ports

Ports identify points of interaction between a component and its environment. UML ports are features of classifiers that specify distinct points of interaction between the classifier and its environment. UML ports have required and provided interfaces. We use a combination of a UML port and its corresponding required and provided interfaces to express C3's port concept, as illustrated by Figure 7. Ports can only be used with components, and they have exactly one provided and one required interface.

5.3 Connectors

Representing connectors using UML's assembly connector would be visually appealing, but we would lose expressiveness, because C3 connectors may be much more complex than a simple interface match. They can be, for example, a protocol, or an SQL link between two components (a client and a database). Moreover, when reusing components built by different teams, it is normal that their interfaces do not match exactly. The connector may provide the required glue between the components, and this must be made explicit in the design. In order to represent the concept of connector, which has no semantic equivalent in UML, we use a stereotype of the UML class named «C3Connector»; it has no other interfaces than the ones defined through its roles and properties, as depicted by Figures 8 and 9.
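The port/role compatibility rule recalled in Section 2 (a provided port attaches to a required role and vice versa) can be sketched as a small check; the Direction enum and the can_attach helper are illustrative names, not part of C3 or of the UML profile.

```python
# Hypothetical check for the C3 attachment rule: a provided port may only be
# attached to a required role, and a required port to a provided role.
from enum import Enum

class Direction(Enum):
    PROVIDED = "provided"
    REQUIRED = "required"

def can_attach(port_direction: Direction, role_direction: Direction) -> bool:
    """Port and role directions must be complementary for a valid attachment."""
    return port_direction != role_direction

assert can_attach(Direction.REQUIRED, Direction.PROVIDED)
assert not can_attach(Direction.PROVIDED, Direction.PROVIDED)
print("attachment rule checks passed")
```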
5.4 Roles

In C3, roles are related to connectors in the same way as ports are related to components. Thus, it makes sense to represent C3 roles as constrained UML ports, through the use of the «C3Role» stereotype, as illustrated by Figure 10.

5.5 Configurations

We introduce stereotypes for modeling the attachments of components to connectors and for C3 configurations.

Stereotype C3Attachment, for instances of the metaclass Association:

- C3 attachments are associations between two elements: `self.oclType.end->size() = 2`
- One end of the association must be a C3 component. Let `ed = self.oclType.end`: `ed[1].multiplicity = "1..1" and ed[1].class.stereotype = C3Component`
- The other end of the association must be a C3 connector: `ed[2].multiplicity = "1..1" and ed[2].class.stereotype = C3Connector`

Stereotype C3Configuration: a C3Configuration is made up of only C3 model elements: `self.oclType.elements->forAll(e | e.stereotype = C3Component or e.stereotype = C3Connector)`

6. Related Work

Different UML profiles dedicated to the description of software architecture have been proposed in the literature. For instance, the SAE Architecture Analysis and Design Language (AADL [13]) standard includes UML 1.4 and UML 2.0 profiles that add the real-time and embedded-systems semantics of AADL to UML [14]. In [16] the authors establish a UML 2.0 profile for the ADL ACME. The authors of [15] point out some weaknesses of this work, especially related to the proposed representation of ADL connectors in UML 2.0, and propose a generic ADL in the form of a UML 2.0 profile; in that work, the authors use the concept of collaborations provided by UML 2.0 to represent ADL connectors. Oquendo [12] presents the UML 2.0 profile for \( \pi \)-ADL, a novel ADL designed in the ArchWare European Project; he presents \( \pi \)-ADL and its UML 2.0 profile for formally modelling software architectures. It is expected that multiple profiles for different domains will be defined as specializations of UML 2.0 in the future.

7. Conclusion

C3 introduces the notion of architecture abstractions, which can be components, connectors, and configurations from a structural viewpoint. All abstractions are first-class citizens. The UML 2.0 profile for C3 architecture elements briefly presented in this paper provides a UML-compatible notation for modeling software architecture. This UML 2.0 profile provides an easy-to-learn and low-cost entry point for describing software architectures. However, while a connector is regarded as a first-class design element by the architecture community, it has no direct mapping in UML 2.0. Our proposal is to promote connectors to first-class architectural elements by representing them as stereotyped components. This seems to be a good option, considering that the evolution of component-based systems should provide us with an increasing number of off-the-shelf components. Representing connectors as stereotyped components gives us the extra flexibility to meet this challenge. The availability in UML 2.0 of components with ports typed by provided and required interfaces has proved to be a step forward in bridging the gap between architectural and design information.
This improves the traceability between an architectural description and its implementation, using the design as a middle layer between them. This traceability is relevant for keeping the consistency between the architecture, the design and the implementation of a software system. Our ongoing work in this field includes: 1) the implementation of this C3 Profile in a UML 2.0 environment with OCL support; 2) the extension of this profile to support advanced concepts such as behavioral aspects of C3 elements, nested configurations and architectural styles.

References
{"Source-Url": "https://hal.science/hal-00483680/document", "len_cl100k_base": 4547, "olmocr-version": "0.1.49", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 22247, "total-output-tokens": 5855, "length": "2e12", "weborganizer": {"__label__adult": 0.00032973289489746094, "__label__art_design": 0.00055694580078125, "__label__crime_law": 0.00030732154846191406, "__label__education_jobs": 0.0005970001220703125, "__label__entertainment": 5.441904067993164e-05, "__label__fashion_beauty": 0.0001246929168701172, "__label__finance_business": 0.00015854835510253906, "__label__food_dining": 0.0002894401550292969, "__label__games": 0.0004105567932128906, "__label__hardware": 0.0005364418029785156, "__label__health": 0.00037598609924316406, "__label__history": 0.0002110004425048828, "__label__home_hobbies": 6.538629531860352e-05, "__label__industrial": 0.0002970695495605469, "__label__literature": 0.0002582073211669922, "__label__politics": 0.0002300739288330078, "__label__religion": 0.00043487548828125, "__label__science_tech": 0.01053619384765625, "__label__social_life": 7.522106170654297e-05, "__label__software": 0.004833221435546875, "__label__software_dev": 0.978515625, "__label__sports_fitness": 0.00027298927307128906, "__label__transportation": 0.0003876686096191406, "__label__travel": 0.0001780986785888672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24244, 0.03469]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24244, 0.59426]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24244, 0.88815]], "google_gemma-3-12b-it_contains_pii": [[0, 970, false], [970, 5139, null], [5139, 8304, null], [8304, 12388, null], [12388, 16140, null], [16140, 17829, null], [17829, 21958, null], [21958, 24244, null]], "google_gemma-3-12b-it_is_public_document": [[0, 970, true], [970, 5139, null], [5139, 8304, null], [8304, 12388, null], [12388, 16140, null], [16140, 17829, null], [17829, 21958, null], [21958, 24244, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24244, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24244, null]], "pdf_page_numbers": [[0, 970, 1], [970, 5139, 2], [5139, 8304, 3], [8304, 12388, 4], [12388, 16140, 5], [16140, 17829, 6], [17829, 21958, 7], [21958, 24244, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24244, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
75b46c9a82011a8d9c98abc973b041e937d2e1ed
Web-based object-oriented control system design

J. C. MARTÍNEZ-GARCÍA, G. H. SALÁZAR-SILVA, R. GARRIDO
Departamento de Control Automático/Centro de Investigación y de Estudios Avanzados del IPN
E-mail: {martinez,gaston,garrido}@ctrl.cinvestav.mx

Abstract—We present in this paper some ideas concerning object-oriented control systems design. In particular, we propose a Java-based methodology to develop control system simulations on the World Wide Web, and we illustrate it with an application concerning the simulation of a two degrees-of-freedom robot manipulator.

Keywords—Object-oriented control system design, World Wide Web, Java applications, computer assisted control systems education.

I. INTRODUCTION

In this paper we propose an object-oriented methodology for control systems design, following the discussion started in [15]. We focus our proposal on the application of the World Wide Web for Automatic Control purposes. The Java application which implements our methodology is proposed as a free alternative to the commercial tools which perform similar tasks. In Section II we briefly discuss the basic concepts concerning object-oriented programming and the main characteristics of the Java programming language. We also discuss in this section how the classic block diagram paradigm of the automatic control field can easily be translated into an object-oriented methodology for control systems design. In Section III we illustrate the proposed methodology with a Java application which implements the simulated control of a two degrees-of-freedom robot manipulator. Finally, Section IV is dedicated to some concluding remarks.

II. BASIC CONCEPTS

Object-Oriented Control Systems Design (OOCSD) mimics the general idea of Object-Oriented Programming (OOP). Indeed, OOCSD is just control design based on object-oriented programming techniques. In what follows we recall some ideas concerning object-oriented programming; the interested reader can consult for instance [5], [8] and [16] for a more extensive survey on this topic.

A. Object-oriented programming and Java

Object-oriented programming (OOP) has been one of the most powerful programming paradigms of recent years. This programming idea organizes programs in ways that echo how things are put together in the real world. When we use object-oriented programming, our overall program is made up of many different self-contained components (objects), each of which has a specific role in the program and all of which can communicate with one another in predefined ways. Object-oriented programming is not limited to combining objects; it also provides many other concepts and features to create and use objects in an easier and more flexible way. One of the concepts employed in OOP is that of class. A class is a template for multiple objects with similar features, i.e., a class is a generic representation of an object (a concrete representation of an object is called an instance). In fact, when we write a program in an object-oriented language (like Java or C++), we do not define actual objects; we define classes of objects, which are generally grouped in class libraries. As far as Java is concerned, every class is generally made up of two components: properties and methods. Properties, defined by variables, are the individual characteristics that differentiate one object from another and determine its appearance, state or other qualities.
Class methods determine how the instances of the class change their internal state or react when the instance is asked to do something by another class or object. In fact, they are functions defined inside classes that operate on instances of those classes. As is common in object-oriented languages, there exist in Java some mechanisms for organizing classes and class behaviours: inheritance, interfaces and packages. The idea of inheritance is that when we write a class, we only have to specify how that class is different from some other class; inheritance gives us automatic access to the information contained in the original class. When using inheritance, all classes are arranged in a strict hierarchy. Each class has a superclass, and each class may have one or more subclasses. Classes further down in the hierarchy are said to inherit from classes further up in the hierarchy. Subclasses inherit all the methods and variables from their superclasses. At the top of the Java class hierarchy is the class Object; all classes inherit from this superclass. Object is the most general class in the hierarchy: it defines the behaviour inherited by all the classes in the Java hierarchy, and each class further down adds more information and becomes more tailored to a specific purpose. As far as interfaces and packages are concerned, both are advanced topics for implementing and designing groups of classes and interfaces. An interface is a collection of method names, without actual definitions, indicating that a class has a set of behaviours in addition to the behaviours the class gets from its superclasses. Packages in Java are a way of grouping together related classes and interfaces: packages enable modular groups of classes to be available only if they are needed, and they eliminate potential conflicts between class names in different groups of classes.

B. The World Wide Web as an Automatic Control tool

It is probably unnecessary to tell anyone reading this article about the rapid growth of the Internet, mainly due to the world-wide price reduction in personal computers and connection services. In particular, the main Internet service, the World Wide Web (which we simply call the Web in the sequel), now has a privileged place in popular culture. In fact, the Web is practically always available on today's university campus, which is very appealing for Automatic Control purposes, including education and remote experimentation (see for instance [6], [10], and [13]). Because of its accessibility, the Web is a real low-cost alternative to the expensive traditional training services based on laboratory demonstrations. Moreover, the Web as a platform for didactic purposes allows students to schedule their own learning (see for instance [12] and [14]). The Web also allows 24-hours-a-day access to virtual and real experimental facilities (see for instance [9]). Due to the high cost of real prototypes, academia nowadays enhances theoretical teaching with simulation-based demonstrations, and because of its accessibility the Web can be used as a platform to implement remote experimental facilities, with the additional advantage that some good development tools are free of charge.

C. Java and its possibilities

In order to have better insight into the possibilities that Java offers, let us briefly describe the main characteristics of this language. For more information see for instance [5] and [8].
Because Java is an object-oriented programming language designed to provide a simple, attractive interface to information on the Web, it is a natural tool for the conception of Web-based Automatic Control facilities. The Java syntax is very similar to the C++ syntax, but its execution model is completely different: C++ is compiled to the native language of the computer where compilation is performed, which is not the case for Java. As an obvious consequence, the execution time of Java programs is poor compared with the execution time of an equivalent compiled C++ program (see for instance [7]). In fact, Java is not a genuine compiled language: Java development systems simply convert Java programs into a very compact, cross-platform byte code that can be downloaded and interpreted by a Web browser (this characteristic is what makes Java a very attractive Internet development tool). Almost all the popular browsers are now Java compatible. Because of its platform-independent nature, we decided to use Java in our project. It must be pointed out that Java is a Web-oriented general-purpose programming language, not a scientific computation tool, which makes it not easy to perform engineering computations. This lack of scientific computing facilities, a consequence of its aim of universal accessibility, complicates the development of Web-based Automatic Control applications, and thus offers a number of interesting challenges.

D. The object-oriented nature of the Control problem

Broadly speaking, the control problem, as illustrated in Figure 1, pursues the modification of the Plant behaviour in order to influence its output, also called the actual output, in a desired way. This modification is attained through the action of the Controller on the Plant input; the Controller reacts to the error signal, simply called the error. The error is equal to the actual output minus the desired output, also called the reference. In terms of the object-oriented paradigm, both the controller and the plant belong to the family of dynamical systems: both are dynamical blocks interacting with their environment through their corresponding inputs and outputs. If we use the block diagram paradigm, we can say that both systems belong to the Block class. Thus both the controller and the plant can be defined as subclasses of the Block superclass. The instances corresponding to the controller and the plant classes (for a particular application) are specified by the parameters of the concrete controller and the concrete plant. In order to illustrate these ideas, we present in the next section a particular application concerning the simulation of a closed-loop control scheme including an industry-oriented controller and a well-known two degrees-of-freedom robot manipulator (see [15]). Let us remark that object-oriented programming has in fact its roots in simulation: the first object-oriented programming language, Simula, was developed to provide simulation facilities within a general-purpose programming language (see for instance [4] and [11]).

III. AN ILLUSTRATIVE EXAMPLE

With respect to Figure 2, the control of a virtual two degrees-of-freedom robot manipulator can be described as follows:

1. The Client uses a Graphic User Interface (GUI) in order to obtain a point in cartesian coordinates on the Client's computer screen. This is the point where the user wants to place the gripper of the virtual robot manipulator. The desired position of the gripper is thus generated by the Client's mouse.
2. The desired cartesian position of the gripper is transformed to the manipulator generalized coordinates, i.e., the angular positions $\alpha$ and $\beta$ of the two links $l_1$ and $l_2$, respectively. These coordinates constitute what is usually called the reference input, and they are generated by the inverse kinematic model of the robot manipulator.

3. The actual magnitudes of the angular positions of the two links, i.e., the output, are measured and compared with the reference input in order to obtain the position error.

4. The controller receives the position error and generates the control input. The parameters of the controller are specified by the Client via the GUI.

5. Finally, the virtual robot manipulator reacts to the control input in a dynamic, nonlinear way. The final output is generated by the direct kinematic model of the robot manipulator.

With this information flow in mind, we proceed to the synthesis of the Java program which implements the control of the virtual two degrees-of-freedom robot manipulator.

### A. Program synthesis

First of all, let us make some comments about the notation: **Sans serif** characters are used to indicate the name of a particular Java class, always beginning with a capital letter (consider for instance the Java class called `Block`). As far as an object is concerned, we also use **Sans serif** characters, but in this case only small letters are used; consider for instance the object called `block`. Finally, we use **Typewriter** characters to write the source code of the programs, including the data names and the member methods of the Java classes. We can now present the modules which make up our Java application.

### A.1 The robot manipulator

The idea behind the module which concerns the virtual robot manipulator is the dynamical model of a well-known 2R manipulator (see for instance [3]). There exist several approaches to model this kind of system. Because of the nonlinear nature of the robot, the state-space approach is usually considered. In this case, the dynamic behaviour of the system is described by a set of differential equations:

\[
\begin{aligned}
\dot{x}(t) &= f(x(t)) + g(x(t))\,u(t) \\
y(t) &= h(x(t)),
\end{aligned}
\tag{1}
\]

where $x \in \mathbb{R}^n$ denotes the state, $u \in \mathbb{R}^p$ denotes the control input, and $y \in \mathbb{R}^p$ denotes the output; $f$, $g$, and $h$ are real-valued nonlinear functions. In our case, this dynamical model is implemented via an Euler integrator which discretizes the dynamics (see for instance [2]).

### A.2 The Block class

Following the object-oriented approach, we define an interface for the virtual robot manipulator considering the natural functionality of a 2R manipulator. Applying the block diagram paradigm, this interface is constituted by two basic operations, i.e.:

- a) to apply a signal at the input, and
- b) to measure the signal at the output.

Extending this idea, the block corresponding to the virtual robot manipulator is built using the following components:

- To apply a signal at the input.
- To measure the signal at the output.
- To observe the state.
- To compute both the inverse and the direct kinematic models.
- To paint the kinematic chain on the graphic plane.

Since both the robot manipulator and the controller can be considered as blocks (recall the block diagram paradigm shown in Figure 1), we first define a basic Java superclass called `Block`, which is defined in terms of the basic concept of state.
This `Block` is built around the data constituted by the triplet $(x, y, u)$, i.e., the state, the output, and the input, respectively. The interface corresponding to the basic Java superclass that we are considering is shown in Listing 1 (method bodies are omitted; only the signatures are shown).

```java
package ctrl;

public abstract class Block {
    public double[] x;                    // state
    public double[] y;                    // output
    public static double t = 0.0001;      // sampling time
    Block nextBlock;                      // reference to the next Block

    public Block(int n, int m);           // constructor
    public abstract double[] dynamics(double[] u);
    public abstract double[] output();
    public void connectTo(Block newBlock);
    public boolean connectMe(Block newBlock);
    public boolean setState(double[] x0);
    public double[] getState();
    public double[] setInput(double[] u);
    public double[] getOutput();
    public String toString();
    public static void setSamplingTime(double st);
}
```

Listing 1. Block class.

Fig. 2. Main Idea.

The purpose of the dynamics method is to program the dynamics of the model using a nonlinear set of differential equations such as the one described by (1). The connectTo method links the output of a Block with the input of another Block via the reference nextBlock. The setState method assigns the value of the state vector \( x \), which is useful to provide the initial state of the Block. The getState method allows the observation of the state vector \( x \), which is useful when state feedback is being considered. The setInput method gives an input to the Block, and the getOutput method measures the value of the output vector \( y \) of the Block. Finally, the setSamplingTime method fixes the integration time \( t \).

A.3 The Robot2R class

Now, taking the Block superclass as a base, we define the Robot2R derived class, which models a robot manipulator of type 2R (see [3]). The Robot2R class adds some particular characteristics to the Block superclass, mainly the length and the mass of the links (we assume at this level that the two links have the same physical parameters). The definition of the Robot2R class is specified in Listing 2.

```java
package ctrl;

import java.awt.*;

public class Robot2R extends Block {
    int linkSize;        // length of a link
    int linkScale;
    double linkMass;     // mass of a link
    int baseX, baseY;    // coordinates of the robot base

    public Robot2R(int size, double mass, int bX, int bY);
    double[] dynamics(double[] u);
    double[] output();
    public double[] setInput(double[] u);
    public double[] getInvKyn(int xd, int yd);
    public int[] getPosition();
    public boolean isInside(int xd, int yd);
    public int[] xformToG(int xl, int yl);
    public int[] xformToL(double xl, double yl);
    public int[] xformToL(int xg, int yg);
    public int[] xformToG(double xg, double yg);
    public void draw(Graphics g);
    void drawBase(Graphics g);
    void drawWorkspace(Graphics g);
    double sin(double u);
    double cos(double u);
    double acos(double u);
    double atan2(double u, double v);
    double sqrt(double u);
}
```

Listing 2. Robot2R class.

The Robot2R constructor defines the dimensions of the state vector \( x \) and the output vector \( y \), and it also initializes the particular characteristics of a 2R manipulator. The dynamics method models the dynamics of the 2R manipulator as specified in (1).
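The Robot2R dynamics/setInput pair relies on a fixed-step Euler discretisation of the state-space model (1). The fragment below sketches that numerical scheme in Python on a generic (f, g) pair; it is only an illustration of the integrator idea, not a port of the authors' Java code, and the toy system, step size and number of steps are arbitrary choices of ours.

```python
import numpy as np

def euler_step(f, g, x, u, dt):
    """One forward-Euler step of  x' = f(x) + g(x) u  (cf. equation (1))."""
    return x + dt * (f(x) + g(x) @ u)

# Toy linear system standing in for the robot dynamics (illustrative only):
# two damped double integrators, one per joint.
f = lambda x: np.array([x[2], x[3], -0.1 * x[2], -0.1 * x[3]])
g = lambda x: np.vstack([np.zeros((2, 2)), np.eye(2)])

x = np.zeros(4)             # positions and velocities of the two joints
u = np.array([0.5, -0.2])   # constant input torques, arbitrary values
for _ in range(1000):       # 1000 steps of 1 ms, i.e. one simulated second
    x = euler_step(f, g, x, u, dt=1e-3)
```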
The state vector \( x(t) = (x_0(t), x_1(t), x_2(t), x_3(t)) \) is defined as follows:

\[
\begin{aligned}
x_0(t) &:= \theta_1(t) \\
x_1(t) &:= \theta_2(t) \\
x_2(t) &:= \dot{\theta}_1(t) \\
x_3(t) &:= \dot{\theta}_2(t),
\end{aligned}
\]

where \( \theta_i(t) \) and \( \dot{\theta}_i(t) \) denote the angular position and the angular speed of the \( i \)-th link, respectively. The model of the 2R manipulator is given by the following differential equations:

\[
\begin{bmatrix}
\dot{x}_0(t) \\ \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t)
\end{bmatrix}
= M^{-1}(x(t)) \left( -V(x(t)) - G(x(t)) - F(x(t)) + u(t) \right),
\]

where the matrix functions \( M(x(t)) \), \( V(x(t)) \) and \( G(x(t)) \) are defined as in [3], and \( F(x(t)) \) denotes the viscous friction. The output method implements the output of the 2R manipulator, i.e., it gives a real vector constituted by the angular positions of the links (\( x_0(t) \) and \( x_1(t) \)). The setInput method implements an Euler integrator. The getInvKyn method computes the inverse kinematic model of the 2R manipulator; the parameters associated to this method are the coordinates of the gripper's desired cartesian position, and the return value is a real vector of dimension 2 constituted by the computed angular positions. If the desired point is not in the workspace, a new point is built on the workspace border. The getDirKyn method computes the direct kinematic model of the 2R manipulator, receiving as inputs the two angular positions. As far as the actual position of the kinematic chain is concerned (in cartesian coordinates), it is computed by the getPosition method. This method produces a real vector \( p(t) = (p_0(t), p_1(t), p_2(t), p_3(t)) \) defined as follows:

\[
\begin{aligned}
p_0(t) &= \overline{p}_1^{\,x}(t) \\
p_1(t) &= \overline{p}_1^{\,y}(t) \\
p_2(t) &= \overline{p}_2^{\,x}(t) \\
p_3(t) &= \overline{p}_2^{\,y}(t),
\end{aligned}
\]

where \( (\overline{p}_1^{\,x}(t), \overline{p}_1^{\,y}(t)) \) and \( (\overline{p}_2^{\,x}(t), \overline{p}_2^{\,y}(t)) \) are the cartesian positions of the robot manipulator joints on the graphic plane. The isInside method verifies whether a given point is inside the manipulator workspace. The xformToG method converts the local coordinates of the manipulator to global coordinates on the graphic plane, and the xformToL method converts the global coordinates of the graphic plane to the local coordinates of the manipulator. Finally, the draw method (and all its associated methods) draws the robot manipulator and its workspace on the graphic plane. As can be seen, some transcendental functions are also included in our Java application.

The controller output is given by the following equation:

\[
u(t) = k \left( e(t) + \frac{1}{t_i} \int_0^t e(\tau)\, d\tau + t_d\, \frac{de(t)}{dt} \right),
\]

where \( e(t) \) denotes the error signal, \( k \) denotes the proportional gain, \( t_i \) denotes the integral time (also called reset), and \( t_d \) denotes the derivative time. Finally, the measureFrom method connects controlPID to the block which produces the measurement.

B. Graphic User Interface

The Graphic User Interface (GUI) is composed of two parts. The first one comprises the Display class, which implements a graphic display based on the Panel class (which is part of the standard library of Java). The graphic display allows the visualization of the virtual robot manipulator's movement. The Display class calls the draw method of Robot2R and obtains a position on the graphic plane via a click of the Client's mouse. This point is then converted into a real vector constituting the reference input.
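The PID law above, with gains k, t_i and t_d, has a standard discrete-time counterpart (rectangular rule for the integral, backward difference for the derivative). The sketch below shows that counterpart in Python; it is not the paper's ControlPID class, whose exact discretisation is not reproduced here, and the gain and sampling-time values are arbitrary.

```python
class DiscretePID:
    """u = k * (e + (1/ti) * integral(e) + td * de/dt), discretised with a
    fixed sampling time dt.  A generic sketch, not the paper's ControlPID."""

    def __init__(self, k, ti, td, dt):
        self.k, self.ti, self.td, self.dt = k, ti, td, dt
        self.acc = 0.0      # rectangular approximation of the error integral
        self.prev_e = 0.0   # previous error, for the backward difference

    def update(self, e):
        self.acc += e * self.dt
        de = (e - self.prev_e) / self.dt
        self.prev_e = e
        return self.k * (e + self.acc / self.ti + self.td * de)

pid = DiscretePID(k=2.0, ti=0.5, td=0.05, dt=1e-3)
u = pid.update(0.1)   # control action for a position error of 0.1 rad
```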
It is important to say that Display uses Java's multithreading resources in order to produce a smooth animation of the virtual plant. The second part of the GUI is constituted by an applet which connects the scrollbars with ControlPID, in order to allow the Client to adjust the PID gains. The behaviour of our Java program is illustrated for two different gripper positions in Figure 3 and Figure 4. Note that the PID gains can be modified by the Client using the mouse. Remark that the classes presented here are grouped in a package called ctrl.

IV. Concluding Remarks

A Java-based methodology concerning object-oriented control systems design has been presented in this paper. We have shown that the object-oriented paradigm can easily be applied to the synthesis of Automatic Control applications, and we illustrated it with an example concerning the simulation of a popular control scheme for a two degrees-of-freedom robot manipulator. The structure of the Java application mimics the block diagram corresponding to the closed-loop system. The derived class for any arbitrary dynamical system can be obtained from the Block class, which makes it possible to simulate a wide diversity of control schemes. We can affirm that even if Java was not conceived for developing engineering applications, its object-oriented nature and its platform independence make Java an excellent tool to develop Automatic Control applications. Indeed, Java allows the developer to easily implement several control strategies. In the previous section we illustrated this possibility by including a discrete-time PID controller (see the application at http://www.ctrl.cinvestav.mx/rws/VirtualRobot.html), which can easily be changed for a more sophisticated controller with no additional design cost. Let us remark that the Euler method we applied to discretize the nonlinear dynamics of the plant can be substituted by a Runge-Kutta method, in order to ensure better numerical properties in our Java application. We are currently developing a control library which will include more sophisticated control strategies, including real-time control. Our Web-based service will also include, in the short term, a Matlab-based tutorial.

REFERENCES
{"Source-Url": "http://www2.irccyn.ec-nantes.fr/Jfrmex/papiers_septembre/c-129.pdf", "len_cl100k_base": 5052, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 23339, "total-output-tokens": 6357, "length": "2e12", "weborganizer": {"__label__adult": 0.00044155120849609375, "__label__art_design": 0.0004973411560058594, "__label__crime_law": 0.0004119873046875, "__label__education_jobs": 0.0014324188232421875, "__label__entertainment": 7.265806198120117e-05, "__label__fashion_beauty": 0.00017571449279785156, "__label__finance_business": 0.0002715587615966797, "__label__food_dining": 0.0004487037658691406, "__label__games": 0.000820159912109375, "__label__hardware": 0.001953125, "__label__health": 0.0005707740783691406, "__label__history": 0.00029277801513671875, "__label__home_hobbies": 0.0002199411392211914, "__label__industrial": 0.0014905929565429688, "__label__literature": 0.00023806095123291016, "__label__politics": 0.00027298927307128906, "__label__religion": 0.0004730224609375, "__label__science_tech": 0.05596923828125, "__label__social_life": 9.441375732421876e-05, "__label__software": 0.006595611572265625, "__label__software_dev": 0.9248046875, "__label__sports_fitness": 0.0005068778991699219, "__label__transportation": 0.0017271041870117188, "__label__travel": 0.00022172927856445312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25665, 0.01347]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25665, 0.81529]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25665, 0.8672]], "google_gemma-3-12b-it_contains_pii": [[0, 4985, false], [4985, 10283, null], [10283, 14973, null], [14973, 19871, null], [19871, 22032, null], [22032, 25665, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4985, true], [4985, 10283, null], [10283, 14973, null], [14973, 19871, null], [19871, 22032, null], [22032, 25665, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25665, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25665, null]], "pdf_page_numbers": [[0, 4985, 1], [4985, 10283, 2], [10283, 14973, 3], [14973, 19871, 4], [19871, 22032, 5], [22032, 25665, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25665, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
37f3ffe9fcfa6cbb71d2437039247eb6a01e100d
Software Testing
Lecture 3: Coverage
Justin Pearson
2017

Approaches to testing
- **Black Box Testing**: test without looking at the code/hardware.
- **White Box Testing (clear box testing)**: test the internal structure of the software.

There is also grey box testing, where you look for test cases that cover the specification and cover some aspect of the code. It is a grey area.

It is all about coverage
- Black box testing: test by covering the specification.
- White box testing: test by covering the source code
  - Execution paths
  - Statements
  - Decision coverage
  - ...

Short version:
- Complete coverage is hard to define or impossible;
- so we have to find some approximation.

Turing's halting problem
Does this program halt?
```c
int main(void) {
    int i = 0;
    int z = 0;
    for (i = 0; i < 10; i++) {
        z = z + 1;
    }
    return 0;
}
```

Turing's halting problem
Does this program halt?
```c
int i = 0;
int z = 0;
while (1 != 0) {
    z = z + 1;
}
```

Turing's halting problem
- Can I write a program that takes *any* program and decides if it halts?
- It seems that it might be possible, but it is mathematically impossible.

Turing's halting problem: proof
- Enumerate all programs. There are infinitely many, but still a countable number\(^1\).
- Give each program a number.
- The function
  \[
  h(i, x) = \begin{cases} 1 & \text{if program } i \text{ halts on input } x \\ 0 & \text{otherwise} \end{cases}
  \]
  is not computable. That is, there is no always-halting program that implements \(h\).

\(^1\)You can put them in an infinitely long list.

Given
\[
h(i, x) = \begin{cases} 1 & \text{if program } i \text{ halts on input } x \\ 0 & \text{otherwise} \end{cases}
\]
define
\[
g(i) = \begin{cases} 0 & \text{if } h(i, i) = 0 \\ \text{loop forever} & \text{otherwise.} \end{cases}
\]
g is a program, so it has a number; let's call it \( G \). What of \( h(G, G) \)? Two possibilities:

\( h(G, G) = 1 \): then \( g \) halts on input \( G \); but \( g \) only halts (returning 0) when \( h(G, G) = 0 \), which implies \( h(G, G) = 0 \), hence a contradiction.

\( h(G, G) = 0 \): then \( g \) loops forever on input \( G \); but \( g \) only loops forever when \( h(G, G) \neq 0 \), hence again a contradiction.

This proof strategy is often referred to as a diagonal argument.

Common caveats
▶ The halting function should work on *all* programs.
▶ With finite memory and a finite number of registers, a computer is just a finite state machine.
▶ So it is possible to write a function that decides if all programs up to a given size terminate, but not very efficiently.
▶ Also, knowing that the program halts for all memory sizes up to a certain value does not necessarily tell you anything about bigger sizes.
▶ How big is big enough?

Rice's Theorem
- All interesting properties are non-computable.
- Ask yourself: is what I'm trying to do equivalent to the halting problem?
- Are all execution paths covered? If you could solve that problem, then you would solve the halting problem.
- This is the origin of "Program testing can be used to show the presence of bugs, but never to show their absence!" (Edsger Dijkstra).

Pragmatics
- Admit we cannot decide properties of all programs.
- Do our best on most programs.
- Or be content with approximations such as: definitely yes, definitely no, I have no idea.

Coverage criteria include:
▶ Function coverage — Has each function (or subroutine) in the program been called?
▶ Statement coverage — Has each statement in the program been executed?
▶ Branch coverage — Has each branch of each control structure (such as in if and case statements) been executed?
▶ Loop coverage — Have we done a representative number of iterations of all the loops?

Statement and branch coverage
- Statement coverage ⊆ branch coverage.
```c
void silly(int x) {
    int y = 0;
    if (x == 1) {
        y = 100;
    }
    twonk(y);
}
```
- The test case silly(1) covers all statements in the program, but it does not cover all branches: we never test the case when \(x \neq 1\).

Statement and branch coverage
- Even branch coverage is a blunt instrument.
```c
int silly(int x) {
    int y = 0;
    while (x >= 0) {
        y = y + x;
        x--;
    }
    return y;
}
```
- The test case `silly(1)` exercises the loop (a while loop is just a branch with a goto statement).
- But what about running the loop zero times, or lots of times?
- Halting problem again: for almost all loops you cannot decide how many times to run the loop.

Control Flow Graphs
Control flow graphs model the control structures of the program. We can use them to reason about executions and test cases.
- Nodes: statements or sequences of statements.
- Edges: transfer of control.
- Basic block: a sequence of statements with no transfer of control.

If statements
```c
if (x < y) {
    y = 0;
} else {
    x = y;
}
```

If statements
```c
if (x < y) {
    y = 0;
}
```

If return statements
```c
if (x < y) {
    return;
}
print(x);
return;
```
Note that we do not collapse the two return statements.

Loops
```c
for (i = 0; i < x; i++) {
    loop_body();
}
```
- Other program constructs are easy to do.
- Each node is only to be labelled with one basic block.
- Beware of hidden control structures (C's case statement).

Other definitions are possible.
- A path is a sequence of nodes.
- The length of a path is the number of edges. A path with only one node, and hence no edges, has length 0.
- A subpath is a sub-sequence of a path.
- Reach\((n)\): the set of nodes that can be reached via a directed path from the node \(n\).

Paths include: [0, 3, 8], [0, 4, 8], [0, 4, 9], [1, 4, 8], . . . , [4, 8], . . .

Reach\((n)\)
- Reach(3) = \{8\}, Reach(1) = \{4, 9, 8, 6, 10\}.

Two notions in program analysis:
- syntactic reach,
- semantic reach.
```c
int main(void) {
    for (int i = 0; ; i++) {
        if (f(i) == 0) {
            break;
        }
    }
    X();
}
```
`X()` is syntactically reachable, but semantically you have to infer something about `f()`.

Test paths
- A test path starts at an initial node and ends at a final node.
- Test paths represent executions of test cases.
- Some paths can be executed by many test cases.
- Some paths cannot be executed by any test case (halting problem again).

Tests and test paths
- Many to one: with deterministic software, several tests can map to the same test path, and each such test has an identical execution.
- Many to many: with non-deterministic software (you'll meet it all the time), a test can execute many test paths.

Testing and covering graphs
- **Test Requirements (TR)**: describe properties of test paths.
- **Test Criterion**: rules that define test requirements.
- **Satisfaction**: given a set TR of test requirements for a criterion C, a set of tests T satisfies C on a graph if and only if for every test requirement in TR, there is a test path in \( \text{path}(T) \) that meets the test requirement.

General idea in testing: define your test requirements separately from the test cases. Reformulate your requirements into test criteria and then try to find test paths that satisfy your test criteria.

Node coverage — statement coverage
- Node Coverage (NC): a test set \(T\) satisfies node coverage on graph \(G\) iff for every syntactically reachable node \(n\) in \(N\), there is some path \(p\) in path\((T)\) such that \(p\) visits \(n\).
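Once test paths have been recorded, checking node coverage (and, as defined next, edge coverage) is purely mechanical. The Python sketch below computes both sets of test requirements from an edge list and reports which ones a given set of test paths leaves unsatisfied; the graph encoding and node numbering are ours.

```python
def uncovered(nodes, edges, test_paths):
    """Return the node- and edge-coverage requirements not met by test_paths."""
    node_tr = set(nodes)                       # TR for node coverage
    edge_tr = set(edges)                       # TR for edge coverage
    visited_nodes = {n for p in test_paths for n in p}
    visited_edges = {(p[i], p[i + 1]) for p in test_paths for i in range(len(p) - 1)}
    return node_tr - visited_nodes, edge_tr - visited_edges

# A three-node branch: node 0 goes to 1 or 2, and 1 goes to 2.
nodes, edges = {0, 1, 2}, {(0, 1), (0, 2), (1, 2)}
print(uncovered(nodes, edges, [[0, 1, 2]]))           # nodes covered, edge (0, 2) missed
print(uncovered(nodes, edges, [[0, 1, 2], [0, 2]]))   # both requirement sets empty
```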
Edge coverage — branch coverage
- Edge Coverage (EC): TR contains each reachable path of length up to 1, inclusive, in \(G\).
- Is there any difference between node and edge coverage?

Difference between node and edge coverage
- **Node coverage**
  - Test requirements: TR = \{0, 1, 2\}.
  - Test path: [0, 1, 2].
- **Edge coverage**
  - Test requirements: TR = \{(0, 1), (0, 2), (1, 2)\}.
  - Test paths: [0, 1, 2], [0, 2].

Complete path coverage
- Require that all paths are covered. Often there are too many paths, so various approximations are used.
- Require that all paths up to length \(k\) are covered:
  - \(k = 0\): node coverage.
  - \(k = 1\): edge coverage.
  - \(k = 2\): edge-pair coverage.

Structural coverage example
Node coverage: \(TR = \{0, 1, 2, 3, 4, 5, 6\}\), test paths \(= \{[0, 1, 2, 3, 6], [0, 1, 2, 4, 5, 4, 6]\}\).
Edge coverage: \(TR = \{(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 6), (4, 5), (4, 6), (5, 4)\}\).
Complete path coverage: test paths \([0, 1, 2, 3, 6], [0, 1, 2, 4, 6], [0, 1, 2, 4, 5, 4, 6], [0, 1, 2, 4, 5, 4, 5, 4, 6]\), etc.

Structural coverage example
Edge-pair coverage: \( TR = \{[0, 1, 2], [0, 2, 3], [0, 2, 4], [1, 2, 3], [1, 2, 4], [2, 3, 6], [2, 4, 5], [2, 4, 6], [4, 5, 4], [5, 4, 5], [5, 4, 6]\} \).
Test paths:
- \([0, 1, 2, 3, 6], [0, 1, 2, 4, 6], [0, 2, 3, 6]\)
- \([0, 2, 4, 5, 4, 5, 4, 6]\).

Loops
There is a lot of theory; most of it is unsatisfactory.
- Don't be content with branch coverage.
- Look at your loops.
- Try to get them to execute zero times, once, and many times.

Loops
- If a graph contains a loop then it has an infinite number of paths.
- Thus you cannot ask for complete path coverage.
- Attempts to deal with loops:
  - 1970s: execute cycles once ([4, 5, 4] in the previous example; informal).
  - 1980s: execute each loop exactly once (formalised).
  - 1990s: execute loops 0 times, once, more than once (informal description).
  - 2000s: prime paths.

Simple and prime paths
- A path is simple if no node appears more than once, except possibly that the first and last node may be the same.
- A prime path of a graph is a simple path that is not a sub-path of any other simple path.

Simple paths
- [0], [1], [2], [3]
- [0, 1], [0, 2], [1, 3], [2, 3], [3, 0]
- [0, 1, 3], [0, 2, 3], [1, 3, 0], [2, 3, 0], [3, 0, 1]
- [0, 1, 3, 0], [0, 2, 3, 0], [1, 3, 0, 1], [2, 3, 0, 2], [3, 0, 1, 3], [3, 0, 2, 3], [1, 3, 0, 2], [2, 3, 0, 1].

Prime paths
Remove all simple paths that can be extended (in either direction) to a longer simple path.
- [0], [1], [2], [3]
- [0, 1], [0, 2], [1, 3], [2, 3], [3, 0]
- [0, 1, 3], [0, 2, 3], [1, 3, 0], [2, 3, 0], [3, 0, 1]
- [0, 1, 3, 0], [0, 2, 3, 0], [1, 3, 0, 1], [2, 3, 0, 2], [3, 0, 1, 3], [3, 0, 2, 3], [1, 3, 0, 2], [2, 3, 0, 1].

In this case the prime paths are all the longest simple paths. This is not always the case.

Simple paths
Enumerate all simple paths of length 1, 2, 3, … then remove the simple paths that can be extended. You will be left with the prime paths.
- [1], [2], [3], [4]
- [1, 2], [2, 3], [2, 4], [3, 2]
- [1, 2, 3], [1, 2, 4], [2, 3, 2]
- We have to be careful about the paths of length 4: [1, 2, 3, 2] is not a simple path, since node 2 is repeated and the repetition is not of the form "first node = last node".
- In fact there are no simple paths of length 4 in this graph.

Prime paths
Enumerate all simple paths of length 1, 2, 3, … then remove the simple paths that can be extended. You will be left with the prime paths.
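The enumeration recipe just described (list the simple paths by extending shorter ones, then drop any that are subpaths of longer simple paths) can be written down directly. The brute-force Python sketch below is adequate for graphs of the size used in these slides; the graph encoding is ours.

```python
def simple_paths(nodes, edges):
    """All simple paths: no repeated nodes, except that the first and last
    node may coincide (a cycle)."""
    succ = {n: [] for n in nodes}
    for a, b in edges:
        succ[a].append(b)
    found = set()

    def extend(path):
        found.add(tuple(path))
        for nxt in succ[path[-1]]:
            if nxt not in path:          # still simple: keep extending
                extend(path + [nxt])
            elif nxt == path[0]:         # closes a cycle: record and stop
                found.add(tuple(path + [nxt]))

    for n in nodes:
        extend([n])
    return found

def prime_paths(nodes, edges):
    """Simple paths that are not proper subpaths of any other simple path."""
    sp = simple_paths(nodes, edges)

    def is_subpath(p, q):
        return p != q and any(q[i:i + len(p)] == p
                              for i in range(len(q) - len(p) + 1))

    return sorted(p for p in sp if not any(is_subpath(p, q) for q in sp))

# The three-node branch used earlier: prime paths are (0, 1, 2) and (0, 2).
print(prime_paths({0, 1, 2}, {(0, 1), (0, 2), (1, 2)}))
```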
[Figure: control-flow graph of the loop, with nodes 1–4 and the statements x = 0, the loop test, the loop body and i++.]

- \([1, 2, 3], [1, 2, 4], [2, 3, 2]\)

Prime paths to test paths
- \([1, 2, 3] \rightarrow [1, 2, 3, 4]\): execute the loop once.
- \([1, 2, 4] \rightarrow [1, 2, 4]\): execute the loop zero times.
- \([2, 3, 2] \rightarrow [1, 2, 3, 2, 4]\): execute the loop more than once.

Simple paths
- \([1, 2, 3, 4, 5, 6, 7]\)
- \([1, 2, 3, 4, 5, 6, 3, 6, 3]\)
- \([1, 2, 3, 4, 5, 6, 3, 4]\)
- \([1, 2, 3, 4, 5, 6, 3, 4, 5]\)
- \([1, 2, 3, 4, 5, 6, 3, 4, 6]\)
- \([1, 2, 3, 4, 5, 6, 3, 4, 5]\)
- \([1, 2, 3, 4, 5, 6, 3, 5, 6, 3]\)
- \([1, 2, 3, 4, 5, 6, 3, 4, 5, 6, 3]\)

Prime paths
1 → 2 → 3 → 4 → 5 → 6 → 7
- [1], [2], [3], [4], [5], [6], [7], [8]
- [1, 2], [2, 3], [3, 4], [3, 7], [4, 5], [4, 6], [5, 6], [6, 3], [6, 3], [6, 4]
- [1, 2, 3], [2, 3, 4], [2, 3, 7], [3, 4, 5], [3, 4, 6], [4, 5, 6], [4, 6, 3], [4, 6, 3], [5, 6, 3], [5, 6, 3], [6, 3, 4]
- [1, 2, 3, 4], [1, 2, 3, 7], [2, 3, 4, 5], [2, 3, 4, 6], [3, 4, 5, 6], [3, 4, 6, 3], [4, 5, 6, 3], [5, 6, 3, 4], [6, 3, 4, 5], [4, 6, 3, 4], [5, 6, 3, 4]
- [1, 2, 3, 4, 5], [1, 2, 3, 4, 6], [2, 3, 4, 5, 6], [3, 4, 5, 6, 3]
- [1, 2, 3, 4, 5, 6]

Prime paths
- \([1, 2, 3, 7] \rightarrow [1, 2, 3, 7]\): do the loop zero times.
- \([3, 4, 6, 3] \rightarrow [1, 2, 3, 4, 6, 3, 7]\): do the loop once and do not take the if.
- \([6, 3, 4, 5] \rightarrow [1, 2, 3, 4, 6, 3, 4, 5, 6, 3, 7]\): do the loop twice, once with the if and once without.
- \([4, 6, 3, 4] \rightarrow [1, 2, 3, 4, 6, 3, 4, 6, 3, 7]\): do the loop twice, both times without taking the if.
- \([5, 6, 3, 4] \rightarrow [1, 2, 3, 4, 5, 6, 3, 4, 6, 3, 7]\): do the loop twice, taking the if once and then not taking it (the other way around from the previous case).
- \([3, 4, 5, 6, 3] \rightarrow [1, 2, 3, 4, 5, 6, 3, 7]\)

Prime paths: summary
- Prime paths give you a good way of deriving a set of test cases that cover various combinations of loops and branches.
- There is no formal guarantee about completeness. As in all testing, it just formalises a good compromise.

Model, define, and approximate
- Model what you want to test.
- Define coverage criteria.
- If the coverage criterion is undecidable or requires too many test cases, then approximate.

Separate test requirements and test cases
- Have a reason for a test.
- Test requirements are the reasons for tests.
- You need to find test cases satisfying the test requirements.

Example
```c
#include <string.h>

int count_spaces(char* str) {
    int length, i, count;
    count = 0;
    length = strlen(str);
    for (i = 1; i < length; i++) {
        if (str[i] == ' ')
            count++;
    }
    return count;
}
```

First divide into basic blocks
```c
int count = 0;
length = strlen(str);
for (i = 1; i < length; i++)
    if (str[i] == ' ')
        count++;
return count;
```
[Figure: the same function divided into basic blocks and drawn as a control-flow graph with nodes 1–7.]

Test path
- Remember: a test path is a path that starts at an entry node and leaves at an exit node.

Node coverage
[Figure: control-flow graph of count_spaces, nodes 1–7.]
TR = \{1, 2, 3, 4, 5, 6, 7\}; the test path is [1, 2, 3, 4, 5, 6, 3, 7].

Grey box testing
- Our test path [1, 2, 3, 4, 5, 6, 3, 7] requires the loop to execute exactly once and to detect one space. So we might try the test case (" ", 1), but this won't work: don't forget that the loop starts at i = 1, so str[0] is never examined.
- Instead we have to use the test case ("H ", 1).
- By thinking about what the code should do, and trying to construct a test case corresponding to a path, we have uncovered a fault.
Edge coverage
[Figure: control-flow graph of count_spaces, nodes 1–7.]
Test paths are
- \([1, 2, 3, 4, 5, 6, 3, 7]\),
- \([1, 2, 3, 4, 6, 3, 7]\),
- \([1, 2, 3, 7]\).

Test cases
- [1, 2, 3, 4, 5, 6, 3, 7] (" ", 1)
- [1, 2, 3, 4, 6, 7] ("H", 0)
- [1, 2, 3, 7] (" ", 0)

Relaxing test cases
- As we have seen, sometimes we have infeasible test cases.
- This could be because there is a fault.
- Or because we have to do other things to get to the code: there might be a bit of setup code that we have to call first that is not in our path.
- Earlier we introduced the notion of a path touring another path.
- A path \(p\) tours the path \(s\) if \(s\) is a subpath (contiguous sub-sequence) of \(p\).
- [1, 2, 3, 4, 6, 3, 4, 6, 3, 7] tours the test path [1, 2, 3, 4, 6, 3]; it also tours many other paths, including [4, 6, 3, 7].
- Don't forget the difference between a test path and a path.

Relaxing test cases
- A test path \(p\) is said to *tour* sub-path \(q\) with *side-trips* if every edge that is in \(q\) is also in \(p\), in the same order.
- A test path \(p\) is said to *tour* sub-path \(q\) with *detours* if every node that is in \(q\) is also in \(p\), in the same order.

The path [0, 1, 2, 3, 2, 4, 5] tours the path [0, 1, 2, 4, 5] with side trips.
The path [0, 1, 2, 3, 4, 5] tours the path [0, 1, 2, 4, 5] with detours.

Infeasible test requirements
- An infeasible test requirement *cannot be satisfied*:
  - an unreachable statement (dead code);
  - code that can only be executed if a contradiction occurs, e.g. \(X > 0 \land X < 0\).
- Always check against the specification; it could be a fault.

Infeasible test requirements
- Most test criteria have some infeasible test requirements.
- It is usually undecidable whether all test requirements are feasible (halting problem again).
- Allowing side trips might weaken the test cases, but it allows more feasible test cases.
- Practical recommendation: best-effort touring. Tour as many test requirements as possible without side-trips, and only allow side-trips for infeasible test paths.
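The three touring notions differ only in how much of q has to reappear in p: the whole of q contiguously, every edge of q in order, or merely every node of q in order. The Python sketch below implements the three checks over node sequences and reproduces the side-trip example above; it is our own restatement of the definitions, not code from the course.

```python
def tours(p, q):
    """Plain tour: q occurs in p as a contiguous subsequence."""
    return any(list(p[i:i + len(q)]) == list(q)
               for i in range(len(p) - len(q) + 1))

def _in_order(items, seq):
    """True if all items occur in seq in the given order (not necessarily adjacent)."""
    pos = 0
    for item in items:
        try:
            pos = seq.index(item, pos) + 1
        except ValueError:
            return False
    return True

def tours_with_sidetrips(p, q):
    """Every edge of q appears in p, in the same order."""
    return _in_order(list(zip(q, q[1:])), list(zip(p, p[1:])))

def tours_with_detours(p, q):
    """Every node of q appears in p, in the same order."""
    return _in_order(list(q), list(p))

p, q = [0, 1, 2, 3, 2, 4, 5], [0, 1, 2, 4, 5]
print(tours(p, q), tours_with_sidetrips(p, q), tours_with_detours(p, q))
# prints: False True True
```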
{"Source-Url": "http://user.it.uu.se/~justin/Teaching/Testing/Slides/lecture3.pdf", "len_cl100k_base": 5907, "olmocr-version": "0.1.53", "pdf-total-pages": 61, "total-fallback-pages": 0, "total-input-tokens": 94256, "total-output-tokens": 8303, "length": "2e12", "weborganizer": {"__label__adult": 0.0003688335418701172, "__label__art_design": 0.0002589225769042969, "__label__crime_law": 0.0003578662872314453, "__label__education_jobs": 0.0012636184692382812, "__label__entertainment": 5.805492401123047e-05, "__label__fashion_beauty": 0.00013327598571777344, "__label__finance_business": 0.00011467933654785156, "__label__food_dining": 0.0004572868347167969, "__label__games": 0.0011138916015625, "__label__hardware": 0.0007314682006835938, "__label__health": 0.00035572052001953125, "__label__history": 0.00016641616821289062, "__label__home_hobbies": 9.399652481079102e-05, "__label__industrial": 0.00028324127197265625, "__label__literature": 0.0003159046173095703, "__label__politics": 0.00020968914031982425, "__label__religion": 0.0004286766052246094, "__label__science_tech": 0.004177093505859375, "__label__social_life": 8.279085159301758e-05, "__label__software": 0.00357818603515625, "__label__software_dev": 0.984375, "__label__sports_fitness": 0.0003712177276611328, "__label__transportation": 0.0004291534423828125, "__label__travel": 0.00017642974853515625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16290, 0.04507]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16290, 0.60658]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16290, 0.83153]], "google_gemma-3-12b-it_contains_pii": [[0, 58, false], [58, 382, null], [382, 690, null], [690, 847, null], [847, 963, null], [963, 1134, null], [1134, 1555, null], [1555, 2220, null], [2220, 2661, null], [2661, 3053, null], [3053, 3253, null], [3253, 3637, null], [3637, 3950, null], [3950, 4400, null], [4400, 4689, null], [4689, 4759, null], [4759, 4831, null], [4831, 4967, null], [4967, 5053, null], [5053, 5211, null], [5211, 5507, null], [5507, 5588, null], [5588, 5651, null], [5651, 5918, null], [5918, 6172, null], [6172, 6378, null], [6378, 6974, null], [6974, 7201, null], [7201, 7386, null], [7386, 7638, null], [7638, 7904, null], [7904, 8271, null], [8271, 8552, null], [8552, 8742, null], [8742, 9131, null], [9131, 9362, null], [9362, 9600, null], [9600, 10020, null], [10020, 10465, null], [10465, 10754, null], [10754, 11004, null], [11004, 11274, null], [11274, 11803, null], [11803, 12443, null], [12443, 12693, null], [12693, 12872, null], [12872, 13043, null], [13043, 13243, null], [13243, 13401, null], [13401, 13530, null], [13530, 13631, null], [13631, 13834, null], [13834, 14243, null], [14243, 14472, null], [14472, 14574, null], [14574, 15173, null], [15173, 15449, null], [15449, 15528, null], [15528, 15608, null], [15608, 15868, null], [15868, 16290, null]], "google_gemma-3-12b-it_is_public_document": [[0, 58, true], [58, 382, null], [382, 690, null], [690, 847, null], [847, 963, null], [963, 1134, null], [1134, 1555, null], [1555, 2220, null], [2220, 2661, null], [2661, 3053, null], [3053, 3253, null], [3253, 3637, null], [3637, 3950, null], [3950, 4400, null], [4400, 4689, null], [4689, 4759, null], [4759, 4831, null], [4831, 4967, null], [4967, 5053, null], [5053, 5211, null], [5211, 5507, null], [5507, 5588, null], [5588, 5651, null], [5651, 5918, null], [5918, 6172, null], [6172, 6378, null], [6378, 
6974, null], [6974, 7201, null], [7201, 7386, null], [7386, 7638, null], [7638, 7904, null], [7904, 8271, null], [8271, 8552, null], [8552, 8742, null], [8742, 9131, null], [9131, 9362, null], [9362, 9600, null], [9600, 10020, null], [10020, 10465, null], [10465, 10754, null], [10754, 11004, null], [11004, 11274, null], [11274, 11803, null], [11803, 12443, null], [12443, 12693, null], [12693, 12872, null], [12872, 13043, null], [13043, 13243, null], [13243, 13401, null], [13401, 13530, null], [13530, 13631, null], [13631, 13834, null], [13834, 14243, null], [14243, 14472, null], [14472, 14574, null], [14574, 15173, null], [15173, 15449, null], [15449, 15528, null], [15528, 15608, null], [15608, 15868, null], [15868, 16290, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16290, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16290, null]], "pdf_page_numbers": [[0, 58, 1], [58, 382, 2], [382, 690, 3], [690, 847, 4], [847, 963, 5], [963, 1134, 6], [1134, 1555, 7], [1555, 2220, 8], [2220, 2661, 9], [2661, 3053, 10], [3053, 3253, 11], [3253, 3637, 12], [3637, 3950, 13], [3950, 4400, 14], [4400, 4689, 15], [4689, 4759, 16], [4759, 4831, 17], [4831, 4967, 18], [4967, 5053, 19], [5053, 5211, 20], [5211, 5507, 21], [5507, 5588, 22], [5588, 5651, 23], [5651, 5918, 24], [5918, 6172, 25], [6172, 6378, 26], [6378, 6974, 27], [6974, 7201, 28], [7201, 7386, 29], [7386, 7638, 30], [7638, 7904, 31], [7904, 8271, 32], [8271, 8552, 33], [8552, 8742, 34], [8742, 9131, 35], [9131, 9362, 36], [9362, 9600, 37], [9600, 10020, 38], [10020, 10465, 39], [10465, 10754, 40], [10754, 11004, 41], [11004, 11274, 42], [11274, 11803, 43], [11803, 12443, 44], [12443, 12693, 45], [12693, 12872, 46], [12872, 13043, 47], [13043, 13243, 48], [13243, 13401, 49], [13401, 13530, 50], [13530, 13631, 51], [13631, 13834, 52], [13834, 14243, 53], [14243, 14472, 54], [14472, 14574, 55], [14574, 15173, 56], [15173, 15449, 57], [15449, 15528, 58], [15528, 15608, 59], [15608, 15868, 60], [15868, 16290, 61]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16290, 0.0]]}
olmocr_science_pdfs
2024-12-12
2024-12-12
86fe03875d19c2be84d4f21320260d74f6848c26
Microcontrollers Programming Framework based on a V-like Programming Language

Fernando Martínez Santa¹, Santiago Orjuela Rivera², Fredy H. Martínez Sarmiento³
Universidad Distrital Francisco José de Caldas, Bogotá, Colombia¹,³
Corporación Nacional Unificada de Educación Superior CUN, Bogotá, Colombia²

Abstract—This paper describes the design of a programming framework for microcontrollers, especially the ones with low program and data memory, using as a base a programming language with modern features. The proposed programming framework is named the Aixt Project and takes inspiration from other similar projects such as Arduino, Micropython and TinyGo, among others. The project's name is inspired by the weasel pet of the V language and by the Ticuna people, who live in the Amazon rain-forest between Colombia, Perú and Brasil; Aixt comes from Aitü or Aitü rü, which means otter in the Ticuna language. The proposed programming framework has three main components: the Aixt language based on the V syntax, a transpiler that turns the defined V-like source code into C, and a generic cross-platform Application Programming Interface (API). The goal of this project is to obtain a cross-platform programming framework built on the same modern language and the same API for programming different microcontrollers, especially the ones with low memory resources. The Aixt language is based on the syntax of the V programming language but uses mutable variables by default. V was selected as the base of this project because it is a new compiled programming language with interesting modern features. In order to turn the Aixt source code into C, a transpiler is implemented using Python and some specialized libraries to build each part of its translation process. The transpiled code is compiled by the native C compiler of each microcontroller to obtain the final binary file, which is why the API has to be adapted to each native C compiler. The complete project is released as free and open source. Finally, different application tests were carried out with the XC8 and XC16 compilers for the PIC16, PIC18, PIC24 and dsPIC33 microcontroller families, demonstrating the correct operation of the overall framework. Those tests show that using a modern-language framework to program such microcontrollers is perfectly feasible with the proposed approach.

Keywords—Microcontroller; transpiler; API; programming language; V; V-lang; Aixt project

I. INTRODUCTION

The different processor architectures used by commercial microcontrollers make the programming process dependent on those architectures and thus not universal. Even when microcontrollers are programmed in high-level languages, tasks such as configuring peripherals, timers and setup registers keep depending on the programmer's knowledge of the processor's architecture [1], [2]. There are different projects which aim to provide cross-platform programming frameworks [3], using programming languages like JavaScript [4], and other implementations using virtual machines [5], [6], [7]. An example of those programming frameworks (and one of the most popular) is Arduino [8], [9], [10], which is based on the C language plus an API that makes the programming process easier. That API relies on a predefined hardware setup to reduce the configuration work required from the programmer. Another popular programming framework for microcontrollers is Micropython, which implements a subset of the Python language on several devices.
Micropython has relatively high memory requirements, which makes it impossible to run on small microcontrollers, but it has been ported to a large number of different architectures [11], mainly in Internet of Things (IoT) implementations. Arduino is compiled, but its C syntax lacks modern features; Micropython, on the other hand, is interpreted and therefore not as time-optimized as a compiled language. There is an intermediate framework named TinyGo, which implements the Go language on microcontrollers, offering modern features like Python and the advantage of being compiled [12], like Arduino (C). However, most microcontrollers with limited memory do not meet the memory requirements of the projects previously described, so for those it is necessary to use their native C compiler. In order to obtain the best execution times and the best code optimization level [13], [3], it is necessary to use the native C compiler of each architecture. Thus, if a programming framework has an upper modern-language layer, a transpiler to C, and the native C compiler as part of the framework, it can offer high-level language features along with optimization levels similar to the ones reached with the native compilers alone. The described programming framework needs to have a transpiler [14], which is a translator from the upper-layer language to the native C [15]. Transpilers are widely used nowadays [16], [17], for several languages, both compiled and interpreted [18], [19], [20], and even for languages based on virtual machines [21]. Those transpilers are mainly used to reuse source code that comes from a different language [18], or to improve the execution time or another performance feature of a program [22], [23] by changing the platform or language (for instance turning Python, which is interpreted, into Rust, which is compiled [19]), and even to translate source code to gate-based hardware [24] such as FPGAs or other processor-less devices.

Several new programming languages have emerged recently, mainly to solve some of the issues of the traditional ones, such as safety and memory management, among others. Among these new languages are Go, Swift, Dart, F# and Rust, the last being one of the most preferred [25] and even having implementations on microcontrollers [26], [27], [28]. There are some other languages such as Peregrine, which is based on Python's syntax, and the V programming language [29], which is inspired by Rust and other languages. V is a statically-typed programming language with several modern features that make development easy, and it has a gentler learning curve than other modern languages like Rust.

This paper proposes a programming framework for microcontrollers composed of a high-level language based on V as the main language, a transpiler from this V-like language (named Aixt) to C, and the microcontroller's native C compiler, which finally generates the output binary file. In order to generalize programs across the different microcontrollers, a general API is designed and implemented for each C compiler of the supported devices (in this first stage, the XC8 and XC16 compilers). For the transpiler implementation, Python and the SLY module were used to write the lexical analyzer and the parser. This project is based on a previous one named Sokae [30], developed by the same authors.
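As a toy illustration of what source-to-source translation means here (and nothing more, since the real Aixt transpiler is grammar-driven rather than pattern-driven), the fragment below rewrites Aixt's bare infinite loop `for {` into its C equivalent `while (true) {`. The function name and the regex are ours; the API call in the sample input comes from Listing 1.

```python
import re

def toy_transpile(aixt_src: str) -> str:
    """Rewrite Aixt's bare infinite loop `for {` into C's `while (true) {`.
    A deliberately tiny stand-in for the real, parser-based transpiler."""
    return re.sub(r'\bfor\s*\{', 'while (true) {', aixt_src)

print(toy_transpile("for {\n    pin_high(A6)\n    sleep_ms(500)\n}"))
# prints the same block with the header replaced by "while (true) {"
```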
The paper is organized as follows: Section II presents the methodology for implementing the overall proposed programming framework, including the Aixt language definition (Section II-A), the Python implementation of the Aixt-to-C transpiler (Section II-B), and the API implementation for the XC16 compiler and the PIC24 microcontroller family (Section II-C). Section III demonstrates the Aixt language by means of several examples and presents the results of exercising the proposed programming framework with several test source codes. Finally, Section IV draws the conclusions of this research, including possible future work.

II. METHODOLOGY

Under the name Aixt Project, a microcontroller programming framework is implemented. This framework uses a homonymous language which is based on the V programming language. A transpiler from the Aixt language to C is the most important block of this framework, together with an Application Programming Interface (API) written in both languages. As part of the proposed structure, the native C compiler of the specific microcontroller finally generates the output binary file, as shown in Fig. 1. Using the proposed framework, users are able to write source code in the Aixt language using a standard API and obtain the binary file for a specific microcontroller or board without further knowledge of its architecture. The framework is intended to be highly modular, so that other microcontrollers or boards can be included with relatively little effort. Fig. 1 shows the general structure of the programming framework, indicating that for each new microcontroller to be supported it is necessary to adapt the API to it (Fig. 1, right) and to invoke its specific native C compiler (Fig. 1, bottom left). The specific tests done for this paper were implemented on several Microchip® microcontroller families, namely PIC16, PIC18, PIC24 and dsPIC33, using the XC8 and XC16 compilers; these microcontrollers were selected because of their limited amount of memory.

A. Aixt Language

Aixt is the name given to the proposed language and to the overall programming framework. The language is based on the V programming language [29] and shares most of its syntax. Due to its relatively short learning curve, V was selected for this implementation instead of other new languages like Rust [30]. The framework and language name is inspired by the weasel mascot of the V language and is, at the same time, a tribute to the Ticuna people, who live in the Amazon rain-forest on the borders between Colombia, Brasil and Perú. Weasels are mustelids just like otters, so the name Aixt comes from Aitítu or Aitú rúí, which is a way to say otter in the Ticuna language. Aixt is a compiled and statically typed programming language based on the V syntax. It is designed to be usable on a wide range of microcontrollers regardless of their memory limitations. Aixt shares some syntax features with languages such as Rust and Go, and therefore also with C, which makes Aixt easy to understand and to transpile. Listing 1 shows an example written with the Aixt language and API, which blinks an LED on a specific microcontroller pin. Likewise, Listing 2 shows the C equivalent of the same Aixt source code. Some of the basic features of the Aixt language are the following:

- The := operator is used for declaring variables.
- Unlike V, variables are mutable by default in Aixt.
- ix, ux and fx variable types (the i8/i16/..., u8/u16/... and f32/f64 families) for signed integers, unsigned integers and floating-point variables.
- isize and usize for integers with the same size as the processor word.
- The rune type for character variables.
- Type inference by default in declarations.
- Underscore characters in numeric literals to improve their readability.
- The main function is the entry point of a program. When there is only one source file, the main function definition can be omitted.
- Every instruction ends with a newline character, a semicolon or a closing curly brace.
- The semicolon is optional; it has to be used only when two simple instructions share the same code line.
- All code blocks are delimited by curly braces.
- All function declarations start with the reserved word fn.
- The names of all identifiers (variables, constants, functions, etc.) preferably use snake case as in V, for instance the function pin_low(). This convention keeps a standard format across all the source code.
- There is a single loop instruction, for, which implements all the supported loop kinds by changing only the syntax of its parameters.
- The reserved word import is used for including complete modules or libraries.
- In order to reduce the size of the obtained C code, it is possible to include individual components of a module using curly braces, following the syntax: import module { comp1, comp2, ... }

Listing 1: Blinking LED example in Aixt
```v
import machine { pin }
import time { sleep_ms }

pin(A6, OUT)
for {
    pin_high(A6)
    sleep_ms(500)
    pin_low(A6)
    sleep_ms(500)
}
```

Listing 2: Resultant C code for the Blinking LED example
```c
#include "./settings.h"
#include "./machine/pin.h"
#include "./time/sleep_ms.h"

int main(void) {
    pin(A6, OUT);
    while(true) {
        pin_high(A6);
        sleep_ms(500);
        pin_low(A6);
        sleep_ms(500);
    }
    return 0;
}
```

B. Transpiler

A transpiler is a program that translates source code between programming languages at the same abstraction level; by contrast, a compiler generally translates source code into a lower-level language. The proposed programming framework does not compile the Aixt source code directly but transpiles it to C. The transpiler from the Aixt language to C is implemented in Python, using the SLY module to implement the lexical analyzer and the parser for the input source code. The complete working diagram of the implemented transpiler is shown in Fig. 2, where an input file with the .v extension enters the transpiler and the output .c file is generated. The transpiler implementation is based on part of the V language grammar; Listing 3 shows an extract of that grammar in Backus-Naur form (BNF). This part of the grammar shows the definition of the four different ways of writing loops in Aixt using the reserved word for, including infinite loops.

```
forStmt   ::= for block
            | for expr block
            | for forClause block
            | for inClause block
forClause ::= simpStmt ; expr ; simpStmt
inClause  ::= exprList in IDENTIFIER
```
Listing 3: Aixt language BNF definition (extract)

For the implementation of the lexical analyzer, all of the tokens of the V language are supported, such as keywords, operators and other punctuation symbols, as shown in the code extract of Listing 4.
```
tokens = {
    I8, I16, I32, I64, ISIZE,
    F32, F64, BOOL, RUNE,
    IMPORT, IN, MAP, MATCH, RETURN,
}
```
Listing 4: Aixt Lexer implementation (extract)

Once the lexical analyzer has reduced the character stream of the source code to a token stream, the parser checks the syntactic rules of the language in order to find possible syntax errors and to transpile the code to C. Most of the syntactic rules of V are implemented in Aixt using the SLY module, as shown in the source code extract of Listing 5, which matches the BNF definition shown in Listing 6.

Listing 5: Aixt Parser implementation (extract)
```python
@_(
    'identList DECL ASGN exprList',
)
def varDecl(self, p):
    ...
    return ret_value

@_(
    'IDENTIFIER',
    'identList "," IDENTIFIER'
)
def identList(self, p):
    ...
    return p[0]
```

Listing 6: Aixt BNF rules (extract)
```
varDecl   ::= identList DECL ASGN exprList
identList ::= IDENTIFIER
            | identList "," IDENTIFIER
```

The SLY library uses Python's function decorators to implement the syntactic rules of the language to be compiled or transpiled, applying one decorator to each syntactic production; for example, the production varDecl implements variable declarations in the Aixt language. As previously said, the transpiler reads source code written in Aixt, which uses the .v file extension for compatibility with standard source code editors.

### C. Application Programming Interface

One of the main goals of the proposed framework is to design a cross-platform API that covers the basic features and peripherals of most microcontrollers. In order to make the microcontroller programming process easier, a general Application Programming Interface is implemented both in the Aixt programming language and in C for each specific native compiler. This API includes the peripherals and features shown in Tables I to IV.

### TABLE I. DIGITAL PINS (GPIO)

<table>
<thead>
<tr>
<th>Description</th>
<th>Function name</th>
</tr>
</thead>
<tbody>
<tr>
<td>pin type declaration</td>
<td>pin()</td>
</tr>
<tr>
<td>setting high and low</td>
<td>pin_low() pin_high()</td>
</tr>
<tr>
<td>setting specific binary value</td>
<td>pin_value()</td>
</tr>
<tr>
<td>reading an input value</td>
<td>pin_value()</td>
</tr>
</tbody>
</table>

### TABLE II. ANALOG TO DIGITAL CONVERTER (ADC)

<table>
<thead>
<tr>
<th>Description</th>
<th>Function name</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADC setting up</td>
<td>adc()</td>
</tr>
<tr>
<td>ADC reading value</td>
<td>adc_read()</td>
</tr>
</tbody>
</table>

### TABLE III. UNIVERSAL ASYNCHRONOUS RECEIVER TRANSMITTER (UART)

<table>
<thead>
<tr>
<th>Description</th>
<th>Function name</th>
</tr>
</thead>
<tbody>
<tr>
<td>UART setting up</td>
<td>uartx()</td>
</tr>
<tr>
<td>single byte transmitting</td>
<td>uartx_put()</td>
</tr>
<tr>
<td>single byte receiving</td>
<td>uartx_get()</td>
</tr>
</tbody>
</table>

### TABLE IV. TIMING FEATURES

<table>
<thead>
<tr>
<th>Description</th>
<th>Function name</th>
</tr>
</thead>
<tbody>
<tr>
<td>delays in microseconds</td>
<td>sleep_us()</td>
</tr>
<tr>
<td>delays in milliseconds</td>
<td>sleep_ms()</td>
</tr>
<tr>
<td>delays in seconds</td>
<td>sleep()</td>
</tr>
</tbody>
</table>

Table I shows the pin and GPIO functions, such as setup, input reading and output setting. Some devices may additionally support a state-toggle function (pin_toggle()). The rest of the API functions follow the same rules:

- The setup function has the same name as its module.
- The remaining functions of a module follow the naming syntax `module_function()`.
  For instance, the `adc_read()` function of the `machine { adc }` module (Table II).
- Devices with more than one peripheral of the same type follow the naming syntax `modulex_function()`, where `x` is the number that identifies each peripheral instance. For instance, `uart2_get()`, as shown in Table III.
- Some API modules refer to internal features of the device other than hardware peripherals, for instance software delays (Table IV).

Fig. 3 shows the folder structure designed for the overall API; this structure has to be followed for each of the supported microcontrollers and boards in order to maintain compatibility across all the hardware devices. Strictly following this folder structure allows the transpiler to correctly resolve module inclusion when isolated components of a module have to be included in a project. As previously mentioned, module inclusion in Aixt follows the syntax `import module` for complete modules, which is transpiled as `#include "./module.h"`. Likewise, the inclusion of sub-modules or module components follows the syntax `import module { sub1, sub2, ... }`, which is transpiled to `#include "./module/sub1.h"` and so on. This is very important in order to optimize the resultant binary file. On the other hand, when a complete module is included, the `./module.h` header file has to include all of the .h files in the corresponding folder of the API folder structure.

III. RESULTS

The overall project, including the Aixt language definition, the transpiler from Aixt to C and the API, is published by the authors as a free software project at the URL https://gitlab.com/fermarsan/aixt-project. The authors hope this project serves as the starting point of a larger free programming framework for microcontrollers, or as a seed for other similar projects. The complete programming framework was successfully tested on some of the 8-bit and 16-bit PIC microcontroller families from Microchip®. Those devices were selected because of their low amount of data and program memory. Several different working tests have been performed to check the correctness of most Aixt features. Listings 7 and 8 show a comparison between variable declarations in Aixt and the corresponding transpiled C code for the XC8 and XC16 compilers. In Aixt, a variable declaration always goes together with an assignment; the declaration-and-assignment operator := differentiates it from plain assignment with =. At the same time, it is necessary to use the predefined conversion functions such as i8(), u32() and f64(), among others, in order to specify the number of bits and the kind of integer or floating-point variable. One of the benefits of using the V conversion functions in variable definitions is that each variable has an explicit bit width, independent of the hardware device. Listing 7 also shows the use of the underscore symbol "_" to improve the readability of large numbers, as well as the special notations for hexadecimal, octal and binary literals. The only difference with C is that octal literals begin with the sequence "0o" (zero + o) instead of just 0 as in C.

Listing 7: Aixt variable declaration and assignment example.
```
var2 := i8(129)
var3 := i64(-6_835_292)
var4 := u8(0b0011_0101)
var5 := u16(0x0073452)
var7 := u64(0xAAFF_7625)
var8 := f32(1_342.56)
var9 := f64(-34.035_440)
```
Listing 8: Resultant C variable declaration and assignment example.
```
int8_t var2 = 129;
int64_t var3 = -6835292;
uint8_t var4 = 0b00110101;
uint16_t var5 = 0x073452;
uint64_t var7 = 0xAAFF7625;
float var8 = 1342.56;
long double var9 = -34.035440;
```

Modern programming languages like V have useful features such as type inference, which simplifies programming in most cases. Type inference frees programmers from stating variable types when they are not needed, thereby reducing development time. This feature is implemented in Aixt by falling back on default standard types for integer and floating-point variables: for the XC8 compiler the default integer type is int8_t and for the XC16 compiler it is int16_t, while for floating-point variables the default type is float. Listings 9 and 10 show the transpilation result of some variable declarations by inference, including Boolean, character (rune), integer and floating-point literals, for the XC8 compiler.

Listing 9: Aixt variable declaration and assignment by inference example.
```
var0 := true
var1 := false
var2 := 1345
var3 := 71.4
var4 := -457
var5 := -10.445
var6 := 'd'
```
Listing 10: Resultant C variable declaration and assignment by inference example.
```
bool var0 = true;
bool var1 = false;
int8_t var2 = 1345;
float var3 = 71.4;
int8_t var4 = -457;
float var5 = -10.445;
char var6 = 'd';
```

On the other hand, the Aixt syntax supports several of V's looping statements: the conditional for (while in C), the bare for or infinite loop (while(true) in C), the range-based for loop and the C-like for loop. Listing 11 shows an example of each loop statement currently supported by the Aixt syntax and Listing 12 shows the C equivalent of each one. The range-based for loop uses an integer range notation with the syntax i..f, where i is the initial value and f is the final value.

Listing 11: Aixt available loops.
```v
// condition for
for a < 10 { a += 1; }
// bare for
for { a += 1; }
// range for
for i in 0..10 { arr[i] = 0; }
// c for
for i := 0; i <= 10; i++ { arr[i] = 0; }
```

Listing 12: C equivalent loops.
```c
while (a < 10) { a += 1; }
while (true) { a += 1; }
for (int i = 0; i < 10; i++) { arr[i] = 0; }
for (int i = 0; i <= 10; i++) { arr[i] = 0; }
```

A. Microcontrollers Setting Up

In order to add a new microcontroller or board to the Aixt programming framework, a configuration file has to be written. The chosen format for this configuration file is YAML ("YAML Ain't Markup Language", originally "Yet Another Markup Language"), a very simple format for writing setup files for software projects. In this configuration file the designer can set up features such as the type equivalences between Aixt and the native C compiler, the microcontroller fuses or configuration bytes, the part or device number, and the default header files, among others. This configuration file is expected to be written once by the designer and not to be modified by regular users. Listing 13 shows an extract of the configuration file for a PIC24FJ device.

Listing 13: YAML microcontroller or board configuration file (extract).
```yaml
i8: int8_t
...
u16: uint16_t
...
default_int: int16_t
...
device: p24FJ128GA010
...
headers:
  - <xc.h>
  - <stdint.h>
...
configuration:
  - "POSCMOD = XT"
  - "OSCIOFNC = ON"
...
```

On the other hand, a batch file has to be included for each new device. This file works like a Makefile, going through the steps and invoking the different components of the framework in order to obtain the final binary file starting from the Aixt source code.
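Before looking at those build scripts, it helps to see how the configuration entries of Listing 13 could surface in the generated C code. The settings.h sketched below is only an illustrative assumption (the header actually produced by the framework may differ): the YAML headers list is mapped to #include lines and each configuration string to an XC16 #pragma config directive.

```c
/* Hypothetical settings.h for a PIC24FJ128GA010 target, derived from the
 * YAML extract in Listing 13. The mapping shown here is an assumption,
 * not the framework's actual output. */
#ifndef SETTINGS_H
#define SETTINGS_H

#include <xc.h>        /* from the "headers" list in the configuration file */
#include <stdint.h>

#pragma config POSCMOD = XT    /* primary oscillator mode   */
#pragma config OSCIOFNC = ON   /* OSC2 pin function         */

#endif /* SETTINGS_H */
```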
Such a batch file has to be provided in .ps1 (PowerShell) format for Windows and in .sh format for Linux.

IV. CONCLUSION

Using the proposed programming framework, microcontroller programmers can work in a modern high-level programming environment, with the benefits of a compiled language while at the same time taking advantage of the language's modern features. The Aixt language aims to be a high-level programming language for microcontrollers with a short learning curve, thanks to its simplicity compared with other modern languages. Aixt provides modern V-based features such as type inference while at the same time producing binary files with optimization levels and execution times similar to those of standard compiled languages such as C. The Aixt language and the proposed programming framework can make it easy to program microcontrollers, as long as they have a native C compiler. At the same time, Aixt does not need a fixed amount of memory in order to run: the final binary file depends only on the source code. Therefore it does not suffer from the memory overhead required to run a program written in an interpreted language such as MicroPython or JavaScript. The transcompilation process between Aixt and C is successful because both languages are similar, mainly because all variables in Aixt are mutable by default, as in C, together with other shared features such as curly braces. Transpiling another language such as Python to C, for instance, would be somewhat more difficult because of the differences between the two languages. The Aixt language and programming framework could allow people with little electronics know-how to program embedded systems easily, just like Arduino, mbed and MicroPython, among other frameworks. Likewise, Aixt could allow experienced embedded-system programmers to learn only one programming language and API in order to program a wide variety of microcontrollers, regardless of their memory sizes. All of the features of the proposed programming framework were completely tested; however, not all the modern features of the V programming language were implemented. This means the project can be improved considerably by implementing more features and by adapting it to other microcontrollers and boards. It is perfectly possible to use the Aixt language and this framework in the classroom in basic microcontroller and embedded-systems courses, since the project is already highly functional. In spite of the short learning curve of V, and therefore of Aixt, it is possible to explore other simple languages to improve the proposed programming scheme, or even to support other main languages while maintaining the same API; one of the candidates is the Peregrine language, which is based on the Python syntax. As future work, the development of other useful features of the V language is proposed, for example array definition, direct array indexing, the array-based for loop, array interpolation and match statements, among others. Likewise, it is important to keep adding support for other MCUs and boards, especially those with little program and data memory, which are the motivation for this project. For instance, Atmel ATmega and ATtiny devices will be included in the project soon, since they are also supported by the XC8 compiler. Finally, it will be possible to combine PC graphical applications developed in V with embedded applications developed in Aixt, taking advantage of learning only one programming basis to develop a complete embedded-based graphical application.
ACKNOWLEDGMENT This work was supported by Universidad Distrital Francisco José de Caldas and Corporación Unificada Nacional de Educación Superior CUN. The views expressed in this document are not necessarily endorsed by Universidad Distrital or CUN. The authors thank the ARMOS and IDECUN research groups for the simulations and tests. REFERENCES
6ebeb47faa5f5fd5d244e9b0779e1eb3a9e7686c
Project Planning Support by Model Checking

Björn Axenath, Oliver Sudmann
Software Engineering Group, University of Paderborn, Warburger Str. 100, D-33098 Paderborn, Germany
axenath,oliversu@uni-paderborn.de

Abstract: Today's trend in software and system engineering is to utilize more specialized models. This model-based development approach makes a single engineering task easier, as the engineer can focus on the particular aspect of the system when working with one model. However, collaboration becomes more difficult, because more models have to be kept consistent. Unfortunately, process support for model-driven development is still rather weak in today's development environments: static processes are supported, but this is insufficient for collaboration. We present a technique for project planning which utilizes relations between models and which uses a verification method to produce suggestions for the project plan, based on the current situation of the project.

1 Introduction

In the 90s, there was a research trend towards process-centered development environments. Not many of the research results have found their way into practice: today's favorite development environments provide only very little process support. Even worse, developers are often of the opinion that process support would hinder their work. We noticed that there is still a gap between document-centric process models and today's model-driven engineering approach. A document is an item in a workflow which defines a certain amount of information that can be processed in one solitary task. A model is a partial description of a system, in general an abstraction or an oversimplification. A process model describes the control flow, but it does not consider the dependencies among models explicitly. Development environments, especially those for mechatronic products, use workflow engines to implement document-centric processes. But engineering tasks differ significantly from business process tasks, which are successfully supported by workflow engines. Engineering tasks are characterized by making decisions based on assumptions. How difficult this is can be seen in Airbus' A400M [Flo09]. At its first release, it was 12 tons overweight, so that it had a shortened cruising range and a lowered loading capacity. Also, a flight maneuver, the steep approach, was impossible. Several components, like the lowerable undercarriage, had already been excluded, which had the drawback of requiring a reinforcing element in the floor. To cope with these uncertain assumptions, changes are a daily occurrence. A change consists of several steps. First of all, it has to be noticed that a change of a model has an impact on other models. This sounds trivial, but it is human nature to disregard these dependencies [Dör79]. Then, it has to be identified which impact the change has. This impact depends on the current state of the project. Let us regard an example: tests of mechatronic systems may take up to several months. Changes to an element which has already been tested would require a repetition of the test, which might be very costly or infeasible due to the delay. Finally, the change has to be integrated into the existing project plan, which takes the availability of developers and much more into account. All these steps are neither supported sufficiently nor integrated by today's development environments. To identify the complexity in development processes of complex products, we analyze the characteristics of development processes of mechatronic systems (Sect. 2).
We will see that the model-driven development approach complicates the collaboration among engineers. Then, we identify use cases that are more appropriate than today's process support. In Sect. 3, we describe our concepts which help to tackle the complexity of project planning. In Sect. 4, we shortly describe our prototype and the experience we made with it. After discussing related work in Sect. 5, we draw the conclusions of our work and give an outlook on future work.¹

¹This work was developed in the course of the Collaborative Research Center 614 – Self-optimizing Concepts and Structures in Mechanical Engineering – University of Paderborn, and was published on its behalf and funded by the Deutsche Forschungsgemeinschaft.

2 Development Processes for Mechatronic Products

Mechatronic development, used here as an example of a class of complex development processes, aims to create products which are a synergetic combination of the disciplines mechanical engineering, control engineering, and software engineering. In advanced mechatronic systems, like self-optimizing mechatronic products, the software connects several mechatronic components so that they are able to fulfill functions in a synergetic way. A mechatronic component is built of sensors providing information from the physical world, actuators, and a controller. Think of modern cars, in which the motor management, the steering mechanism and the brakes control the stability together. Notice that mechatronic components are connected in two ways: on the one hand they are connected by physical effects; on the other they can be connected by networks. As a matter of course, a complex system cannot be specified in one step. Several intermediate development goals have been introduced. Along the process, there are goals like the principle solution, modularization, interface specifications of components, design specification, implementation specification, and many more. The order of these steps represents the overall development method. Former development processes, which had been taken over from mechanical engineering, started with the definition of the assembly structure; automation and software were added thereafter. More recent development processes, like the VDI 2206 [VDI04], treat the disciplines equally by having an active structure first, which covers all domains. Orthogonal to that, industrial standards, like the MDA method [BBI+04], propose to process the requirements of the target platform in an incremental manner. On the one hand, a developer can now focus on the very specific topic which the model has been developed for. This reduces the complexity of a single task, which had been part of a composed task before. On the other hand, the more models are used, the more coordination among them is necessary. The complexity of the process resp. the collaboration rises. Nevertheless, the models are only loosely integrated and traceability among them is still on the research agenda [PBKS07]. The process of developing a mechatronic system will be illustrated by a brief example of a rail vehicle which can build a convoy with other vehicles, as shown in Fig. 1. To master the complexity of the overall system, the system is modularized into subsystems. In our example the system structure is expressed by an active structure diagram [GFDK08a]. As depicted in Fig. 1, the active structure consists of two components, which are called system elements here.
The Configuration Controller is responsible for the negotiations with other vehicles about whether a convoy is built or not. The Distance Controller is part of the drive system and controls the vehicle's position. The Distance Controller gets the current distance $d_{cur}$ between two vehicles in order to keep the desired distance $d^*$. Thereafter, an interface specification has to be written for all system elements. The behavior of the Configuration Controller is modeled by a Hybrid State Chart, as its behavior is discrete. The Distance Controller is modeled by a block diagram, as its behavior is continuous. Here, both components have to refine the parameter $d^*$, which is modeled in the system structure only on a conceptual level. For instance, the data type and the rate of change are not defined, but both have a significant effect on the system. Thus, the hybrid state chart and the block diagram are related to each other by the parameter.

[Figure 1 consists of three panels: the Active Structure Diagram, the Hybrid State Chart of the Configuration Controller, and the Block Diagram of the Distance Controller.]

Figure 1: Fragment of a mechatronic system's structure and behavior specification

Furthermore, it has to be considered that engineers of different disciplines are involved, so that developers do not understand all dependencies of their model on models of other domains. Either specifications like the system structure, which can be understood by all disciplines, have to be used, or cross-discipline experts have to mediate. In our example, it would not be appropriate to keep consistency directly between the Hybrid State Chart and the Block Diagram. Usually, mechatronic systems are modularized into several discipline-spanning components, which are developed by several teams working rather independently in parallel. Nevertheless, the models are not necessarily developed at the same time. A change, for example in the system specification, might result in different costs, depending on the number of dependent tasks which have already been finished. For mechatronic components some tasks, like quality assurance on a testbed, can become costly. A plain traceability analysis, which just shows the documents dependent on the change, is insufficient: the progress of the project has to be considered. The result of a development process is influenced by the success of finding a solution with an acceptable cost-benefit ratio. The requirements and the final specification have to fit one another. For that, iterative processes can be applied. In iterative processes, the previously created artifacts are revised resp. changed because of the experience from the previous iteration. The question then is: is it possible to do the change now, or should it be done in the next iteration? In summary, the solving of a development problem is split up into several small modeling tasks, which are highly dependent on each other. However, developers cannot see or do not want to see these dependencies, and for project managers it is hard to understand the dependencies due to a lack of knowledge. Thereby, the project manager is not interested in the dependencies themselves, but in the resulting work. We deduce the following two use cases with respect to process support:

1. **Impact Analysis:** A developer gets informed if his work on a model has dependencies to other models or puts a deadline at risk due to dependencies.
2. **Process Synthesis:** As it happens in some domains that developers do not follow the project plan, the project manager should get support to identify the tasks needed to make the project consistent with the process model again.

In these use cases, the development environment should identify all tasks which have to be done and order them according to the process model. Thereby, it is sufficient to identify a possible impact of an activity or to make coarse suggestions for project plans. The final project planning is left to the project manager, as too many side constraints, like resources, social aspects, and so on, exist. Consequently, we are not interested in creating a detailed and complex model of the process, which could consider, for example, the uncertainties in the execution of a task.

### 3 Concept

To explain our concepts, we describe some preliminaries, mainly the formalization of the process model including the traceability links. Then, we explain the concept, which applies model checking to analyze the state space defined by the process model. Finally, we explain how this approach supports our use cases.

3.1 Preliminaries

We aim to perform a dependency analysis with respect to the process model. Before the analysis can be performed, the process model and its dependency on the data managed by the development environment have to be specified. Next, we describe the necessary steps in short. In doing so, we use a running example, which is a fragment of a development process of mechatronic systems [GFDK08b], and which was applied to create our introductory example. First of all, the document model has to be defined. In general, the document types can be extracted from a reference model and have to be refined thereafter. A reference model for our example is shown in Fig. 2. In the first step, the system is decomposed. The decomposition creates a system structure. Then, for every component a sub-process is created, of which only the first step is depicted. In this first step, the observable behavior of the components should be specified. Furthermore, the control structures which define relationships among the documents have to be identified, which we assume to be part of every development process model anyhow. Secondly, the document types have to be refined. For every document, a model is defined which specifies the concepts that should be used to write the document. Notice that the purpose of a document is to define the information which is necessary to perform a task. How this is done depends on the product. In our example, the system is a mechatronic, self-optimizing system which is built up of mechatronic components. These components can be arranged in mechatronic function groups which are connected by information flow, material flow, and energy flow [Kal98]. The interface specification should specify the parameters which are described in the system structure on a conceptual level, e.g. physical dimensions and precision are not defined there. Fig. 3 shows an excerpt of the project information model of our running example. The dashed box contains the documents of the process model. The Signal Checklist and the Signal Trace are specified in the next steps. Altogether, they form the project information model. Thirdly, to enable traceability, the relations between elements of the project information model have to be defined. These relations also depend on the product. Nevertheless, domain reference models which define traceability relations are available for several domains (see e.g. [RJ01]).
For our product we would like to enable traceability between the information flow defined in the system structure and the interface specifications of the mechatronic systems. This is done by the Signal Trace². For the process view, these links are also part of the process. So they have to be part of the document model, and we define a new document type, the Signal Checklist. Often such documents are used within quality assurance, because they do not specify the product itself but describe its evolution. In this model we take into account that several disciplines are working together. Traceability links should be defined only between concepts which can be understood by all participating domains. In general, links between discipline-specific models are avoided and discipline-spanning models are used [GGS+07]. Fourthly, the project information model has to be implemented by some tools. In general, the available tools do not support the concepts modeled by the project information model, and the creation of tools which directly support the project information model is still costly. Consequently, more general tools are used. Examples are spreadsheet applications like Microsoft's Excel, generic UML tools for software components, and dataflow-oriented tools like Mathworks' Matlab with the Simulink toolbox for signal processing. To integrate these tools, we use the ToolNet [ADS02] approach, which integrates the tools by adapters that map the elements of the information model onto elements of the tool data model. That is, the instance of the project information model is then materialized by the tool data. Traceability links which are not handled by any tool are stored in a link repository. To make suggestions for the project planning, we need information from the previous project plan. A project plan schedules tasks and defines intermediate goals by milestones, which are measurable quality criteria on the system under development. A sufficient abstraction for these measures are processing states, like planned, finished, approved, or integrated.

²Actually, we define them by Triple Graph Grammars (TGG) to enable automatic model transformation and consistency checking [Sch95]. By knowing the artifacts which have been checked in together and assuming that they are consistent, we can select an automatic repair action, which can be derived from a TGG. Although this approach has some limitations, it is able to automate a significant number of tasks.

3.2 Project Analysis

To implement the use cases, we have to identify certain paths in the process state space. In both use cases, we have to search for a path which leads from the current situation to a given milestone. The impact analysis checks whether the next milestone is still reachable after a certain execution of a task. So we have to analyze the complex state space which is defined by the processing states and their transitions, given by classes of tasks. We apply model checking [CJGP99] for this analysis, because model checking provides adequate algorithms for this kind of analysis. Furthermore, we will see that temporal logics are an adequate formalism to define boundary conditions on the process. The state space is built in the following way. We define for every document type its processing states. Transitions are given by tasks resp. their classes.
In Fig. 4 we extend our introductory example by further quality assurance tasks on the documents: the system structure has to be checked; the interface specification has to be tested and checked against some guidelines, and thereafter it is integrated with other specifications. Then, for every document instance, an automaton is created. Finally, all automata are put in parallel. In doing so, dependencies between transitions have to be taken into account. Transitions of subsequent tasks are only allowed to fire when the input documents have reached a certain state. When tasks process more than one document, the corresponding transitions have to fire synchronously. Additionally, we have transitions which result from change processes (drawn dashed). When a document is changed, then all depending documents are changed synchronously. Model checking verifies a temporal formula on a labeled transition system $M$. In general, a system is checked as to whether a property is always fulfilled. If the property is not fulfilled, the model checker can produce a counterexample which describes a run of the system violating the formula. In contrast to this kind of analysis, we are interested in the reachability of a state, so that we need the path witness for a formula requiring the existence of a path. We check whether the milestone, which is represented by propositions on a state, cannot be reached in the future ($F$ for future) starting from the current situation $s$:

$$ M, s \models \neg F\, \text{milestone} $$

To create suggestions for the project plan, the deadline and the durations of tasks are taken into account, which can be extracted from the project plan. When tasks have already been iterated, an estimate of the next iteration's duration can be determined by extrapolation from the previous durations. This requires timed model checking, which considers clocks. We use a global clock to measure the project time. For the processing times consumed by the tasks, we create a local clock in each automaton and define guards to keep the document in a state for the estimated time. A side effect of using a deadline is that the state space is reduced significantly. The project management usually defines further boundary conditions on the project plan. For instance, certain documents should not be modified any more. This can only be modeled if we use a branching time logic, like CTL, as we search for one particular path: there exists ($E$ for exists) a path on which the constraints are valid until ($U$ for until) we reach the milestone. In formal terms, the process synthesis has to verify:

\[ M, s \models E\, (\text{constraints} \; U \; (\text{milestone} \land t < \text{deadline})) \]

The impact analysis is formulated in the following way. Let \( s_1 \) be the state of a document after the intended task. There exists a path on which in the next ($X$ for next) step \( s_1 \) holds and from which a path to the milestone exists:

\[ M, s \models EX\, (s_1 \land EF\, (\text{milestone} \land t < \text{deadline})) \]

The final concept of our approach is to control the process continuously by monitoring the version management system of the documents. After every check-in, an impact analysis is performed to determine whether the next milestone can still be reached. Let us consider an exemplary situation in a project, which is illustrated in Fig. 5 as a Gantt chart. At the deadline, work on the Active Structure \( s \), the Hybrid State Chart \( b_1 \), and the Block Diagram \( b_2 \) has to be finished. The process model of the example is defined as shown in Fig. 2.
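To make this concrete, the impact analysis for this exemplary situation can be instantiated as follows; the instantiation is purely illustrative and assumes propositions of the form $doc.\mathit{state}$ for the processing states, writing $b_1.\mathit{changed}$ for the document state reached by the intended change:

\[ M, s \models EX\,\big(\, b_1.\mathit{changed} \;\land\; EF\,(\, s.\mathit{finished} \land b_1.\mathit{finished} \land b_2.\mathit{finished} \land t < \mathit{deadline}\,)\,\big) \]

A witness path for this formula corresponds to a sequence of tasks that still reaches the milestone before the deadline; if no witness exists, the intended change puts the deadline at risk.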
When, at \( t_0 \), a developer working on \( b_1 \) creates an inconsistency with \( s \), he gets the information that Task 1 is created. Additional work on \( b_2 \) is not created, because work on \( b_2 \) has not been started yet and, even with the delay caused by Task 1, there is sufficient time to finish before the deadline. If this inconsistency occurred at \( t_1 \), the developer would be informed that the impact of his work puts the deadline at risk, because according to the project plan first Task 2 and then Task 3 would have to be done.

Figure 5: Gantt Chart

4 Implementation

We have implemented our concepts in a prototype called *ProcessCoach*, which is integrated into the Eclipse IDE. The process model is defined in our own formalism, for which we built a graphical editor, as shown in Fig. 6(a). The ProcessCoach uses the model checker UPPAAL (see www.uppaal.com) for the state space analysis. UPPAAL is able to search for the shortest path, so that we are able to use it without any extension. The counterexample from UPPAAL is imported into the ProcessCoach and can be exported to Microsoft Project. A critical issue for the application of model checking is that temporal logics can hardly be specified by users of a development environment. In the synthesis view depicted in Fig. 6(b), the project manager selects the documents from a table, defines their desired processing states at the milestone, defines a deadline, and sets further boundary conditions on the process. Here, he decides that the Regelungskonzept should not be set back in its processing state. The ProcessCoach creates the temporal formula internally, starts UPPAAL and shows the result at the bottom. As a case study, we modeled a real-life development process of a mechatronic system with more than 50 documents. None of the tests we made ran into a state space explosion of the model checker. The results were calculated within a few seconds on standard computers and notebooks.

(a) Document Model (b) Process Synthesis

Figure 6: Screenshots of ProcessCoach

5 Related Work

Product data management systems, like IBM's Enovia, provide workflow support and recently also integration with project planning tools. But they are only able to define and control activities in the planning tool which are then executed by their workflow engine. The process model is not taken into account, so that no planning support with respect to the necessary activities is provided. Due to the complexity of the planning processes, mainly algorithms which use heuristics are applied, so that not all solutions are considered. Planning by model checking, which is able to analyze the complete state space, has been under research during the last ten years, and a few of these approaches also considered real-time model checkers. Most of these approaches aim to synthesize controllers, which are more complex results than sequences of actions for a project plan. Furthermore, we aim to identify the optimal sequence of activities up to the particular milestone and repeat this procedure, a so-called interleaving planning. An early application of a real-time model checker for planning was presented by Goldman et al. [GMP00]. Their approach to domain modeling is similar to ours, but as they aim to use the model checker for controller synthesis, their overall method, including the goal definition, is very different. Dierks [Die05] applies the real-time model checker UPPAAL CORA to the general planning language PDDL [MC98].
UPPAAL CORA uses priced timed automata, so that it is possible to use further variables for optimization [BLR05]. We only use time to restrict the state space, so that we are able to omit the pricing functions. In addition, we suppose that we are able to check bigger models, as we can apply optimizations based on our domain model and do not use the general planning language PDDL. Furthermore, it seems to be an interesting idea to map graph grammars to action planning, as has been suggested by Edelkamp et al. [EJL05]. This would also enable us to analyze dynamic document models, but due to the complexity, we suppose that we would not be able to check realistic examples.

6 Conclusion and Future Work

We have systematically analyzed which advanced use cases of development environments are necessary and feasible to support the development processes of one class of complex, technical products. We improved the traceability analysis, because we take the process model into account, so that the impact of changes can be estimated in terms of tasks instead of just listing dependent artifacts. Through our use cases, it is easier to motivate developers to generate traceability data, because they now benefit from it. The explicit specification of the project information model, starting with the document model, bridges the gap between document-oriented processes and model-based development. The benefit of this method is hard to determine. On the one hand, the presented approach requires some effort for the process modeling and for the tracking of changes. For some product classes, especially for complex or safety-critical systems, these tasks are already done. Unfortunately, the particular tools are not integrated, so that additional effort for the integration becomes necessary. On the other hand, the major advantage of the impact analysis and the process synthesis is an acceleration of the process. The value of this acceleration is hard to estimate, as it depends on various factors, up to the assumed release dates of competitors. So this method might be suited for companies which have mature processes and use a highly integrated development environment. We applied model checking to implement the use cases. By using this technique, we achieved an improved quality of the process suggestions, because we are able to search the complete state space and do not use any heuristics. Nevertheless, the user of the development environment is not bothered with the formalism, although we have introduced an interface to the model checker based on temporal logics. Our case study uses real-life processes, but the tool has not been tested by the developers during a project. To provide a comprehensive proof of concept, we have to analyze not only the technical details, like scalability, but also the usability. We expect that from a usability report we can also derive further use cases for process support in model-driven engineering. Up to now, our document model is static. In the next step, we will apply infinite-state model checking methods to consider that further documents are created. Moreover, our future work is motivated by the idea of a development environment with a self-optimizing process. That is, we want the development environment to be able to adapt the process' goals depending on the development situation and change the process model accordingly. Before being able to correctly change the process model, we need to be able to rate the current situation of the development.
Therefore, we have to extend our approach by considering measurements about the models’ completeness related to a processing state. For instance, we want to use test results or the completeness of traceability links for that. References
e047bfca5e5fb58340d39b22943e2cc387f7982a
## Overview

In this lab students will build a complete WSN consisting of a small number of nodes. “Complete” means a functional, cooperating network of nodes, with functional sensors. The node hardware will be supplied, and “building” here means developing and then programming the necessary algorithms into the nodes. The purpose of the WSN is to periodically measure the light level at the node location and forward that to the base station. When a switch that is attached to a node is closed, the node must notify the base station. When a node powers up, it must automatically join the WSN. The WSN will follow a node-to-node communication model rather than node-to-base communication. When a node powers down, the WSN must automatically reconfigure itself. Building a robust, energy-efficient WSN is a large undertaking. Thus, a collection of routines that provide OS-like services will be supplied. Also, a number of simplifying assumptions (see below) will be introduced.

## Organizational

Groups consisting of 3 students will work together. Only one group at a time can have access to the equipment. Groups should schedule access to the Sensors Lab in half-day (morning/afternoon) blocks. At the start of each lab I will spend some time showing students the tools. A group can schedule more than one block. **Groups need to demonstrate the working WSN and then write a short report. One lab report per group is due by the end of the semester, Friday, May 5, e-mailed as a PDF document to me.**

## Development Tools and Equipment

*Node Hardware.* The same hardware that was used for Lab 1 will be used for this lab, with the addition of two sensors: a light sensor and a switch.

*Programming.* Programming will be done in the C programming language. Skeleton code will be provided, and the development environment for the code-compile-load development cycle will be set up. The HP InfoTech CodeVisionAVR (www.hpinfotech.ro) integrated development environment will be used.

*Supplied Library.* The library will provide services needed to implement the algorithm(s): send a packet to another node, ping another node to measure RSSI, respond to requests from other nodes, select the communication channel, interrogate its sensors, and so on. Details on the library are supplied below.

## Simplifications

**Coordinate System and Localization.** We will assume nodes know their location in a simple Cartesian coordinate system. These coordinates will be programmed into the nodes. Node locations will be such that very simple routing algorithms may be used (i.e., packets will not get stuck at nodes).

**Sleep modes.** Nodes will always be awake (no sleep modes).

## WSN Design

The node transceivers provide several mechanisms for constructing the communication links: communication channel, hopping sequence, *destination address*, and *masks* (XCite vernacular). This is detailed in the XCite 900 MHz documentation provided for the previous lab and on the class website ([www.engineering.uiowa.edu/~ece195/2005/labNotes.html](http://www.engineering.uiowa.edu/~ece195/2005/labNotes.html)). We will use the capability to select different channels and the concept of a destination address, and not the masking and hopping sequence capabilities. The destination address will be used as the node ID, and node IDs will be unique.

**Approach 1.** RF Channel 0 will be used for control and configuration information. RF Channel 2 will be used for communication.
Upon power up, a node waits for a random time, switches to the network control channel, and joins the network—it discovers other nodes and their locations. Based on this information it determines its neighbors. The node then switches to the communication channel and listens for, and responds to, packets sent to it. Nodes periodically switch to the network control channel and repeat the process. This enables the nodes to discover new nodes and determine which nodes have left the WSN. Nodes measure the light level at set intervals and send the packet to the base station. When the switch attached to a node is closed, the node notifies the base.

**Approach 2.** All nodes use the same channel, Channel 0. Upon power up, a node waits for a random time and then cycles through destination addresses 0–19 and discovers other nodes and their x- and y-coordinates. Based on this information it determines the nodes it will communicate with. It then switches to its own destination address and listens for, and responds to, packets sent to it. Nodes periodically repeat the process to discover new nodes and determine which nodes have left the WSN. Nodes measure the light level at set intervals and send the packet to the base station. When the switch attached to a node is closed, the node notifies the base.

## Preparation

To prepare for the lab, read through the PDF files for this lab provided on the website. Familiarize yourself with the routines used in the library. Pick one of the approaches outlined above and write pseudo C-code to implement it. Issues to consider include the following. A node may discover that it has many neighbors (potentially all the nodes) in the network. It has to select a subset of these that it will communicate with. How will it make its selection? What routing strategy will the node implement? That is, consider that it receives a request to “please forward this packet to the node at \((x,y)\)”, and that it has a number of neighbors it can forward the packet to. Which neighbor node will it use? How will it know if the neighbor received the packet? What should be done if the packet is not received? (A sketch of one possible routing strategy is given after the Alternative Lab section below.)

## Report Writing

To complete the lab, document your WSN. A group should write one group report and e-mail that to me as a PDF file. How you present the information is your decision, but here are some things you may want to address. Include the C source code for each node. What was the most challenging part of building the WSN? What are the strengths and weaknesses of your WSN? What is required to add sleep modes to your WSN? Discuss the energy efficiency of your WSN, and comment on how it may be improved.

## An Alternative Lab

An alternative lab is to do an RSSI investigation and antenna evaluation for a cell-phone based network that is under development at IIHR. Briefly, this will consist of writing code to communicate with a cell-modem to query RSSI. The students will then accompany an IIHR employee, visit a number of locations in and around Iowa City, measure RSSI for a few antenna configurations, and document their findings. The pros of this lab are that the code-writing part can be quite simple and that it involves field work; the con is that the field work will be time-consuming. I am open to suggestions from students who want to propose their own lab.
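The following is a minimal sketch of one possible answer to the routing questions raised in the Preparation section: greedily forward each packet to the known neighbor that is geometrically closest to the packet’s final destination. Nothing in this sketch comes from the supplied library; the neighbor table and the helper names (`Neighbor`, `NUM_NEIGHBORS`, `dist2`) are illustrative assumptions, and `findRoute` merely matches the call used in the skeleton code later in this handout.

```c
/*
 * Sketch of a greedy geographic routing strategy. The neighbor table
 * is assumed to be filled in during network discovery; nothing here is
 * part of the supplied library.
 */
typedef struct {
    int addr;    /* destination address / node id */
    int x, y;    /* neighbor coordinates          */
} Neighbor;

#define NUM_NEIGHBORS 4
Neighbor neighbors[NUM_NEIGHBORS];

/* Squared distance is enough for comparisons and avoids sqrt(). */
static long dist2(int x1, int y1, int x2, int y2)
{
    long dx = x1 - x2;
    long dy = y1 - y2;
    return dx * dx + dy * dy;
}

/* Return the address of the neighbor closest to the final
 * destination (xf, yf), or -1 if no neighbor is known. */
int findRoute(int xf, int yf)
{
    int i;
    int best = -1;
    long bestd = 0x7FFFFFFF;

    for (i = 0; i < NUM_NEIGHBORS; i++) {
        long d = dist2(neighbors[i].x, neighbors[i].y, xf, yf);
        if (d < bestd) {
            bestd = d;
            best = neighbors[i].addr;
        }
    }
    return best;
}
```

A simple reliability scheme on top of this is to wait briefly for an acknowledge packet from the chosen neighbor and, if none arrives, retry with the next-closest neighbor a fixed number of times before giving up.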
## Library Routines

### Core Routines

<table> <thead> <tr> <th>Routine</th> <th>Synopsis</th> </tr> </thead> <tbody> <tr> <td>void init(void)</td> <td>Initializes MCU registers and global variables, and peripherals such as the RTC and LCD. Call once at the start of the program.</td> </tr> <tr> <td>int getSerialNumbers(int *data)</td> <td>Gets the transceiver’s destination address, vendor ID, and serial numbers. Always returns 1.</td> </tr> <tr> <td>int setAddress(int address)</td> <td>After this call, the transceiver will communicate with other transceivers with the same <em>address</em> (id), and ignore others, even though they may be using the same channel. Always returns 1.</td> </tr> <tr> <td>int setChannel(int channel)</td> <td>Sets the communication channel. Valid values for <em>channel</em> are 0, 2…10. Always returns 1.</td> </tr> <tr> <td>int getConfig(char *packet, int ack)</td> <td>Retrieves the node configuration (node address/id, coordinates, time, etc.) from <em>packet</em>, and reprograms the node. The variable <em>ack</em> is ignored in the current version, and the routine always returns 1.</td> </tr> <tr> <td>int putConfig(char *packet, int ntry)</td> <td>Loads the node configuration (node address/id, coordinates, time, etc.) into <em>packet</em> and outputs/transmits the packet. The proper destination address and channel must be selected prior to this routine. The variable <em>ntry</em> is ignored in this version and the routine always returns 1.</td> </tr> <tr> <td>int putPacket(char *packet, int ntry)</td> <td>Transmits <em>packet</em>. The proper destination address and channel must be selected prior to this routine. The user constructs <em>packet</em> prior to this routine. The variable <em>ntry</em> is ignored in this version and the routine always returns 1.</td> </tr> <tr> <td>int getPacket(char *packet, int ack)</td> <td>If the transceiver has received a valid packet, fills <em>packet</em> and returns 1; otherwise, returns 0. This routine will discard characters that are not part of a valid packet. The variable <em>ack</em> is ignored in the current version.</td> </tr> <tr> <td>int getLight(void)</td> <td>Measures the voltage across the photosensor. This routine is partially implemented. One of the tasks for the lab is to complete the implementation so that it returns the light level in lx.</td> </tr> </tbody> </table>

### Support Routines

<table> <thead> <tr> <th>Function</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>int getSwitch(void)</td> <td>Returns 0 if the switch is pressed (down) or 1 if it is open.</td> </tr> <tr> <td>unsigned char read_adc(unsigned char n)</td> <td>Reads ADC channel n. The ADC is an 8-bit, 115 kHz ADC with a 5V reference. The return value is a number in the range 0 (0V) to 255 (5V). The photosensor is connected to ADC channel 0.</td> </tr> <tr> <td>delay_ms(n)</td> <td>Pauses execution for n milliseconds. Note that the transceiver will still receive (and buffer) characters.</td> </tr> <tr> <td>void lcd_clear(void)</td> <td>Clears the LCD screen.</td> </tr> <tr> <td>void lcd_puts(char *string)</td> <td>Displays string on the LCD.</td> </tr> <tr> <td>char getchar(void)</td> <td>Gets a character from the transceiver. If there is no character, waits for one.</td> </tr> <tr> <td>void putchar(char c)</td> <td>Writes a character c to the transceiver, which transmits it to the current destination address on the current channel.</td> </tr> <tr> <td>printf()</td> <td>Implements the C stdio.h printf routine. The output goes to the transceiver, which transmits it to the current destination address on the current channel.</td> </tr> <tr> <td>snprintf()</td> <td>Implements the C stdio.h snprintf routine for printing to a string.</td> </tr> <tr> <td>sscanf()</td> <td>Implements the C stdio.h sscanf routine for reading from a string.</td> </tr> <tr> <td>int rand(void)</td> <td>Generates a pseudo-random number between 0 and 32767.</td> </tr> <tr> <td>void srand(int seed)</td> <td>Sets the starting value seed used by the pseudo-random number generator in the rand function.</td> </tr> <tr> <td>void setLed(int state)</td> <td>Turns an indicator LED on (state = 1) or off (state = 0).</td> </tr> <tr> <td>void switchToModemMode(void)</td> <td>Switches the transceiver to modem mode.</td> </tr> <tr> <td>void switchToATMode(void)</td> <td>Switches the transceiver to AT/command mode.</td> </tr> <tr> <td>int convertBase(char *buf, int base)</td> <td>Converts the character string in buf, assumed to be in base <em>base</em>, to its internal binary representation. For example, d = convertBase("2A",16) sets d to 42 decimal.</td> </tr> </tbody> </table>

### Other Routines

The main board has a Philips PCF8563 Real-Time Clock (RTC). The clock is backed by a battery so that it retains its setting if the main board is powered down. The following routines access the RTC.

- `unsigned char rtc_get_time(unsigned char *hour, unsigned char *min, unsigned char *sec)` — Returns the current time measured by the RTC. The *hour, *min and *sec pointers point to the variables that receive the values of hours, minutes and seconds. Returns 1 if the read values are correct. If the function returns 0, the chip supply voltage has dropped below an acceptable value and the time values are incorrect.
- `void rtc_set_time(unsigned char hour, unsigned char min, unsigned char sec)` — Sets the current time of the RTC.
- `void rtc_get_date(unsigned char *date, unsigned char *month, unsigned *year)` — Returns the current date measured by the RTC. The *date, *month and *year pointers point to the variables that receive the values of day, month and year.
- `void rtc_set_date(unsigned char date, unsigned char month, unsigned year)` — Sets the current date of the RTC.

### Globals and #defines

The supplied main program contains a number of global variables and defines that are relevant.

- `rx_counter0` — The number of characters in the transceiver’s receive buffer. It is incremented every time the transceiver receives a character, and is decremented by getchar, scanf, etc. One can clear the buffer by setting `rx_counter0 = 0`.
- `int myaddr, myx, myy;` — The node’s address/id and x- and y-coordinates.

### Packet Format

The WSN nodes communicate with each other using packets of ASCII characters. Packets have a two-byte preamble, the two ASCII characters “AC”. A carriage return ‘\r’, newline ‘\n’, or NUL ‘\0’ character is the end-of-packet marker. A packet can be up to 32 bytes long; the 32 includes the preamble. The character following the preamble determines the contents of the rest of the packet.

<table> <thead> <tr> <th>Control Character</th> <th>Function</th> </tr> </thead> <tbody> <tr> <td>Q</td> <td>The Query request. If we receive this, call putConfig to transmit our time-stamped coordinates and sensors’ values.</td> </tr> <tr> <td>C</td> <td>The Configuration request. If we receive this, call getConfig to reprogram the node with the configuration carried in the packet. This is used mostly for programming nodes during deployment.</td> </tr> <tr> <td>R</td> <td>The Route request. If we receive this, another node is requesting we send the packet payload to another node; respond by routing the packet to that node. We can also send this to request another node to send/forward data for us to a remote node.</td> </tr> <tr> <td>A</td> <td>The Acknowledge packet. We send this in response to, for example, a query request.</td> </tr> </tbody> </table>

Following the preamble and control character are the payload fields, as ASCII formatted characters.
<table> <thead> <tr> <th>Field</th> <th># Characters</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>XS</td> <td>2</td> <td>x-coordinate of the source/origin of the packet, formatted as “00”…“99”</td> </tr> <tr> <td>YS</td> <td>2</td> <td>y-coordinate of the source/origin of the packet, formatted as “00”…“99”</td> </tr> <tr> <td>ID</td> <td>2</td> <td>Node ID, typically associated with XS and YS, formatted as “00”…“99”</td> </tr> <tr> <td>XF</td> <td>2</td> <td>x-coordinate of the final destination of the packet, formatted as “00”…“99”</td> </tr> <tr> <td>YF</td> <td>2</td> <td>y-coordinate of the final destination of the packet, formatted as “00”…“99”</td> </tr> <tr> <td>HC</td> <td>2</td> <td>Hop count, formatted as “00”…“99”</td> </tr> <tr> <td>LL</td> <td>2</td> <td>Light level in % of maximum lx, formatted as “00”…“99”</td> </tr> <tr> <td>SW</td> <td>2</td> <td>Switch status, formatted as “00” or “01”</td> </tr> <tr> <td>DATE</td> <td>6</td> <td>Date stamp when the packet was generated (at its origin), formatted as “yymmdd”</td> </tr> <tr> <td>TIME</td> <td>6</td> <td>Time stamp when the packet was generated (at its origin), formatted as “hhmmss”</td> </tr> </tbody> </table>

The 32\textsuperscript{nd} character is ‘\0’. Fields that are unknown, don’t care, or undefined for a particular command are indicated with any negative value, for example “-1”. Where it makes sense, some packets may be shorter than 32 bytes. Short packets are terminated with ‘\r’ or ‘\n’.

**Example 1.** We receive the packet

```
ACQ\n
```

where ‘\n’ is the ASCII newline character (ASCII 10 in decimal). This is a query request from an unknown node. Assume we are located at \((x,y) = (2,3)\), our node address/id = 12, the light level is 46%, the switch is open, but we don’t know what the date or time is. Respond by generating an acknowledge packet:

```
ACA 4 312-1-1000046-1-1-1-1-1-1\0
```

This will also work:

```
ACA040312-1-1000046-1-1-1-1-1-1\0
```

**Example 2.** We receive the packet

```
ACQ1222\n
```

This is a query request from a node located at \((x,y) = (12,22)\). Assume we are located at \((x,y) = (4,5)\), our node address/id = 8, the light level is 13, the switch is open, the date (yymmdd) is 050422 and the time (hhmmss) is 145623. Respond by generating an acknowledge packet:

```
ACA 4 312-1-1000046-1-1-1-1-1-1\0
```

**Example 3.** We sense a closure of the switch at date (yymmdd) 050416 and time (hhmmss) 160201, and want to send this information to the base node, located at \((x,y) = (18,11)\). As before, we are located at \((x,y) = (4,5)\) and our node address/id = 8. The light level is 25%. Using our routing algorithm we select an appropriate neighbor node and request it to route the packet for us:

```
ACA 4 312-1-1000046-1-1-1-1-1-1\0
```

Before we transmit the packet, we must select the destination address of the node that will route it for us.

## Tasks

**Add Code To Display Node ID and Time on the LCD.** This will greatly help with debugging.
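A minimal sketch of this first task, using only routines documented above (`rtc_get_time`, `snprintf`, `lcd_clear`, `lcd_puts`) plus the global `myaddr`; the 16-character display width and the layout are assumptions.

```c
/* Sketch: show the node id and current RTC time on the LCD.
 * Assumes a 16-character LCD line; myaddr is the global node id. */
void displayStatus(void)
{
    unsigned char hour, min, sec;
    char line[17];

    if (rtc_get_time(&hour, &min, &sec)) {
        snprintf(line, sizeof(line), "ID%02d %02d:%02d:%02d",
                 myaddr, hour, min, sec);
    } else {
        /* RTC supply voltage dropped; the time is not trustworthy. */
        snprintf(line, sizeof(line), "ID%02d --:--:--", myaddr);
    }
    lcd_clear();
    lcd_puts(line);
}
```

Calling `displayStatus()` at the top of the main loop, or after each packet event, gives a continuously updated debug display.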
**Design and Implement a Strategy to Join the Network.** This involves cycling through possible destination addresses and discovering neighbors. Some neighbors will be closer than others; some nodes will give better RSSI. Design a strategy to select one or two of the possible candidates, then implement it.

**Design and Implement a Strategy to Periodically Update the Network.** New nodes may join the network, while others may leave it. Design and implement a method for periodically updating the network.

**Design and Implement a Routing Algorithm.** A node may desire, or be asked, to send information to a node that is not a neighbor. It knows its own coordinates, the coordinates of its neighbors, and those of the destination. Which neighbor will it ask to relay the information? Design and implement a strategy to answer this question.

**Complete the Implementation of getLight.** The photosensor is connected in series with a 50 kΩ resistor between ground and 5V. The supplied routine getLight measures the voltage across the photosensor and not the light level. Using the supplied data sheet for the photosensor, this voltage must be converted to reflect the light level in lx, expressed as a percentage compared to what one would measure on a bright sunny day. (A sketch of one possible conversion is given after the skeleton code below.)

```c
void main(void)
{
    ...

    // Initialize registers, ports, and peripherals. Then print a
    // welcome message on the LCD screen.
    init();
    lcd_clear();
    lcd_putsf("Welcome...");

    // Main loop.
    while (1) {
        if (isUpDateTime()) {
            // Time to rediscover our neighbors.
            joinNetwork();
        }

        if (getSwitch() == 0) {
            // Make a packet to send to the base station.
            setLed(ON);
            base_x = 18;
            base_y = 19;
            ...
            hc = 0;
            lux = getLight();
            sprintf(packet, "ACR%2d%2d...", base_x, base_y, ...);
            // Use the routing algorithm to select a node we
            // will ask to forward our packet.
            address = findRoute(...);
            setAddress(address);
            putPacket(packet, 0);
            delay_ms(1000);
            // Blink the LED a few times, and then turn it off.
            ...
            setLed(OFF);
        }

        if (getPacket(packet, 0)) {
            if (packet[0] == 'Q') {
                // Echo our configuration.
                putConfig(packet, 0);
            }
            if (packet[0] == 'R') {
                // Route request: construct the packet, select the
                // destination, and send the packet.
                ...
                putPacket(packet, 0);
            }
        }
    }
}
```
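To close with the getLight task, the sketch below shows one way the conversion could look. The 50 kΩ series resistor and the `read_adc` scaling come from this handout; the photoresistor model and its constants (`R10LX`, `GAMMA`, `LUX_SUNNY`) are placeholders that must be replaced with values from the actual photosensor data sheet, and the routine returns the percentage described in the task rather than a raw lux value.

```c
#include <math.h>

/* Illustrative constants -- replace with values from the photosensor
 * data sheet. R10LX is the assumed sensor resistance at 10 lx, GAMMA
 * the slope of the log(R)-log(lx) characteristic, LUX_SUNNY the
 * "bright sunny day" reference level. */
#define R_FIXED   50000.0   /* series resistor, ohms (from the handout) */
#define R10LX     20000.0   /* assumed sensor resistance at 10 lx       */
#define GAMMA     0.7       /* assumed characteristic slope             */
#define LUX_SUNNY 10000.0   /* assumed bright-day reference, lx         */

int getLight(void)
{
    /* ADC channel 0 reads the voltage across the photosensor:
     * 0..255 maps to 0..5 V. */
    unsigned char raw = read_adc(0);
    double v = (raw * 5.0) / 255.0;
    double r_sensor, lux;
    int percent;

    /* Clamp at the rails: ~5 V means a very dark (high-resistance)
     * sensor, ~0 V means a very bright (low-resistance) sensor. */
    if (v >= 4.99)
        return 0;
    if (v <= 0.01)
        return 100;

    /* Voltage divider: the sensor sits between the ADC node and
     * ground, with 50 k to 5 V, so Rsensor = v * R_FIXED / (5 - v). */
    r_sensor = v * R_FIXED / (5.0 - v);

    /* Photoresistor model: R = R10LX * (lx / 10)^(-GAMMA). */
    lux = 10.0 * pow(R10LX / r_sensor, 1.0 / GAMMA);

    /* Express as a percentage of the bright-day reference, capped. */
    percent = (int)(100.0 * lux / LUX_SUNNY);
    return percent > 100 ? 100 : percent;
}
```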
{"Source-Url": "http://user.engineering.uiowa.edu/~ece195/2005/labs/Lab2Instructions.pdf", "len_cl100k_base": 4943, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 21730, "total-output-tokens": 5104, "length": "2e12", "weborganizer": {"__label__adult": 0.0011615753173828125, "__label__art_design": 0.00112152099609375, "__label__crime_law": 0.0007305145263671875, "__label__education_jobs": 0.02447509765625, "__label__entertainment": 0.0002598762512207031, "__label__fashion_beauty": 0.0006885528564453125, "__label__finance_business": 0.0006690025329589844, "__label__food_dining": 0.0014858245849609375, "__label__games": 0.002197265625, "__label__hardware": 0.048370361328125, "__label__health": 0.0018796920776367188, "__label__history": 0.0011425018310546875, "__label__home_hobbies": 0.0013427734375, "__label__industrial": 0.00313568115234375, "__label__literature": 0.000743865966796875, "__label__politics": 0.0006270408630371094, "__label__religion": 0.0015497207641601562, "__label__science_tech": 0.269775390625, "__label__social_life": 0.0005335807800292969, "__label__software": 0.00652313232421875, "__label__software_dev": 0.625, "__label__sports_fitness": 0.0012826919555664062, "__label__transportation": 0.004634857177734375, "__label__travel": 0.0006103515625}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19436, 0.02786]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19436, 0.51374]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19436, 0.88269]], "google_gemma-3-12b-it_contains_pii": [[0, 2277, false], [2277, 5192, null], [5192, 6771, null], [6771, 9043, null], [9043, 11603, null], [11603, 13604, null], [13604, 15562, null], [15562, 16944, null], [16944, 18318, null], [18318, 19292, null], [19292, 19436, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2277, true], [2277, 5192, null], [5192, 6771, null], [6771, 9043, null], [9043, 11603, null], [11603, 13604, null], [13604, 15562, null], [15562, 16944, null], [16944, 18318, null], [18318, 19292, null], [19292, 19436, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19436, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19436, null]], "pdf_page_numbers": [[0, 2277, 1], [2277, 5192, 2], [5192, 6771, 3], [6771, 9043, 4], [9043, 11603, 5], [11603, 13604, 6], [13604, 15562, 7], [15562, 16944, 8], [16944, 18318, 9], [18318, 19292, 10], [19292, 19436, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19436, 0.23684]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
9c8bfd22028ff9d19fd685aeb1f496f38dc828f3
Scratchpad Allocation for Concurrent Embedded Software Vivy Suhendra, Abhik Roychoudhury, and Tulika Mitra Department of Computer Science, National University of Singapore {vivy, abhik, tulika}@comp.nus.edu.sg ABSTRACT Software-controlled scratchpad memory is increasingly employed in embedded systems as it offers better timing predictability compared to caches. Previous scratchpad allocation algorithms typically consider single process applications. But embedded applications are mostly multi-tasking with real-time constraints, where the scratchpad memory space has to be shared among interacting processes that may preempt each other. In this paper, we develop a novel dynamic scratchpad allocation technique that takes these process interferences into account to improve the performance and predictability of the memory system. We model the application as a Message Sequence Chart (MSC) to best capture the inter-process interactions. Our goal is to optimize the worst-case response time (WCRT) of the application through runtime reloading of the scratchpad memory content at appropriate execution points. We propose an iterative allocation algorithm that takes into account the potential interference patterns, and exploits this interference information to tune the scratchpad reloading points and content to best improve the WCRT. We evaluate our memory allocation scheme on a real-world embedded application controlling an Unmanned Aerial Vehicle (UAV). Categories and Subject Descriptors C.3 [Special-purpose and Application-based Systems]: Real-time and embedded systems General Terms Design, Performance Keywords Scratchpad memory, WCET, Message Sequence Chart 1. INTRODUCTION Scratchpad memory is a software-managed on-chip memory that has been widely accepted as an alternative to caches in real-time embedded systems, as it offers better timing predictability compared to caches. The compiler and/or the programmer explicitly controls the allocation of instructions and data to the scratchpad memory. Thus the latency of each memory access is completely predictable. However, this predictability is achieved at the cost of compiler support for content selection and runtime management. In this paper, we address the problem of scratchpad memory allocation for concurrent embedded software (with real-time constraints) running on uniprocessor or multiprocessor platforms. Our objective is to reduce the worst-case response time (WCRT) of the entire application. Our problem setting is representative of the current generation embedded applications (e.g., in automotive and avionics domain) that are inherently concurrent in nature and, at the same time, are expected to satisfy strict timing constraints. The combination of concurrency and real-time constraints introduces significant challenges to the allocation problem. Given a sequential application, the problem of content selection for scratchpad memory has been studied extensively [9, 10, 12, 14]. However, these techniques are not directly applicable to concurrent applications with multiple interacting processes. Figure 1 shows a Message Sequence Chart (MSC) model [1, 3] depicting the interaction among the processes in an embedded application. We use MSC model as it provides a visual but formal mechanism to capture the inter-process interactions. Visually, an MSC consists of a number of interacting processes each shown as a vertical line. Time flows from top to bottom along each process. 
A process in turn consists of one or more tasks represented as blocks along the vertical line. Message communications between the processes are shown as horizontal or downward sloping arrows. Semantically, an MSC denotes a labeled partial order of tasks. This partial order is the transitive closure of (a) the total order of the tasks in each process, and (b) the ordering imposed by message communications — a message is received after it is sent.

A naive allocation strategy can be to share the scratchpad memory among all the tasks of all the processes throughout the lifetime of the application. Allocation algorithms proposed in the literature for sequential applications can be easily adapted to support this strategy. However, this strategy is clearly sub-optimal, as a task executes for only a fraction of the application’s lifetime yet occupies its share of the memory space for the entire lifetime of the application. Instead, two tasks with disjoint lifetimes (e.g., tasks \( f_{m_0} \) and \( f_{m_4} \) in Figure 1) should be able to use the same memory space through time multiplexing. This is known as dynamic scratchpad allocation or scratchpad overlay, where the scratchpad memory content can be replaced and reloaded at runtime. As timing predictability is the main motivation behind the choice of scratchpad memory over caches, it should be maintained even in the presence of scratchpad overlay. This implies that in a concurrent system (e.g., as shown in Figure 1), two tasks \( t_1 \) and \( t_2 \) should be mapped to the same memory space only if we can guarantee that their lifetimes do not overlap. Our evaluation on a real-world embedded application controlling an Unmanned Aerial Vehicle (UAV) reveals that we can achieve significant performance improvement through appropriate content selection and runtime management of the scratchpad memory.

**Related Work.** The problem of content selection for scratchpad memory has been studied extensively for sequential applications. Most of these works [10, 14] aim to minimize average-case execution time or energy consumption through scratchpad allocation. Scratchpad content selection for minimizing the worst-case execution time (WCET) has been addressed as well [9, 12]. However, these techniques are not applicable when the scratchpad space needs to be shared among multiple interacting processes. The work by Verma et al. [15] presents a set of scratchpad sharing strategies among the processes for energy consumption minimization. However, [15] simply assumes a statically defined schedule whereas we consider priority-driven preemptive scheduling. Moreover, the scratchpad sharing decisions in [15] are not based on interactions and interference among the processes, which are critical in our case to provide real-time guarantees. Scratchpad sharing among different processing elements (PEs) in a multiprocessor system-on-a-chip has also been investigated. Here the focus is more on the mapping of code/data to the private scratchpad memories of the PEs so as to maximize the benefit from the scratchpad allocation [4]. Other techniques include exploration of the scratchpad hierarchy [2, 13] and runtime customization of scratchpad sharing and allocation among the PEs [5]. Finally, this work complements the research on cache-related preemption delay (CRPD) [6, 7], which provides timing guarantees for concurrent software by analyzing interferences in cache memory due to process interactions. Our work, on the other hand, eliminates interference in memory through scratchpad allocation.

2.
PROBLEM FORMULATION The input to our problem is in the form of Message Sequence Chart (MSC) [1, 3] that captures process interactions corresponding to a concurrent embedded application. We assume a preemptive, multi-tasking execution model. The application is periodic in nature. The MSC represents interactions within one such invocation where all processes involved should adhere to a common period and deadline. The underlying hardware platform contains one or more processing elements (PEs), each associated with a private scratchpad memory. A process (a vertical line in the MSC) typically corresponds to a specific functionality. It is thus natural to assign all the tasks in a process to one PE. The order in which the tasks appear on the process lifeline reflects their order of execution on the PE. In this paper, we assume zero communication delay between processes. However, our analysis can be easily adapted to include non-zero communication delays. Each process is assigned a unique static priority. The priority of a task is equal to the priority of the process it belongs to. A task \( t_1 \) in a process \( P \) may get preempted by a task \( t_2 \) from a higher priority process \( P' \). The assignment of static priorities to processes and the mapping of processes to PEs are inputs to our framework. Note that statically assigned priorities do not guarantee a fixed execution schedule at runtime. The preemptions and execution time variations depending on input lead to varying completion times of a task. This, in turn, gives rise to different execution schedules. We now formalize the problem definition. Let \( t_1, \ldots, t_N \) denote the tasks belonging to all the processes in the application. Each task \( t_i \) \( (1 \leq i \leq N) \) is associated with a period \( p_i \), a static priority \( r_i \) (the range of \( r_i \) is \([1, R]\) with 1 being the highest priority), and mapping to a PE \( P_{E_i} \) (the range of \( P_{E_i} \) is \([1, Q]\) where \( Q \) is the number of PEs in the system). As mentioned before, all the tasks belonging to a process have the same priority and are mapped to the same PE. Further, let \( c_{t_i} \) denote the \textit{uninterrupted} worst-case execution time (WCET) of the task \( t_i \) running on \( P_{E_i} \), in isolation. The estimation of \( c_{t_i} \) does not assume any scratchpad allocation, i.e., all the accesses incur the main memory latency. Let \( S \) be a particular scratchpad allocation for the application. In this work, we consider allocating program codes into the scratchpad. The method applies similarly to data allocation. \( S \) consists of two components: (1) the amount of scratchpad space \( space(t_i) \) allocated to each task, and (2) the allocation of \( space(t_i) \) among the code blocks of \( t_i \). Note that as we allow scratchpad overlay, the same scratchpad memory space can be allocated to two or more tasks as long as they have disjoint lifetimes. Let \( Mem_i \) denote the set of all code blocks of \( t_i \) available for allocation. Given \( space(t_i) \), the allocation \( Alloc(t_i) \subseteq Mem_i \) is the set of most profitable code blocks from \( t_i \) to fit the capacity. Finally, WCET of \( t_i \) as a result of allocation \( S \) is denoted as \( wcet(t_i, S) \). 
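To make the notation above concrete, here is a small C sketch of the problem inputs (period, priority, PE mapping, uninterrupted WCET, and the per-block memory profile) and of how \( wcet(t_i, S) \) follows from a chosen allocation. All type and field names are ours, introduced purely for illustration.

```c
/* Illustrative data model for the allocation problem; names are ours. */
#include <stddef.h>

typedef struct {
    double gain;       /* freq_b times latency reduction if placed on-chip */
    size_t area;       /* code size of the block, in bytes                 */
    int    allocated;  /* 1 if the block is in Alloc(t_i)                  */
} CodeBlock;

typedef struct {
    double     period;      /* p_i                                    */
    int        priority;    /* r_i, 1 = highest                       */
    int        pe;          /* PE_i, the PE the task is mapped to     */
    double     wcet_nomem;  /* c_i: WCET with everything off-chip     */
    size_t     space;       /* space(t_i): scratchpad bytes assigned  */
    CodeBlock *blocks;      /* Mem_i, the candidate code blocks       */
    size_t     nblocks;
} Task;

/* wcet(t_i, S): the uninterrupted WCET under the current allocation. */
double task_wcet(const Task *t)
{
    double w = t->wcet_nomem;
    size_t b;
    for (b = 0; b < t->nblocks; b++)
        if (t->blocks[b].allocated)
            w -= t->blocks[b].gain;
    return w;
}
```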
Given an allocation \( S \) and the corresponding WCET of the tasks, we can estimate the \textit{lifetime} of each task, defined as the interval between the lower bound on the start time \( Start(t_i, S) \) and the upper bound on the finish time \( Finish(t_i, S) \) of the task. This estimation should take into account the dependencies among the tasks (the total order among the tasks within a process and the ordering imposed by message communication) as well as preemptions. The WCRT of the whole application is now given by

\[ WCRT = \max_{1 \leq i \leq N} Finish(t_i, S) - \min_{1 \leq i \leq N} Start(t_i, S) \quad (1) \]

Our goal is to construct the scratchpad allocation \( S \) that minimizes the WCRT of the application.

### 3. METHOD OVERVIEW

Our proposed method is an iterative scheme (Figure 2), which we will elaborate below. Figure 3(a) shows an MSC extracted from our case study for the purpose of illustration.

**Task Analysis.** We analyze each task to determine its WCET with a given scratchpad allocation (initially empty), along with the area and the gain of allocating each of its code blocks. The WCET will serve as input to the WCRT analysis, while the memory profile will be used to choose the scratchpad content for the task at the allocation step. We handle code allocation in this paper, for which possible choices of granularity are a basic block (which we use here) or a function. The gain of allocating a block of code, in terms of execution time saving, is the execution frequency of the block multiplied by the reduction in latency obtained by fetching the block from scratchpad instead of from main memory. As we are considering systems with real-time constraints, the execution frequencies correspond to the worst-case execution path, which in turn is obtained via static analysis [11]. The worst-case execution path may shift as allocation decisions change; thus task profiles should be updated following each change in the course of the iterative improvement. We adapt our previous work on WCET-centric allocation of data to scratchpad memory for a single task [12] for this purpose.

**WCRT Analysis.** The WCRT of a task \( t_i \) is a function of its WCET value and the delay caused by higher priority tasks whose lifetimes overlap with that of \( t_i \). A fixed-point iteration computes this value by finding the root of the equation

\[ x = g(x) = wcet(t_i, S) + \sum_{t_j \in \mathit{intf}(t_i)} wcet(t_j, S) \times \left\lceil \frac{x}{p_j} \right\rceil \]

where

\[ \mathit{intf}(t_i) = \{ t_j \mid r_j < r_i \text{ and } [Start(t_j, S), Finish(t_j, S)] \cap [Start(t_i, S), Finish(t_i, S)] \neq \emptyset \} \]

If we denote this WCRT value as \( wcrt(t_i, S) \), we have for each task \( t_i \):

\[ Finish(t_i, S) = Start(t_i, S) + wcrt(t_i, S) \]

The partial ordering of tasks in the MSC imposes the constraint that a task \( t_i \) can start execution only after all its predecessors have completed execution. In other words, \( Start(t_i, S) \geq Finish(u, S) \) for all tasks \( u \) preceding \( t_i \) in the partial order of the MSC. Observing these rules, the WCRT analysis computes the lifetimes of all tasks in all processes. After the analysis, we construct the task interference graph for the purpose of scratchpad allocation. Figure 3(b) shows the task lifetimes computed by the WCRT analysis and the constructed interference graph given the MSC in (a).
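A compact sketch of the fixed-point computation of \( wcrt(t_i, S) \) described above, assuming the interference set \( \mathit{intf}(t_i) \) has already been determined; the array-based representation and the names are ours, and divergence (an unschedulable task) is not handled.

```c
#include <math.h>

/* Response time of task i under allocation S, by fixed-point iteration:
 *   x = wcet_i + sum over interfering higher-priority tasks j of
 *       wcet_j * ceil(x / p_j)
 * intf[] holds the indices of the tasks in intf(t_i). */
double response_time(double wcet_i,
                     const double *wcet, const double *period,
                     const int *intf, int n_intf)
{
    double x = wcet_i, prev;

    do {
        int k;
        prev = x;
        x = wcet_i;
        for (k = 0; k < n_intf; k++) {
            int j = intf[k];
            x += wcet[j] * ceil(prev / period[j]);
        }
    } while (x > prev);   /* g is monotone; stop when no further growth */

    return x;
}
```

In the full analysis this value feeds back into the task lifetimes, which may change the interference sets, so the computation is repeated until the lifetimes stabilize.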
An edge between two nodes in the interference graph implies overlapping lifetimes of the two tasks represented by the nodes. **Scratchpad Sharing Scheme & Allocation.** Based on the interference pattern resulting from the WCRT analysis, we can construct a scratchpad sharing scheme among tasks on the same PE. One possible scheme is illustrated in Figure 3(c), which shows the space sharing among tasks as well as the dynamic overlay over time. The schemes will be elaborated in the next section. The sharing scheme determines the space allocated to each task, and the memory profiles obtained in the first step are used to select the most beneficial scratchpad content. The selection strategy is based on our previous work that aims to minimize the task WCET, taking into account the possible shift in worst-case execution path [12]. After allocation is performed, the task analysis is re-applied to find the new WCET. **Post-Allocation Analysis.** Given updated task WCETs after allocation, the WCRT analysis is performed once again to compute updated task lifetimes. There is an important constraint to be observed in the WCRT analysis when the allocation decision has been made. The new WCET values have been computed based on the current scratchpad allocation, which is in turn decided based on the task interference pattern resulting from the previous analysis. In particular, scratchpad overlays have been decided among tasks determined to be interference-free. Therefore, these values are only valid for the same interference pattern, or for patterns with less interference. To understand this, suppose the interference graph in Figure 3(b) leads to the allocation decision in (c). The reduction in WCET due to the allocation in turn reduces task response times and changes task lifetimes to the one shown in (d). However, this computation of lifetimes is incorrect, because it assumes the WCET value of \( f_{s0} \) given that it can occupy the assigned scratchpad space throughout its execution. If \( fm_4 \) is allowed to start earlier after its predecessor \( fr_1 \) as shown in (d), it may in fact preempt \( f_{s0} \), flushing the scratchpad content of \( f_{s0} \) and causing additional delay for reload when \( f_{s0} \) resumes. Indeed, we see that the interference graph in (d) has an added edge from \( fm_4 \) to \( f_{s0} \). To avoid this unsafe assumption, we need to maintain that tasks known not to interfere when allocation decision is made will not become interfering in the updated lifetimes. This is accomplished by introducing slack that forces the latter task to “wait out” the conflicting time windows. The adapted WCRT analysis consults existing interference graph and adjusts \( \text{Start}(fm_4, S) \) such that \( \text{Start}(fm_4, S) \geq \text{Finish}(f_{s0}, S) \). Figure 3(e) shows the adjusted schedule, which maintains the same interference graph as (a) by forcing \( fm_4 \) to start after \( f_{s0} \) has completed. With a more sophisticated sharing/allocation scheme and schedule adjustment as we will introduce next, we can sometimes remove existing task interferences without adding interference elsewhere. When this happens, we iterate over the allocation and analysis steps to enhance current decision, until no more improvement can be made (Figure 2). As task interferences are enforced to be non-increasing, the iteration is guaranteed to terminate. **4. ALLOCATION METHODS** This section describes the scratchpad allocation routine, which is the focus of our paper. 
As only one task will be running on the PE at any given time, we can actually utilize the whole scratchpad space for the single executing task. The concern arises when a task is preempted, as flushing the scratchpad content will cause additional reloading delay when the task resumes. In that case, it may be beneficial to reserve a portion of the scratchpad for each of the tasks (space-sharing), thus avoiding the need to flush and reload the scratchpad memory at each preemption/resume. On the other hand, two tasks guaranteed to never interfere with each other can share the same space via overlay (time-sharing). In Figure 3(b), tasks \( fm_2 \) and \( fr_0 \) are space-sharing tasks, while task \( fm_1 \) in time window \( W_1 \) has a time-sharing relationship with all tasks in time window \( W_2 \). The various schemes are illustrated in Figure 4. The left side of each picture shows task lifetimes as determined by the WCRT analysis, and the right side sketches the state of the scratchpad memory under the different allocation schemes. For the purpose of comparing the scratchpad state, the lifetime of each task has been drawn with the same height across the different schemes (with the exception of CR). In reality, the heights (representing the length of task runtime) will vary due to the different allocation decisions.

**Profile-based Knapsack (PK).** As the baseline method, we consider a profile-based static allocation method. In this scheme, all tasks executing on the same PE share the PE’s scratchpad space throughout the application lifetime. The allocation decision does not consider the possible interferences (or lack thereof) within the PE. Partitioning and scratchpad allocation for each PE \( q \) can be simultaneously optimized via an Integer Linear Programming (ILP) formulation. The objective is to minimize the combined WCET weighted by task periods, defined as

\[ \sum_{t_j : PE_j = q} wcet(t_j, S) / p_j \]

where

\[ wcet(t_i, S) = c_i - \sum_{b \in Alloc(t_i)} freq_b \times area_b \times \Delta \]

Recall that $c_i$ is the running time of $t_i$ when all code blocks are fetched from the main memory, and $Alloc(t_i) \subseteq Mem_i$ is the selected set of code blocks of $t_i$ in scratchpad allocation $S$. $freq_b$ and $area_b$ are respectively the execution frequency on the worst-case path and the area occupied by block $b$. The term $\Delta$ denotes the savings in execution time per unit area due to the scratchpad allocation. Given the scratchpad size $cap_q$ attached to PE $q$, the capacity constraint is expressed as

$$\sum_{t_i : PE_i = q} \; \sum_{b \in Alloc(t_i)} area_b \leq cap_q$$

For allocation of program code into the scratchpad, an additional constraint is needed to maintain correct control flow [10]. If two sequential basic blocks are allocated in different memory areas (i.e., one in scratchpad and one in main memory), then a jump instruction should be inserted at the end of the earlier block. Figure 4(a) shows the scratchpad partitioning produced by PK. As the allocation decision does not depend on task interference, PK executes for only one round; no iterative improvement can be made.

**Interference Clustering (IC).** In this second method, we use the task lifetimes determined by the WCRT analysis to form interference clusters. Tasks whose lifetimes overlap at some point are grouped into the same cluster.
They will share the scratchpad for the entire duration of the common time window, from the earliest start time to the latest finish time among all tasks in the cluster. The same partitioning/allocation routine used in PK is employed among all tasks in the same cluster. The left part of Figure 4(b) shows the clustering decision for the given task schedule. $fm_2$ as well as $fm_4$ have been identified as having no interference from any other task. Each of them is placed in a singleton cluster and enjoys the whole scratchpad space during its lifetime. **Graph Coloring (GC).** The IC method is prone to produce large clusters due to transitivity. In Figure 4(b), even though $fm_2$ and $fs_0$ do not interfere with each other, their independent interferences with $fr_2$ end up placing them in the same cluster. Because of this, simply clustering the tasks will likely result in inefficient decisions. The third method attempts to enhance the allocation within the clusters formed by the IC method by making use of the task-to-task interference relations captured in the interference graph. If we apply graph-coloring to this graph, the resulting colors will give us groups of tasks that do not interfere with each other within the cluster. Tasks assigned to the same color have disjoint lifetimes, therefore can reuse the same scratchpad space via further overlay. Graph coloring using the minimum number of colors is known to be NP-Complete. We employ the Welsh-Powell algorithm [16], a heuristic method that assigns the first available color to a node, without restricting the number of colors to use. Given the interference graph, the algorithm can be outlined as follows. 1. Initialize all nodes to uncolored. 2. Traverse the node in decreasing order of degree, assigning color 1 to a node if it is uncolored and no adjacent node has been assigned color 1. 3. Repeat step 2 with colors 2, 3, etc. until no node is uncolored. After we obtain the color assignment, we formulate the scratchpad partitioning/allocation with the refined constraint that a task $t_i$ with assigned color $k_i$ can occupy at most the space allocated for $k_i$, denoted by $area(k_i)$. The scratchpad space given to all $K$ colors used for PE $q$ add up to the total capacity $cap_q$ as expressed below. $$\sum_{b \in \text{Alloc}(t_i)} area_b \leq area(k_i); \sum_{k=1}^K area(k_i) \leq cap_q$$ Figure 4(c) shows the further partitioning within the second cluster formed by IC. $fm_2$ and $fs_0$ have been assigned the same color, and allocated the same partition of the scratchpad to occupy at different time windows. The similar decision applies to $fr_0$ and $fr_1$. The partition will be reloaded with the relevant task content when execution transfers from one task to another. **Algorithm 1: The CR algorithm** **Critical Path Interference Reduction (CR).** While the above three schemes try to make the best out of the given interference pattern, the final method that we propose turns the focus to reducing the interference instead. This is motivated by the observation that allocation decisions are often compromised by heavy interference. When the analysis recognizes a potential preemption of one task by another, both tasks will have to do space-sharing; in addition, the lifetime window of the preempted task must make allowance for the time spent waiting for the preempting task to complete. In an extreme case, if a task $t_1$ is released right before a higher priority task $t_2$, it must wait for practically the entire execution duration of $t_2$. 
In this case, suppressing the release time of $t_1$ until $t_2$ completes can only be beneficial: $t_1$’s waiting time is still the same, yet no preemption cost is incurred, and a better allocation decision can be made for both tasks. In general, this is a good strategy when the waiting time far outweighs the actual computation time of the task. The method proceeds as shown in Algorithm 1. We first work on the schedule produced by the WCRT analysis to improve the interference pattern. In choosing which interference to eliminate, we naturally look at the critical path of the application, as determined by the WCRT analysis. We consider all interferences in which tasks on the critical path are preempted or have to wait for tasks with higher priority (line 4). From these, we choose the interference that occupies the longest time window, that is, one in which the higher-priority task has the longest WCET (line 7). We eliminate this interference by forcing a delayed start time for the affected task (line 9), then propagate the shift to all tasks by re-running the WCRT analysis. Certainly, new interferences are not allowed to arise in this step. From the new schedule, we again consider preemptions on the critical path, which may or may not have shifted. The elimination and re-analysis are iterated until no more interferences can be eliminated from the critical path. We then proceed to perform scratchpad partitioning/allocation as in the GC scheme on this improved interference graph. In Figure 4(d), the interference between $fs_0$ and $fr_1$ has been eliminated by letting $fr_1$ wait out the lifetime of $fs_0$ instead of starting immediately after the completion of its predecessor $fr_0$. This improvement frees $fr_1$ from all interference. It can now occupy the whole scratchpad memory throughout its lifetime. to the processes, we vary the number of PEs from 1 to 4. In the 4-PE case, processes operate in one of two modes: manual and automated. The original implementation as shown uses 2 PEs with a scratchpad configuration, after applying the four discussed schemes. The proposed scheme CR gives the best WCRT improvement over all other schemes. This justifies the strategy of eliminating critical interferences via slack enforcement, whenever any additional delay is incurred can be overshadowed by the gain through a better scratchpad sharing and allocation scheme. Finally, comparison of algorithm runtimes in the tables of Figure 5 shows that all schemes are reasonably efficient and no scalability issue is evident. 6. CONCLUDING REMARKS In this paper, we have done a detailed study of scratchpad allocation schemes for concurrent embedded software running on single or multiple processing elements. The novelty of our work stems from taking into account both concurrency and real-time constraints in our scratchpad allocation. Our allocation schemes consider (i) communication or interaction among the threads or processes of the application, as well as (ii) interference among the threads or processes due to preemptive scheduling in the processing elements. As the interactions and interference among the processes can greatly affect the worst-case response time (WCRT) of a concurrent application, our scratchpad allocation methods achieve substantial reduction in WCRT as evidenced by our experiments. 7. ACKNOWLEDGMENTS This work is partially supported by NUS research projects R252-000-292-112 and R252-000-321-112. 8. REFERENCES
{"Source-Url": "http://www.cecs.uci.edu/~papers/esweek08/codes/p37.pdf", "len_cl100k_base": 6112, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 24042, "total-output-tokens": 7264, "length": "2e12", "weborganizer": {"__label__adult": 0.0006189346313476562, "__label__art_design": 0.0006623268127441406, "__label__crime_law": 0.0007038116455078125, "__label__education_jobs": 0.000576019287109375, "__label__entertainment": 0.0001373291015625, "__label__fashion_beauty": 0.0002903938293457031, "__label__finance_business": 0.00045228004455566406, "__label__food_dining": 0.000598907470703125, "__label__games": 0.0013294219970703125, "__label__hardware": 0.0167236328125, "__label__health": 0.0009660720825195312, "__label__history": 0.0004627704620361328, "__label__home_hobbies": 0.0002081394195556641, "__label__industrial": 0.00122833251953125, "__label__literature": 0.0002570152282714844, "__label__politics": 0.0004656314849853515, "__label__religion": 0.0007076263427734375, "__label__science_tech": 0.23779296875, "__label__social_life": 8.040666580200195e-05, "__label__software": 0.0089569091796875, "__label__software_dev": 0.72412109375, "__label__sports_fitness": 0.0005674362182617188, "__label__transportation": 0.0018520355224609375, "__label__travel": 0.0003104209899902344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29951, 0.02091]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29951, 0.32247]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29951, 0.90101]], "google_gemma-3-12b-it_contains_pii": [[0, 5071, false], [5071, 7406, null], [7406, 12731, null], [12731, 19348, null], [19348, 26125, null], [26125, 29951, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5071, true], [5071, 7406, null], [7406, 12731, null], [12731, 19348, null], [19348, 26125, null], [26125, 29951, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29951, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29951, null]], "pdf_page_numbers": [[0, 5071, 1], [5071, 7406, 2], [7406, 12731, 3], [12731, 19348, 4], [19348, 26125, 5], [26125, 29951, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29951, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
050a160c42d75a558edf2413b3b4ea1ba4e2db1e
CodeQA: A Question Answering Dataset for Source Code Comprehension

Chenxiao Liu, Xiaojun Wan
Wangxuan Institute of Computer Technology, Peking University
The MOE Key Laboratory of Computational Linguistics, Peking University
{jslcx, wanxiaojun}@pku.edu.cn

Abstract

We propose CodeQA, a free-form question answering dataset for the purpose of source code comprehension: given a code snippet and a question, a textual answer is required to be generated. CodeQA contains a Java dataset with 119,778 question-answer pairs and a Python dataset with 70,085 question-answer pairs. To obtain natural and faithful questions and answers, we implement syntactic rules and semantic analysis to transform code comments into question-answer pairs. We present the construction process and conduct systematic analysis of our dataset. Experiment results achieved by several neural baselines on our dataset are shown and discussed. While research on question answering and machine reading comprehension develops rapidly, little prior work has drawn attention to code question answering. This new dataset can serve as a useful research benchmark for source code comprehension.

1 Introduction

Question Answering (QA) is the task of answering questions given a context about which the questions are being asked. With the advancement of deep learning and the availability of large-scale data, QA has received increasing attention from researchers. In recent years, QA has been applied to broad application domains, such as news (Hermann et al., 2015; Trischler et al., 2016), science (Khot et al., 2018; Hardalov et al., 2020), movies (Miller et al., 2016), the medical field (Pampari et al., 2018), etc. Among QA’s wide applications, code QA is an appealing application scenario on account of the distinctive nature of code, which differs from text. In this study, we focus on generating QA pairs for source code for the purpose of source code comprehension. QA-based source code comprehension is the ability to read a code snippet and then answer questions about it, which requires understanding both source code and natural language. Take the question “What does the code insert at the specified index?” together with the code snippet in Table 1 as an example.

    public void insertChildAt(Element child, int index){
        setChildParent(child);
        children.add(index, child);
    }
    Question: What does the code insert at the specified index?
    Answer: The given child.

Table 1: A question-answer pair for a sample code snippet in the QA-based source code comprehension task.

To answer the question, one probably first reads the source code carefully and figures out that the method adds the given parameter “child” at the specified index into the object “children”. Thus one gives a proper answer: “The given child.”. Compared with the code summarization task (Haiduc et al., 2010), which generates comments for code, the QA-based code comprehension task introduces more specific guidance and more explicit signals for models on what to generate. It covers more granularity levels, ranging from method to variable, rather than regarding several lines of code as a whole. Besides, it is easier to evaluate since the output is more succinct, constrained and targeted (Kryściński et al., 2019). QA-based source code comprehension has direct use in education to facilitate programming learning, where a system automatically answers questions about code that someone has read. A more general use is to help improve software maintenance, since it can advance the readability of code.
Moreover, it can provide diverse information that can be leveraged to help perform a wide range of software engineering tasks, such as bug detection, specification inference, testing and code synthesis. However, constructing a code QA dataset for source code comprehension is very challenging. Naturally occurring QA pairs on the Web are often complicated, noisy and contain information that cannot be inferred from the source code. We tried to collect naturally occurring QA pairs from source code management platforms like Github, QA sites for programmers like StackOverflow, programming online judge platforms like Leetcode. But these QA pairs usually rely on knowledge apart from source code, which makes it problematic to disentangle modeling weakness from data noise. An alternative is to have experienced developers write QA pairs for source code from scratch, which is inefficient and cost-intensive. In this study, we introduce a new data construction process to address the above challenges and propose CodeQA, a free-form question-answering dataset. First, to ensure that QA pairs are natural and clean, we utilize code comments as data source. We pick out two large-scale well-studied datasets from Github - a Java dataset and a Python dataset. Then we select comments that are suitable to generate QA pairs from these datasets. Targeting at generating various types of QA pairs such as Wh-questions and Yes/No questions, we implement syntactic rules and semantic analysis to transform comments into QA pairs. More specifically, comments are transformed into dependency trees and converted to fit question templates that are invoked by semantic role labels. We also analyze the verbal group of comments and generate Yes/No questions. After that, QA pairs with ambiguous answers and unbalanced counts of Yes/No are filtered. Due to the varied nature of code comments, CodeQA covers a variety of information containing in codes, ranging from method to variable. We analyze our dataset and classify all the generated QA pairs into four categories: functionality, purpose, property and workflow. Our experiments with several baseline models demonstrate that neural models struggle to generate correct answers. These results suggest that our dataset could serve as a useful groundwork for QA-based source code comprehension. Prior work (Bansal et al., 2021) built a dataset for code QA, but only a third of their questions are free-form. The main differences between our work and prior work are: first, all of our questions are free-form; second, our questions have diverse textual expressions and they are asking about information of various granularity in code, while questions in prior work are mostly fixed and on the single granularity. Therefore, the contributions of this paper are as follows. - We propose the QA-based source code comprehension task and introduce a large-scale question-answering dataset containing 119,778 QA pairs for 56,545 Java codes, and 70,085 QA pairs for 44,830 Python codes. As far as we know, it is the first diverse free-form QA dataset specially built for source code comprehension. The dataset is available at https://github.com/jadexliu/CodeQA. - We present a data construction process to generate code QA pairs based on code comments and advance a taxonomy to classify code QA pairs into four categories. - We provide several baselines to evaluate the QA-based source code comprehension task. Experimental results demonstrate this dataset could serve as a useful benchmark for model and metric development. 
2 Related Work

2.1 Question Answering
Question answering has a long history and has attracted increasing attention in recent years. Question answering tasks are usually divided into four categories (Chen, 2018; Qiu et al., 2019; Liu et al., 2019a): cloze tests, multiple-choice, span prediction and free-form answering. Examples of QA datasets in these categories are CNN & Daily Mail (Hermann et al., 2015), RACE (Lai et al., 2017), SQuAD (Rajpurkar et al., 2016) and MS MARCO (Nguyen et al., 2016), respectively. Compared with the other categories, free-form answering tasks are superior in terms of understanding and flexibility, and they are the closest to practical applications. However, the flexibility of the answer form makes such datasets difficult to build (Liu et al., 2019a). On these datasets, earlier work in question answering employed rule-based and machine-learning-based methods. Recent deep-learning techniques leverage neural networks with different attention mechanisms and pre-trained text representations (Yamada et al., 2020; He et al., 2020), improving the ability to extract contextual information and to model context-question interaction. In code QA, Bansal et al. (2021) designed a context-based QA system for basic questions about subroutines and evaluated the system with an RNN-based encoder-decoder network. They define a "basic" question as a question about a small detail of a method, such as "What are the parameters to the method?". These questions can be answered by parsing the code, without source code comprehension. The remaining questions are similar to code summarization, such as "What does method do?". Compared with existing work, we construct a more complex and challenging QA dataset for the purpose of code comprehension, which requires the understanding of both source code and natural language.

2.2 Code Summarization
Code summarization is the task of creating readable summaries that describe the functionality of a code snippet. Neural source code summarization approaches frame the problem as a sequence generation task (Iyer et al., 2016) and use encoder-decoder networks with attention mechanisms. Some approaches utilize the structural information of code, such as Code2seq proposed by Alon et al. (2018), DeepCom proposed by Hu et al. (2018a), and ast-attendgru proposed by LeClair et al. (2019). Structural information can also be encoded with tree-structured encoders such as Tree-LSTM (Shido et al., 2019) and Tree-Transformer (Harer et al., 2019), or with Graph Neural Networks (LeClair et al., 2020). Besides, other techniques like reinforcement learning (Wan et al., 2018), dual learning (Wei et al., 2019), retrieval-based techniques (Zhang et al., 2020) and language-agnostic representation learning (Zügner et al., 2021) further enhance code summarization models. Recently, neural architectures like the Transformer (Vaswani et al., 2017) and large pre-trained models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Liu et al., 2019b) have brought improvements on the code summarization task. Representative works are the Transformer designed for code (Ahmad et al., 2020) and CodeBERT (Feng et al., 2020), a model pre-trained on the CodeSearchNet (Husain et al., 2019) dataset for programming and natural languages. Among these noteworthy works, two datasets, a Java dataset (Hu et al., 2018b) and a Python dataset (Barone and Sennrich, 2017), are widely used in experiments. We construct our QA dataset based on these two datasets.
3 The Code QA Task
In this work, we focus on building a dataset and setting up baselines for the following code QA task: given a source code $c$ and a natural language question $q$ about $c$, a free-form textual answer $a$ is required to be generated. The textual answer $a$ may be a word, a phrase or a sentence, and it usually cannot be directly extracted from the source code. This task is different from traditional machine reading comprehension, as code is very different from text and programming knowledge is needed to understand it. The source code and the natural language question are in two different languages, and a QA system should have the ability to understand both the code language and the natural language. Moreover, the system needs to generate an answer faithful to the question and the corresponding code, rather than extract some tokens from the code. Therefore, the task is very challenging, and it is both meaningful and timely to construct and release a large-scale dataset for it.

4 Dataset Construction

4.1 Data Source
We construct our code QA dataset based on two code summarization datasets. The first is a parallel corpus consisting of about a hundred thousand Python methods with descriptions written by their own programmers, collected from Github (Barone and Sennrich, 2017). Each source code object contains a "docstring" (documentation string), which is retained at runtime as metadata. Programmers use docstrings to describe the functionality, interface or usage of code objects. Docstrings are extracted as natural language descriptions for code summarization tasks. The second is a parallel corpus of over seventy thousand Java code-comment pairs from Github (Hu et al., 2018b). The dataset contains Java methods and the corresponding Javadoc comments. These comments describe the functionality of the Java methods and are taken as code summaries. Code comments like Python docstrings and Javadocs can be viewed as the source of QA pairs. As code comments are deemed faithful to the code snippets, the QA pairs generated from the code comments are also faithful to the code snippets. Note that the code comments are only used for generating QA pairs; they are not provided in the final code QA dataset. The comment taxonomy constructed by prior work (Zhai et al., 2020) illustrates that the content of comments can be classified into five perspectives: what, why, how-it-is-done, property and how-to-use. These diverse perspectives provide rich information that can be mined and transformed into QA pairs. Thus, we can generate question-answer pairs for code snippets by identifying potential answers in code comments, such as asking about the constraints or intentions of the key components of the code occurring in the comments.
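As a minimal illustration of why docstrings are a convenient data source, the sketch below pulls the comment out of a Python method with the standard-library ast module; the function body is hypothetical, and the docstring reuses a comment example discussed in Section 4.2 below. This is only an illustration, not the preprocessing pipeline used to build the corpora above.

```python
# Extract a docstring from a Python function with the standard library's ast module.
# The function body below is hypothetical; only the docstring matters here.
import ast

source = '''
def attach_votes(queryset):
    """Attach votes count to each object of the queryset."""
    for obj in queryset:
        obj.votes = count_votes(obj)
    return queryset
'''

func = ast.parse(source).body[0]   # the FunctionDef node
print(ast.get_docstring(func))     # -> Attach votes count to each object of the queryset.
```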
4.2 Comment Selection
Not all comments are suitable for generating QA pairs. We thus define a selection process to help filter out noisy comments. Most comments lack a subject, such as "attach votes count to each object of the queryset", which starts with a verb; the intended meaning is "the code attaches votes count to each object of the queryset". Incomplete sentences make parsing in the following stage more difficult. So if a comment lacks a subject, we add "the code" at the beginning of the sentence. Besides, we filter out incomplete comments (still under development) and comments unrelated to the corresponding code. Such comments are identified by keywords including "TODO", "license", "ownership", etc. (Pascarella and Bacchelli, 2017).

4.3 Question and Answer Formulation
Typically, questions can be divided into several types (Day and Park, 2005): general questions, with Yes/No answers; wh-questions, starting with what, where, when and so on; and choice questions, where options are given inside the question. Since the questions in this work are converted from code comments, we focus on general questions and wh-questions, and leave choice questions as future work. To obtain wh-questions and general questions from code comments, we implement rule-based and template-based methods to convert syntactic and semantic representations into QA pairs. For wh-questions, we make use of dependency parsing (DP) and semantic role labeling (SRL). For one thing, we transform comments into dependency trees in the format of Universal Dependencies (UD) (De Marneffe et al., 2014) using the allennlp parser (Gardner et al., 2018). We extract a potential answer where a verb is headed by a few dependency nodes in the dependency tree, with the help of semantic role labels (SRL) according to the PropBank 1.0 specifications (Palmer et al., 2005). The SRL model is provided by Gardner et al. (2018). For another, we extract the roles of each predicate occurring in the comment via SRL. According to PropBank, the roles are proto-agent, proto-patient, location, direction, manner, extent, cause, etc. Roles like location and direction are classified as modifiers, which can formulate our answers. In the end, we use some of the predefined handwritten templates in Dhole and Manning (2020) to generate QA pairs. Table 2 presents a few templates and examples to describe the construction of questions and answers. The first three rows are from dependency heuristics and the others are from SRL heuristics. The detailed process of wh-question formulation is provided in Appendix A. With respect to general questions, we analyze the verbal group of the comment and generate a Yes/No question for every predicate that has a finite verb; if a comment contains multiple predicates, we generate multiple Yes/No questions. First, we select a clause for the current predicate and may rearrange the sequential position of the semantic role labeling arguments. The standard declarative word order is preserved when generating a question. When the verb is copular or modal, or when an auxiliary be/have/do is already present, we do not provide do-support. Otherwise, we add do-support and may move adjunct arguments relative to the main verb. Since the negation label of the main verb in the verbal group indicates the polarity, following Flor and Riordan (2018), we do not transfer the negation into the generated question but instead flip the answer from "yes" to "no" (and vice versa). For example, from "windows don't have a mode like linux cli example", we derive the question "Do windows have a mode like linux cli example?" and the answer "No" (a simplified sketch of this flip is given below). Since some comments cannot be successfully parsed to generate QA pairs, we construct 115,807 QA pairs from 44,867 code snippets in the Python dataset, and 203,229 QA pairs from 56,583 code snippets in the Java dataset.
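The snippet below is a deliberately simplified sketch of the negation-flip idea for general questions; the actual pipeline relies on SRL and full verbal-group analysis rather than pre-split fields.

```python
# Simplified do-support for a Yes/No question; copulas, modals and existing
# auxiliaries are handled separately in the rules described above.
def yes_no_question(subject, verb_lemma, remainder, negated):
    question = f"Do {subject} {verb_lemma} {remainder}?"
    # The negation is not carried into the question; the answer is flipped instead.
    answer = "No" if negated else "Yes"
    return question, answer

print(yes_no_question("windows", "have", "a mode like linux cli example", negated=True))
# -> ('Do windows have a mode like linux cli example?', 'No')
```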
4.4 Postprocessing
To generate high-quality code QA pairs, we filter out QA pairs that have ambiguous answers, such as answers consisting only of pronouns. Since some comments start with "the method" or "this function", and since we add "the code" at the beginning of some comments during preprocessing, some generated QA pairs ask about the subject and receive answers like "this method". We also filter out these QA pairs, as they do not provide specific information about the code snippets. Besides, the original proportion of Yes questions is far too high, giving a heavily uneven ratio of Yes to No questions in our dataset. We therefore delete the majority of Yes questions to achieve a relative balance between Yes and No questions. Then we split each dataset into training, development and test sets in the proportion 8 : 1 : 1 after shuffling the pairs.

5 Dataset Analysis
In this section, we introduce the overall statistics of our dataset and verify its free-form characteristic. To explore the distribution of different kinds of source code QA pairs, we propose a taxonomy of four types of source code comprehension.

5.1 Overall Statistics
Table 3 describes the basic statistics of our CodeQA dataset. In the Java dataset, there are 95,778 training pairs, 12,000 development pairs and 12,000 test pairs. In the Python dataset, there are 56,085 training pairs, 7,000 development pairs and 7,000 test pairs. We calculate the percentage of answers in the development or test set that can be found in the training set (excluding Yes/No answers). Besides, we calculate the percentage of answers in all sets that are spans of the code, and the percentage of answers whose every token can be extracted from the code, as shown in Table 4. The statistics attest to the free-form nature of our dataset.

Table 5: Distribution of different QA pairs among 100 randomly chosen samples.

Table 6: Distribution of questions after automatic partitioning.

<table>
<thead>
<tr>
<th>Question Type</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>What</td>
<td>67.24%</td>
</tr>
<tr>
<td>How</td>
<td>8.93%</td>
</tr>
<tr>
<td>Where</td>
<td>5.85%</td>
</tr>
<tr>
<td>When</td>
<td>6.89%</td>
</tr>
<tr>
<td>Why</td>
<td>1.02%</td>
</tr>
<tr>
<td>For what purpose</td>
<td>5.08%</td>
</tr>
<tr>
<td>Yes/No</td>
<td>2.86%</td>
</tr>
<tr>
<td>Other</td>
<td>2.13%</td>
</tr>
</tbody>
</table>

5.2 Categorization
We are not aware of an agreed-upon typology of all code QA types. Categorizations of different types of code summarization exist (Pascarella and Bacchelli, 2017; Zhai et al., 2020), but the provided categorizations differ and are in general produced by manual coding. Bansal et al. (2021) generated QA pairs about code and divided the questions into six types, including basic extractive question types like "What is the return type of method?" and "What are the parameters of method?", and question types equivalent to code summarization, i.e. "What does method do?". After consulting both prior works and a separate part of the training data, we characterize the data into the following four types. (1) Functionality. It provides a definition of the range of functions that the subject and/or its interface can perform. (2) Purpose. It explains the reason why the subject is provided or the design rationale of the subject. (3) Property. It declares properties of the subject, such as pre-conditions and post-conditions of a method or of some statements. Pre-conditions specify the constraints that must be satisfied in order to use the subject, while post-conditions indicate the result of using the subject. (4) Workflow.
It describes how the subject is done, i.e. implementation details such as the design or the workflow of the subject. The subject mentioned in the four types can either be the whole method or a key component of the code, e.g. a statement or a variable. To get a better understanding of the categorization of code QA pairs, we sampled 100 QA pairs from the Python development set and manually labeled the examples with the categories, as shown in Table 5. The results show that more than half of the questions target functionality, while 37% of the questions ask about purpose, property, or workflow. To show the diversity of QA pairs, we also automatically categorize all the QA pairs in Table 6. We can see that What questions make up 67.24% of the data; 27.77% of the questions are How/Where/When/Why/For what purpose; 2.86% are Yes/No; and the remaining 2.13% are other types.

6 Experiments
In this section, we first introduce four baseline models for this task. Then we compare and analyze the results of the different models under both automatic and human evaluation metrics.

6.1 Baselines
We present baseline results on CodeQA by examining four existing, typical approaches. Since no previous work specifically designs a model for QA-based source code comprehension, we make some modifications to each of the existing approaches. Note that we do not employ retrieval models, since the answer repetition rate is quite low, as shown in Table 4. Details about the hyperparameter settings of all baselines are provided in Appendix B.

- **Seq2seq**: A Seq2seq model (Sutskever et al., 2014) with attention and copy mechanism (See et al., 2017). While originally designed for text-to-text generation, it is commonly used in free-form question answering as well (Nguyen et al., 2016). The input of the model is in the form of "[CLS] Question [SEP] Code". Since models using original code tokens can perform better than models using abstract syntax tree (AST) sequences (Ahmad et al., 2020), we employ code tokens as input for all baseline models.
- **Dual Encoder**: A Seq2seq model with two encoders. The model first builds a code representation and a question representation with its code-info encoder and question-info encoder, respectively. After that, it concatenates the two representations for the decoder. Both encoders and the decoder are similar to the architecture of the above Seq2seq model.
- **Transformer**: A Transformer encoder-decoder model (Vaswani et al., 2017) with relative position representations (Shaw et al., 2018) and copy attention. The input is a sequence containing a question and a code separated by [SEP]. Since the semantic representation of a code does not rely on the absolute positions of its tokens, the Transformer ignores the directional information and encodes pairwise relationships (Ahmad et al., 2020).
- **CodeBERT**: A Transformer encoder-decoder model where the encoder is initialized with CodeBERT (Feng et al., 2020). Following BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b), CodeBERT is a bimodal model pre-trained on natural language and programming languages, including Python and Java. We fine-tune the model parameters on our dataset and predict answers given an input sequence consisting of a question and a code (see the sketch after this list). Note that a version of CodeBERT trained on AST traversals of the code did not bring improvements on generation tasks (Feng et al., 2020), so we do not transform the code into a tree structure.
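As a rough illustration of how the question and the code are paired into a single input sequence, the sketch below assumes the publicly released microsoft/codebert-base checkpoint from the HuggingFace transformers library; the exact preprocessing, special-token handling and hyper-parameters of our implementation may differ.

```python
# Minimal sketch, assuming the HuggingFace checkpoint "microsoft/codebert-base".
# RoBERTa-style tokenizers mark the pair with <s> ... </s></s> ... </s> rather than
# literal [CLS]/[SEP] tokens; conceptually this is the "question [SEP] code" input.
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
question = "What does the code calculate?"
code = "def calc_angle(v1, v2, v3): v1 = v1 - v2; v3 = v3 - v2; return v1.angle(v3)"

encoded = tokenizer(question, code, truncation=True, max_length=512, return_tensors="pt")
print(encoded["input_ids"].shape)
```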
6.2 Automatic Evaluation Metrics
The model output is evaluated using several automatic metrics: BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), METEOR (Banerjee and Lavie, 2005), Exact Match (EM) and F1.

6.3 Model Performance
Tables 7 and 8 present the results of the different models for the QA-based source code comprehension task on the Java and Python datasets, respectively. CodeBERT performs the best, followed by the Transformer. This is not surprising, since models pre-trained on program code and text are more powerful in encoding code representations and bridging the gap between the code language and natural language.

Table 7: Performance of various models on the Java dataset. * means the result is obtained on only 100 questions sampled from the respective dataset; we evaluate the 100 answers generated by CodeBERT and those given by an experienced programmer, respectively, and these results are used only for comparing CodeBERT and Human.

Table 8: Performance of various models on the Python dataset. * has the same meaning as in Table 7.

Table 9: Human evaluation results. Scores of each aspect range from 1 to 3 and higher scores are better.

<table>
<thead>
<tr>
<th>Model</th>
<th>Fluency</th>
<th>Correctness</th>
</tr>
</thead>
<tbody>
<tr>
<td>Seq2seq</td>
<td>2.09</td>
<td>2.21</td>
</tr>
<tr>
<td>Dual Encoder</td>
<td>2.23</td>
<td>2.19</td>
</tr>
<tr>
<td>Transformer</td>
<td>2.37</td>
<td>2.41</td>
</tr>
<tr>
<td>CodeBERT</td>
<td>2.53</td>
<td>2.40</td>
</tr>
</tbody>
</table>

6.5 Human Evaluation
Besides automatic evaluation, we randomly sampled 100 QA pairs from the development and test sets of CodeQA, respectively, and asked two programmers to evaluate the outputs of the baselines on the two sets in the following aspects. Fluency measures whether an answer is grammatically correct and fluent to read. Correctness measures whether an answer targets the given question and code. Each reviewer gives a score between 1 and 3 for each aspect, with 3 indicating the best quality. As shown in Table 9, CodeBERT achieves the best or competitive performance in both aspects. All models perform relatively poorly on correctness compared with fluency. The low correctness scores indicate that it is quite challenging for models to do well in code QA.

6.6 Qualitative Analysis
We provide a couple of examples in Table 10 to demonstrate the outputs from the different baselines (more qualitative examples are provided in Appendix C).

Table 10 (excerpt): Java example - Q: What is added to the given ioobjects? A: all ioobjects of this container. Python example - Q: What does the code get? A: page arguments.

In the Java example, CodeBERT captures the key component "io container" while the other models generate imprecise concepts. As the Python code tries to get all arguments of a page, the first three baselines generate answers either about "page" or about "args", while CodeBERT's answer contains both concepts. The examples reveal that, in comparison to the Seq2seq model and the Transformer model, CodeBERT generates more detailed and accurate answers.

7 Conclusion
In this paper, we build the first diverse free-form question answering dataset for code by transforming code comments into QA pairs. We also provide several neural baselines and demonstrate that CodeQA could lay the foundation for further research on QA-based source code comprehension. For future work, how to expand the number of question types and how to generate more high-quality QA pairs are the major challenges. Besides, we will explore more powerful QA models to better leverage information from the code and to capture the interaction between code and question.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (61772036), the Beijing Academy of Artificial Intelligence (BAAI) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments. Xiaojun Wan is the corresponding author.

References
Xing Hu, Ge Li, Xin Xia, David Lo, Shuai Lu, and Zhi Jin. 2018b. Summarizing source code with transferred API knowledge.

A Wh-question Formulation Details
To generate wh-questions, we first parse comments into dependency trees in the format of Universal Dependencies (UD) using the allennlp parser. Dependency trees are syntactic tree structures in which syntactic units are connected via links. We extract the clause of a verb, headed by a few dependency nodes which can serve as answers, with the help of PropBank's predicate-argument structure (SRL). The clause is treated as a combination of a subject, an object, the head verb and other non-core arguments. Furthermore, the clause can be refined with modals, auxiliaries and negations if they are found around the verb. The SRL model is provided by Gardner et al. (2018). Then the templates of Dhole and Manning (2020) are used to generate QA pairs. The templates convert What to Who/Whom, When or Where depending on the named entity of the answer. To ensure subject-predicate concord, the templates modify do to does or did depending on the tense and number of the subject. Algorithm 1 illustrates the heuristic rules of dependency parsing.

Algorithm 1 Heuristic Rules of DP
1: \{d_0, ..., d_n\} ← DP(w_0 ... w_n)
2: **for** i = 0 **to** n **do**
3:   **if** parent(d_i) is not null **then**
4:     **if** A_0 or A_1 is present **then**
5:       subj ← A_0
6:       **for** A_x ∈ SRL **do**
7:         QA ← template(subj, obj, A_x, verb)

Then we extract the roles of each predicate occurring in the comment with the SRL model provided by Gardner et al. (2018). Semantic roles include the generalized core arguments of predicates, labeled as A0, A1, etc., together with a set of adjunct modifiers. According to PropBank 1.0, the roles are proto-agent, proto-patient, location, direction, manner, extent, cause, etc. Roles like location and direction are classified as modifiers, which can formulate our answers. We make use of a set of predefined handwritten templates in Dhole and Manning (2020), which convert a comment into an interrogative statement by rearranging the arguments according to the modifier. Algorithm 2 describes the heuristic rules of semantic role labeling.

Algorithm 2 Heuristic Rules of SRL
1: \{SRL_0, ..., SRL_s\} ← SRL(w_0 ... w_n)
2: **for** i = 0 **to** s **do**
3:   **if** SRL_i contains A_0 or A_1 **then**
4:     \{A_0, ..., A_CAU, A_TMP\} ← SRL_i
5:     **for** A_x ∈ SRL_i **do**
6:       **if** A_x is a modifier **then**
7:         subj ← A_0
8:         A_rest ← Σ(A_3, ..., A_TMP) − A_x
9:         verb ← \{verb, modals, negation\}
10:        template ← modifier_type(A_x)
11:        QA ← template(subj, A_rest, A_x, verb)
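The short sketch below is a toy illustration of the template-filling step described in this appendix: given one SRL-style frame (a predicate plus labelled arguments), it picks the argument to ask about and fills a single wh-template. The frame layout and the template are simplified stand-ins, not the actual templates of Dhole and Manning (2020).

```python
# Toy wh-question generation from one SRL-style frame; the A0/A1 keys follow the
# PropBank convention mentioned above, everything else is a simplification.
def wh_question(frame, ask_about):
    subject = frame.get("A0", "the code")
    verb = frame["verb"]
    answer = frame[ask_about]
    remaining = [v for k, v in frame.items() if k not in ("A0", "verb", ask_about)]
    question = " ".join(["What does", subject, verb] + remaining) + "?"
    return question, answer

frame = {"A0": "the code", "verb": "count", "A1": "how many marks existed in string"}
print(wh_question(frame, ask_about="A1"))
# -> ('What does the code count?', 'how many marks existed in string')
```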
B Baseline Details
- Seq2seq: bidirectional RNN with number of layers = 2, hidden size = 512, batch size = 32, beam size = 4, learning rate = 0.002, dropout = 0.2, Adam optimizer (Kingma and Ba, 2014).
- Dual Encoder: both the code-info encoder and the question-info encoder have 2 layers. Other hyper-parameters are the same as for Seq2seq.
- Transformer: Transformer model with number of layers = 6, number of heads = 8, hidden size = 512, batch size = 32, beam size = 4, initial learning rate = 0.0001, dropout = 0.2, Adam optimizer.
- CodeBERT: the encoder is the pre-trained CodeBERT, while the decoder is a Transformer structure with number of layers = 6, number of heads = 12. Other hyper-parameters: batch size = 64, beam size = 10, learning rate = 5e-5, Adam optimizer.

For each of the first three models, we train the model for a maximum of 200 epochs on an Nvidia 1080 Ti GPU and perform early stopping if the validation performance does not improve for 20 consecutive iterations. We fine-tune CodeBERT for 20 epochs on 3 Nvidia 1080 Ti GPUs and select the checkpoint with the best BLEU score.

C Qualitative Examples

    public Select<T> sortAsc(String[] columns) {
        for (String column : columns) {
            mSortingOrderList.add(column + STRING);
        }
        return this;
    }

Question: How does the specified columns sort?
Answer: in asc order
Seq2seq: in ascending order
Dual Encoder: in desc order
Transformer: in desc order
CodeBERT: in ascending order

    public static int count(String string, String mark) {
        if (!TextUtils.isEmpty(string) && !TextUtils.isEmpty(mark)) {
            int count = _NUM;
            int index = string.indexOf(mark);
            while (index != -_NUM) {
                count++;
                string = string.substring(index + mark.length());
                index = string.indexOf(mark);
            }
            return count;
        }
        return _NUM;
    }

Question: What does the code count?
Answer: how many marks existed in string
Seq2seq: the string
Dual Encoder: the number of elements in the string
Transformer: the number of occurrences of this string
CodeBERT: the number of times the given string

    public synchronized void create(long seqno) throws ReplicatorException, InterruptedException {
        if (file.exists()) {
            throw new THLException(STRING + file.getName());
        }
        try {
            dataOutput = new BufferedFileDataOutput(file, bufferSize);
        } catch (IOException e) {
            throw new THLException(STRING + file.getName(), e);
        }
        mode = AccessMode.write;
        try {
            write(MAGIC_NUMBER);
            write(MAJOR_VERSION);
            write(MINOR_VERSION);
            write(seqno);
            flush();
        } catch (IOException e) {
            throw new THLException(MAJOR_VERSION + file.getName(), e);
        }
        baseSeqno = seqno;
        if (logFlushTask != null)
            logFlushTask.addLogFile(this);
    }

Question: What does the code create?
Answer: a new log file
Seq2seq: a new instance
Dual Encoder: a file
Transformer: a file
CodeBERT: a log file

Table 11: Qualitative examples of different models' performance on the Java dataset.

    def html_to_xhtml(html):
        try:
            html = html.getroot()
        except AttributeError:
            pass
        prefix = "{%s}" % XHTML_NAMESPACE
        for el in html.iter(etree.Element):
            tag = el.tag
            if tag[0] != '{':
                el.tag = prefix + tag

Question: How do all tags in an html tree convert to xhtml?
Answer: by moving them to the xhtml namespace
Seq2seq: recursively
Dual Encoder: with the given html
Transformer: using xhtml tags
CodeBERT: by removing their xhtml namespace

    def table_extend(tables, keep_headers=True):
        from copy import deepcopy
        for ii, t in enumerate(tables[:]):
            t = deepcopy(t)
            if t[0].datatype == 'header':
                t[0][0].data = t.title
                t[0][0]._datatype = None
                t[0][0].row = t[0][1].row
            if not keep_headers and (ii > 0):
                for c in t[0][1:]:
                    c.data = ''
            if ii == 0:
                table_all = t
            else:
                r1 = table_all[-1]
                r1.add_format('txt', row_dec_below='–')
                table_all.extend(t)
        table_all.title = None
        return table_all

Question: What does the code extend?
Answer: a list of simple tables
Seq2seq: a table
Dual Encoder: a list of 0
Transformer: the tables
CodeBERT: a list of tables

    def calc_angle(v1, v2, v3):
        v1 = v1 - v2
        v3 = v3 - v2
        return v1.angle(v3)

Question: What does the code calculate?
Answer: the angle between 3 vectors representing 3 connected points
Seq2seq: the angle of the v1
Dual Encoder: the v2
Transformer: the angle angle of an error angle
CodeBERT: the angle between two numbers

Table 12: Qualitative examples of different models' performance on the Python dataset.
{"Source-Url": "https://aclanthology.org/2021.findings-emnlp.223.pdf", "len_cl100k_base": 8187, "olmocr-version": "0.1.49", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 45277, "total-output-tokens": 12323, "length": "2e12", "weborganizer": {"__label__adult": 0.0003886222839355469, "__label__art_design": 0.00024580955505371094, "__label__crime_law": 0.0003032684326171875, "__label__education_jobs": 0.0009312629699707032, "__label__entertainment": 5.352497100830078e-05, "__label__fashion_beauty": 0.00015878677368164062, "__label__finance_business": 0.00014162063598632812, "__label__food_dining": 0.0002963542938232422, "__label__games": 0.0004820823669433594, "__label__hardware": 0.0005626678466796875, "__label__health": 0.0003037452697753906, "__label__history": 0.00012946128845214844, "__label__home_hobbies": 8.660554885864258e-05, "__label__industrial": 0.0002548694610595703, "__label__literature": 0.00023090839385986328, "__label__politics": 0.00016057491302490234, "__label__religion": 0.00030493736267089844, "__label__science_tech": 0.00457763671875, "__label__social_life": 0.00010281801223754884, "__label__software": 0.004749298095703125, "__label__software_dev": 0.98486328125, "__label__sports_fitness": 0.0002760887145996094, "__label__transportation": 0.0004069805145263672, "__label__travel": 0.0001697540283203125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 45916, 0.03511]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 45916, 0.59603]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 45916, 0.82314]], "google_gemma-3-12b-it_contains_pii": [[0, 3931, false], [3931, 8368, null], [8368, 13253, null], [13253, 15512, null], [15512, 19104, null], [19104, 22727, null], [22727, 25636, null], [25636, 27806, null], [27806, 32291, null], [32291, 36990, null], [36990, 38571, null], [38571, 42037, null], [42037, 42283, null], [42283, 44214, null], [44214, 45916, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3931, true], [3931, 8368, null], [8368, 13253, null], [13253, 15512, null], [15512, 19104, null], [19104, 22727, null], [22727, 25636, null], [25636, 27806, null], [27806, 32291, null], [32291, 36990, null], [36990, 38571, null], [38571, 42037, null], [42037, 42283, null], [42283, 44214, null], [44214, 45916, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 45916, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 45916, null]], "pdf_page_numbers": [[0, 3931, 1], [3931, 8368, 2], [8368, 13253, 3], [13253, 15512, 4], [15512, 19104, 5], [19104, 22727, 6], [22727, 25636, 7], [25636, 27806, 8], [27806, 32291, 9], [32291, 36990, 10], [36990, 38571, 11], [38571, 
42037, 12], [42037, 42283, 13], [42283, 44214, 14], [44214, 45916, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 45916, 0.05063]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
d90e1491e9018a818e66b4a4a0585d6ab25b58d8
Introduction, practical information, and recap of database background January 31, 2008 Partly based on [RG 1.5-1.8; 4.2] (not curriculum) Today’s lecture (2 hours) - Course introduction - Practical information - Overview of lectures - Quick recap of relevant database background (if time allows) What is a database? According to Webster’s dictionary: **database** a usually large collection of data organized especially for rapid search and retrieval (as by a computer) What is a database management system? Database management system (DBMS): Software system used when implementing databases more precisely System for providing efficient, convenient, and safe storage of and multi-user access to, possibly massive, amounts of persistent data. Problem session: (5 minutes, discuss in groups of 2-4 students) In this course we focus on massive amounts of data. Think about examples of large data sets for which there is a need for some kind of DBMS (existing or still not existing). Your main goals in the course should be to: - Understand how relational database systems work and what influences their performance. Main focus is on large data sets. - Get an understanding of indexing methods, both ones that are presently common, and ones that are likely to play a role in the future. - Be able to use the above knowledge for tuning, and for critically evaluating upcoming database technologies. Why this course is important - Tuning databases is a non-trivial and important activity for many database developers and administrators. ("But what about self-tuning databases?") - Most common tasks of databases can be done fast when the amount of data is small, while it may be very time consuming for larger data sets. Scalability is challenging, especially if performance guarantees are needed. - There is a need for people that are able to handle massive data sets! In many areas the amount of data grows faster than the size of internal memory, e.g. biological data, internet pages. Also, it is becoming common to store large amounts of historical data. This means that the data to process cannot be expected to fit into internal memory. Why this course is interesting - Quote from “Database Tuning”: “Tuning is difficult because the principles and knowledge underlying that common sense requires a broad and deep understanding of the application, the database software, the operating system, and the physics of the hardware.” - You will see many smart and elegant algorithmic ideas that go into DBMS software. - Tuning is like athletics — infinitely challenging, always a way of doing better. The focus will be on scalability aspects of database implementation: - Efficient data structures for indexes - Efficient implementation of relational operations We will spend a lot of time on analytical tools that can be used to reliably predict performance. When rules-of-thumb (guidelines that mostly work well) are presented, we try to identify their limitations. **Typical numbers on what is done in 1 second (PC, 1 hard drive):** - Perform up to 20,000 million instructions (MIPS) - Read 800 million 4-byte words from RAM, sequentially - Read 34 million words from RAM, random access - Read 20 million 4-byte words from disk, sequentially - Read 200 words on disk, random access. **Conclusions:** - The time to retrieve data from RAM or disk is likely to be the bottleneck for data intensive applications. - Sequential memory access is much faster than random access. 
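As a rough back-of-the-envelope illustration of these numbers (the throughput figures are the ones quoted above, not measurements), consider reading 100 million 4-byte words, roughly 400 MB, from disk:

```python
# Reading 100 million 4-byte words (~400 MB) from disk, using the rates quoted above.
n_words = 100_000_000
sequential_s = n_words / 20_000_000   # 20 million words/s sequential  -> 5 seconds
random_s     = n_words / 200          # 200 words/s random access      -> 500,000 seconds
print(sequential_s, random_s / 86_400)  # ~5 seconds versus roughly 5.8 days
```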
Block transfers: Because of the large access time, every memory access is used to transfer a whole block of adjacent data. (E.g., disk blocks are typically 4-16 kilobytes.) Question: Why do block transfers help? A particularly simple model of external memory is the I/O model, which will be used for most of the material in the course: - The complexity of an algorithm is the number of block reads and writes (I/Os) it makes. - Complexity depends on block size $B$, and size $M$ of “internal memory”. Pretend you are a DBMS! Now suppose that all the relations mentioned in the queries on the hand-out are very large (residing on external memory). How would you process each one of the queries? Next: Practical information Course format Lectures and problem sessions: Mainly Thursdays 10.00-12.00 and 13.00-15.00. Mix of lectures and problem sessions/exercises without preparation. Project: 4 project deliverables to be handed in during the course, and one project report at the end of the course. The project will be initiated on February 5, and most project activities (common sessions, supervision, feedback) will run on Tuesdays. Office hours for Milan each Tuesday 13-15 in room 3C11 (subject to change). Manning of course and course homepage Lecturer: Rasmus Pagh, pagh@itu.dk, office 3C.07. Project supervisor: Milan Ružić, milan@itu.dk, office 3C.11. Homepage: www.itu.dk/people/pagh/DBT08/ • News. • Reading directions for each lecture. • Lecture slides, and other material. • Material for the project. • Intranet with material (password protected). What we expect from you • Basic course in databases, e.g., – Relational data model / relational databases – SQL (and perhaps relational algebra) • Basic course in algorithms and data structures, e.g., – Search trees – Sorting algorithms – Hashing – Big-O notation – Basic algorithm analysis Next: Course overview (different slide set) Goal: Refresh your memory, and agree on common terminology. - Basic concepts in relational data model, like attribute, schemas, keys etc. - Relational algebra - Basic operations, like set operations, joins, selection etc. - Bags (multisets) - More operations, e.g., duplicate removal, grouping - Indexes - Transactions All major general purpose DBMSs are based on the so-called **relational data model**. This means that all data is stored in a number of tables (with named columns), such as: <table> <thead> <tr> <th>accountNo</th> <th>balance</th> <th>type</th> </tr> </thead> <tbody> <tr> <td>12345</td> <td>1000.00</td> <td>savings</td> </tr> <tr> <td>67890</td> <td>2846.92</td> <td>checking</td> </tr> <tr> <td>32178</td> <td>-3210.00</td> <td>loan</td> </tr> <tr> <td>…</td> <td>…</td> <td>…</td> </tr> </tbody> </table> For historical, mathematical reasons such tables are referred to as **relations**. SQL is a query language for relational databases and is based on **relational algebra**. A **relation instance** is a two-dimensional table of data. The order of rows and columns can be exchanged, and it is still the same relation instance. An **attribute** is the name of a column in a relation instance. **Example:** <table> <thead> <tr> <th>title</th> <th>year</th> <th>length</th> <th>film Type</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> <td>104</td> <td>color</td> </tr> <tr> <td>Wayne’s World</td> <td>1992</td> <td>95</td> <td>color</td> </tr> </tbody> </table> A **tuple** is a row in a table. The values in the row are called components. 
A relation (instance) can be seen as a set of tuples. **Example:** <table> <thead> <tr> <th>Movie</th> <th>Year</th> <th>Length</th> <th>Film Type</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> <td>104</td> <td>color</td> </tr> <tr> <td>Wayne’s World</td> <td>1992</td> <td>95</td> <td>color</td> </tr> </tbody> </table> A **schema** is a description of a class of relation instances with the same attributes. It consists of a name for the relation and a set of attributes. (It may also contain data types for attributes.) **Example:** <table> <thead> <tr> <th>title</th> <th>year</th> <th>length</th> <th>filmType</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> <td>104</td> <td>color</td> </tr> <tr> <td>Wayne’s World</td> <td>1992</td> <td>95</td> <td>color</td> </tr> </tbody> </table> Schema: Movies(title, year, length, filmType). A set of schemas is called a **database schema**. The word **relation** can refer both to a particular relation instance and to a schema (an “abstract relation instance”). Saying “the relation $R$” is similar to saying “the integer $x$”. Depending on the context we may or may not be thinking of a concrete value-instance. Keys of a relation A key (≠ primary key) for a relation is a set of its attributes that satisfy: - **Uniqueness.** The values of the attributes uniquely identify a tuple. (This should hold for all possible instances of the relation.) - **Minimality.** No proper subset of the attributes has the uniqueness property. If uniqueness is satisfied (but not necessarily minimality) the attributes are said to form a superkey. <table> <thead> <tr> <th>title</th> <th>year</th> <th>length</th> <th>filmType</th> <th>studioName</th> <th>starName</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> <td>Fox</td> <td>Carrie Fisher</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> <td>Fox</td> <td>Mark Hamill</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> <td>Fox</td> <td>Harrison Ford</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> <td>104</td> <td>color</td> <td>Disney</td> <td>Emilio Estevez</td> </tr> </tbody> </table> **Key:** \{title, year, starName\} **Superkey:** \{title, year, length, starName\} Relational algebra is notation for expressing queries on relations. The rest of the recap is about: - Basic operations in relational algebra: - set operations (e.g. union) - selection and projection - join - Bags (multisets). Why they are used and what the consequence is. - More operations, e.g., duplicate removal, grouping - Indexes - Transactions Set operations Operations on two sets \( R \) and \( S \), where \( R \) and \( S \) must have the same set of attributes. We have the three set operations: - Union, \( R \cup S \), - Intersection, \( R \cap S \), and - Difference, \( R \setminus S \). 
Example: <table> <thead> <tr> <th>( R )</th> <th>( S )</th> </tr> </thead> <tbody> <tr> <td><strong>title</strong></td> <td><strong>year</strong></td> </tr> <tr> <td>Star Wars</td> <td>1977</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> </tr> <tr> <td>Wayne’s World</td> <td>1992</td> </tr> </tbody> </table> **Projection** A projection of relation $R$ on attributes $A_1, \ldots, A_n$ is denoted by $$ \pi_{A_1, \ldots, A_n}(R) $$ and is the relation $R$ restricted to columns for attributes $A_1, \ldots, A_n$. <table> <thead> <tr> <th>title</th> <th>year</th> <th>length</th> <th>filmType</th> <th>studioName</th> <th>starName</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> <td>Fox</td> <td>Carrie Fisher</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> <td>Fox</td> <td>Mark Hamill</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>color</td> <td>Fox</td> <td>Harrison Ford</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> <td>104</td> <td>color</td> <td>Disney</td> <td>Emilio Estevez</td> </tr> </tbody> </table> $$ \pi_{title, length, studioName}(Movies) = $$ <table> <thead> <tr> <th>title</th> <th>length</th> <th>studioName</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>124</td> <td>Fox</td> </tr> <tr> <td>Mighty Ducks</td> <td>104</td> <td>Disney</td> </tr> </tbody> </table> A selection of tuples satisfying condition $C$ from relation $R$ is denoted by $$\sigma_C(R)$$ and is the relation $R$ restricted to tuples for which condition $C$ is satisfied. $C$ can be any boolean expression, i.e. it may involve multiple attributes, constants, AND, OR, and NOT. **Example:** <table> <thead> <tr> <th>title</th> <th>year</th> <th>filmType</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>color</td> </tr> <tr> <td>Mighty Ducks</td> <td>1991</td> <td>color</td> </tr> <tr> <td>Wayne’s World</td> <td>1992</td> <td>color</td> </tr> </tbody> </table> \[ \sigma_{\text{year}>1981}(\text{Movies}) = \begin{align*} \text{Mighty Ducks} & | 1991 | \text{color} \\ \text{Wayne’s World} & | 1992 | \text{color} \end{align*} \] Natural-Join or Inner-Join: Let $R$ and $S$ be two relations with attributes $R_1, \ldots, R_n$ and $S_1, \ldots, S_m$ respectively. The join of relations $R$ and $S$, denoted $$ R \Join S $$ has attributes $\{R_1, \ldots, R_n\} \cup \{S_1, \ldots, S_m\}$. If $r \in R$ and $s \in S$ agree on attributes $\{R_1, \ldots, R_n\} \cap \{S_1, \ldots, S_m\}$ then the joint tuple for $r$ and $s$ is in $R \Join S$. There are other types of join, e.g., Theta-Join and Outer-Join. 
## Join example <table> <thead> <tr> <th>Movies</th> <th>StarsIn</th> </tr> </thead> <tbody> <tr> <td><em>title</em></td> <td><em>year</em></td> </tr> <tr> <td>Star Wars</td> <td>1977</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> </tr> <tr> <td>Mighty D.</td> <td>1991</td> </tr> <tr> <td>Wayne’s W.</td> <td>1992</td> </tr> <tr> <td>Wayne’s W.</td> <td>1992</td> </tr> </tbody> </table> Movies $\Join$ StarsIn = <table> <thead> <tr> <th>title</th> <th>year</th> <th>length</th> <th>studioN.</th> <th>starN.</th> </tr> </thead> <tbody> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>Fox</td> <td>Carrie F.</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>Fox</td> <td>Mark H.</td> </tr> <tr> <td>Star Wars</td> <td>1977</td> <td>124</td> <td>Fox</td> <td>Harrison F.</td> </tr> <tr> <td>Mighty D.</td> <td>1991</td> <td>104</td> <td>Disney</td> <td>Emilio E.</td> </tr> <tr> <td>Wayne’s W.</td> <td>1992</td> <td>95</td> <td>Param.</td> <td>Dana C.</td> </tr> <tr> <td>Wayne’s W.</td> <td>1992</td> <td>95</td> <td>Param.</td> <td>Mike M.</td> </tr> </tbody> </table> Relational algebra is an algebra on sets, but most database systems do not (only) use sets, they (also) use bags. A **bag** or a multiset, is a set where elements may appear more than once. (E.g., in a relation there may be two or more identical rows.) The motivation for using bags instead of sets is that some operations can be implemented faster. E.g., - union - projection Operations on bags vs. sets Some examples of the difference between operations on bags and sets. - \( R \cup S \): All rows in \( R \) and \( S \), even if they appear in both or if they appear more than once in \( R \) or in \( S \). - \( R \cap S \): if tuple \( t \) appears \( n \) times in \( R \) and \( m \) times in \( S \), then it appears \( \min(n, m) \) times in \( R \cap S \). - \( \pi_{A_1,\ldots,A_n}(R) \) (projection): All tuples in \( R \) also appear in \( \pi_{A_1,\ldots,A_n}(R) \), even if the rows become identical when some columns are removed. <table> <thead> <tr> <th>Movies1</th> <th></th> <th>Movies2</th> <th></th> </tr> </thead> <tbody> <tr> <td></td> <td>title</td> <td>year</td> <td>title</td> </tr> <tr> <td></td> <td>Star Wars</td> <td>1977</td> <td>Star Wars</td> </tr> <tr> <td></td> <td>Mighty Ducks</td> <td>1991</td> <td>Mighty Ducks</td> </tr> <tr> <td></td> <td>Wayne’s World</td> <td>1992</td> <td>Star Wars</td> </tr> </tbody> </table> Other useful relational operations often used in languages like SQL: - Duplicate elimination: When bags are used it is useful to be able to get rid of duplicates. \( \delta(R) \) - Aggregation operators: E.g., sum, average, maximum in a column. \( \gamma_{\text{OP}}(A)(R) \), where \( \text{OP} \) is e.g., max. - Grouping (not described in RG): Divide a relation up into groups of tuples depending on the values in one or more attributes. Used together with aggregation. \( \gamma_{A_1,...,A_n,\text{OP}}(A)(R) \). - Extended projection: Creation of new columns from existing columns by performing some kind of computation. SQL SQL is a language that can be used for expressing queries on relations. It is based on a “mixture” of relational algebra for sets and bags. Some SQL examples: - **SELECT** $A_1, \ldots, A_n$ **FROM** $R$ means $\pi_{A_1, \ldots, A_n}(R)$. - **SELECT** * **FROM** $R$ **WHERE** $C$ means $\sigma_C(R)$.
- $R$ **UNION** $S$ means $R \cup S$ (set union, for bag union use **UNION ALL**).
- $R$ **EXCEPT** $S$ means $R \setminus S$.
- $R$ **NATURAL JOIN** $S$ means $R \bowtie S$.
- **SELECT DISTINCT** * **FROM** $R$ means $\delta(R)$.
- **SELECT** $A$, $\text{OP}(B)$ **FROM** $R$ **GROUP BY** $A$ means $\gamma_{A,\text{OP}(B)}(R)$.

SQL also supports the creation and modification of relations. Some SQL examples:
- **CREATE TABLE** $R$ (<schema description>)
- **INSERT INTO** $R$ **VALUES** $(v_1, \ldots, v_n)$.
- **DELETE FROM** $R$ **WHERE** $C$.
- **UPDATE** $R$ **SET** $A = v$ **WHERE** $C$.

Consider the selection query: ```sql SELECT * FROM R WHERE <condition> ``` - If we have to report 80% of the tuples in R, it makes sense to do a full table scan. - On the other hand, if the query is very **selective**, and returns just a small percentage of the tuples, we might hope to do better by using an **index**. To be able to quickly find the first tuple with a specific value for an attribute, the DBMS may build an index on that attribute. A database index is similar to an index in the back of a book: 1. For every piece of data you might be interested in (e.g., the attribute year=1977), the index says where to find it. 2. The index itself is organized such that one can quickly do the lookup. Some indexes are efficient for both point queries (year=1977) and range queries (1985<year<1999), while others only support efficient point queries. Indexes are also used by the DBMS to speed up other operations, e.g., join operations are sometimes considerably faster when a join attribute is indexed. In most DBMSs we can specify what indexes should be created, e.g.: - CREATE INDEX I ON R(A) Transactions One or more updates in a database can be grouped into something called a transaction. This is a way to ensure correct updates of the database. Ideal transactions are said to meet the ACID test: - **Atomicity** – the all-or-nothing execution of transactions. - **Consistency** – transactions preserve database constraints. - **Isolation** – the appearance that transactions are executed one by one. - **Durability** – the effect of a transaction is never lost once it has completed. A good DBMS should fully implement **A, C and D**, and will allow the user to specify the extent to which **I** should hold (for efficiency reasons). However, **I** always applies to any single SQL statement in a transaction. Summary of recap This part of the lecture was about: - the relational data model - relational algebra - relational algebra on bags - some examples in SQL - Properties of DBMSs These concepts will underlie much of the course.
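As a minimal, hedged illustration of the all-or-nothing property, the sketch below uses Python's built-in sqlite3 module and the account numbers from the example relation earlier in this recap; it is not tied to any particular DBMS discussed in the course.

```python
# Atomicity sketch with sqlite3: either both updates of the transfer are applied,
# or neither is. Table and values mirror the accounts example used above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (accountNo INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [(12345, 1000.00), (67890, 2846.92)])
conn.commit()

try:
    conn.execute("UPDATE accounts SET balance = balance - 500 WHERE accountNo = 12345")
    conn.execute("UPDATE accounts SET balance = balance + 500 WHERE accountNo = 67890")
    conn.commit()      # both updates become durable together
except sqlite3.Error:
    conn.rollback()    # on any failure, neither update survives
```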
{"Source-Url": "http://www.itu.dk/people/pagh/DBT08/00-recap.pdf", "len_cl100k_base": 5607, "olmocr-version": "0.1.49", "pdf-total-pages": 39, "total-fallback-pages": 0, "total-input-tokens": 56315, "total-output-tokens": 6488, "length": "2e12", "weborganizer": {"__label__adult": 0.0006160736083984375, "__label__art_design": 0.0011167526245117188, "__label__crime_law": 0.0008878707885742188, "__label__education_jobs": 0.224609375, "__label__entertainment": 0.0001842975616455078, "__label__fashion_beauty": 0.00040602684020996094, "__label__finance_business": 0.0011014938354492188, "__label__food_dining": 0.0010671615600585938, "__label__games": 0.0007715225219726562, "__label__hardware": 0.00211334228515625, "__label__health": 0.0017118453979492188, "__label__history": 0.0010356903076171875, "__label__home_hobbies": 0.0005259513854980469, "__label__industrial": 0.0015802383422851562, "__label__literature": 0.000935077667236328, "__label__politics": 0.0005064010620117188, "__label__religion": 0.0011053085327148438, "__label__science_tech": 0.1519775390625, "__label__social_life": 0.0006313323974609375, "__label__software": 0.05767822265625, "__label__software_dev": 0.54736328125, "__label__sports_fitness": 0.0005135536193847656, "__label__transportation": 0.0011463165283203125, "__label__travel": 0.0005321502685546875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17738, 0.02815]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17738, 0.43765]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17738, 0.82786]], "google_gemma-3-12b-it_contains_pii": [[0, 140, false], [140, 299, null], [299, 476, null], [476, 993, null], [993, 1408, null], [1408, 2154, null], [2154, 2613, null], [2613, 2982, null], [2982, 3491, null], [3491, 3995, null], [3995, 4190, null], [4190, 4218, null], [4218, 4749, null], [4749, 5102, null], [5102, 5409, null], [5409, 5454, null], [5454, 5774, null], [5774, 6338, null], [6338, 6817, null], [6817, 7205, null], [7205, 7737, null], [7737, 8011, null], [8011, 8989, null], [8989, 9352, null], [9352, 9906, null], [9906, 10725, null], [10725, 11377, null], [11377, 11855, null], [11855, 12607, null], [12607, 12988, null], [12988, 14056, null], [14056, 14686, null], [14686, 15325, null], [15325, 15673, null], [15673, 15997, null], [15997, 16386, null], [16386, 16784, null], [16784, 17511, null], [17511, 17738, null]], "google_gemma-3-12b-it_is_public_document": [[0, 140, true], [140, 299, null], [299, 476, null], [476, 993, null], [993, 1408, null], [1408, 2154, null], [2154, 2613, null], [2613, 2982, null], [2982, 3491, null], [3491, 3995, null], [3995, 4190, null], [4190, 4218, null], [4218, 4749, null], [4749, 5102, null], [5102, 5409, null], [5409, 5454, null], [5454, 5774, null], [5774, 6338, null], [6338, 6817, null], [6817, 7205, null], [7205, 7737, null], [7737, 8011, null], [8011, 8989, null], [8989, 9352, null], [9352, 9906, null], [9906, 10725, null], [10725, 11377, null], [11377, 11855, null], [11855, 12607, null], [12607, 12988, null], [12988, 14056, null], [14056, 14686, null], [14686, 15325, null], [15325, 15673, null], [15673, 15997, null], [15997, 16386, null], [16386, 16784, null], [16784, 17511, null], [17511, 17738, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 17738, null]], 
"google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17738, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17738, null]], "pdf_page_numbers": [[0, 140, 1], [140, 299, 2], [299, 476, 3], [476, 993, 4], [993, 1408, 5], [1408, 2154, 6], [2154, 2613, 7], [2613, 2982, 8], [2982, 3491, 9], [3491, 3995, 10], [3995, 4190, 11], [4190, 4218, 12], [4218, 4749, 13], [4749, 5102, 14], [5102, 5409, 15], [5409, 5454, 16], [5454, 5774, 17], [5774, 6338, 18], [6338, 6817, 19], [6817, 7205, 20], [7205, 7737, 21], [7737, 8011, 22], [8011, 8989, 23], [8989, 9352, 24], [9352, 9906, 25], [9906, 10725, 26], [10725, 11377, 27], [11377, 11855, 28], [11855, 12607, 29], [12607, 12988, 30], [12988, 14056, 31], [14056, 14686, 32], [14686, 15325, 33], [15325, 15673, 34], [15673, 15997, 35], [15997, 16386, 36], [16386, 16784, 37], [16784, 17511, 38], [17511, 17738, 39]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17738, 0.2415]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
b7df559ea62b7cc4f22d83e9f216b4f9073949a4
MODEL-DRIVEN CONTENT MANAGEMENT FOR WEB-BASED 3D GEOINFORMATION SERVICES

Stephan Nebiker, Stephan Schütz, Thomas Wüst
Basel University of Applied Sciences (FHBB), CH-4132 Muttenz, Switzerland – (s.nebiker, s.schuetz, t.wuest)@fhbb.ch

ABSTRACT
Today we are witnessing an increasing number of interactive 3d maps and of web-based 3d geoinformation and entertainment services covering entire regions or countries. One of the major challenges in making such interactive 3d services a lasting success is the cartographic content. In the future this content needs to be up-to-date, relevant and increasingly personalised. There is also a trend towards the integration of users' content (e.g. their own hiking trails or holiday locations) within such interactive services. This paper presents concepts and mechanisms for the model-driven capturing, editing, updating and management of domain-specific 3d cartographic content in a distributed environment. One of the key elements presented is the proposed Geo eXchange Language (GXL), which enables the modelling and exchange of domain-specific cartographic content. The paper also introduces a model-driven software framework for the management of 3d cartographic content and illustrates the benefits of such a solution.

1 INTRODUCTION

1.1 Status and motivation
Over the first five years of the new millennium we have witnessed considerable progress in the fields of 3d geo-visualisation and interactive 3d cartography. In the field of multimedia cartography, for example, new generations of digital atlases have been developed which incorporate highly interactive 3d sections, e.g. "Atlas of Switzerland – interactive" (Swisstopo – Bundesamt für Landestopografie, 2004). However, probably the most evident progress took place in the field of web-based 3d cartography. Interactive cartographic 3d applications in the late 1990s were either limited to relatively small data sets or required high-performance computers. The first nation-wide 3d geoinformation services have been operational since 2001, e.g. the service "Flight through Switzerland" by GEFONOVA and Swisstopo (GEOVANA, 2001). This service is based on a 3d landscape model with high-resolution orthoimagery and can be used interactively on standard PCs via a web browser plug-in. Recently, the coverage of such web-based 3d geoinformation services has been extended to the entire globe. A freely accessible example of such a global 3d visualisation solution is NASA's Open Source project "World Wind" (NASA, 2004). Today, the interactive, web-based 3d visualisation of large 3d landscape models is rapidly becoming a commodity. The progress in solving the fundamental problem of serving and visualising large cartographic base models, i.e. terrain and raster-based terrain texture, over the Internet or Intranet has allowed recent research efforts to be directed at a number of unresolved issues. These include the aspects of cartographic models, content and functionality. Theory of 3d cartography – The aspect of cartographical theory of 3d maps had long been neglected, despite the technical breakthroughs listed above. Häberling is one of the first authors systematically addressing this issue (Häberling, 2003). He defines a number of design principles for 3d maps, establishes an inventory of design variables for different types of map objects and addresses the important issue of user-oriented design and presentation of map contents. Häberling also notes the lack of research into users' needs and usability issues in the context of 3d cartography.
Among the first investigations focussing on these issues are studies by Bleisch on the usefulness of realistic 3d visualisations (Bleisch, 2004). Cartographic content – The content of early interactive 3d maps has been limited to the representation of the actual terrain with few additional types of map objects, such as text or image labels hovering above points of interest (POI). Data models for such map objects were typically very simple and primarily graphics-oriented. Recent research projects, for example, address the support for richer object semantics (Nebiker et al., 2004b), the inclusion of user-defined content (Schweizer and Würth, 2004) as well as data models and interoperability for 3d city models (Kolbe et al., 2005). In summary, all these projects focus on richer, more intelligent and up-to-date cartographic content for 3d geoinformation solutions. 1.2 The «Geo-Roaming» Project Geo-Roaming is an industry and government funded research project which was initiated in 2003 with the goal of improving the accessibility, usefulness and sustainability of web-based 3d geoinformation solutions. In a first project phase a service-based architecture for 3d geoinformation services was developed. The server-based 3d visualisation solution requires no software installation on the client and can be used on PCs, PDAs and Smartphones (Figure 1). The goal of the second project phase were the investigation of a model-driven mechanism and the development of a software framework for the modelling, management, exchange and updating of 3d cartographic content in a distributed 3d geoinformation infrastructure. The following paper presents the findings and results of this second project phase. 2 CARTOGRAPHIC CONTENT MANAGEMENT 2.1 3d map objects – types and characteristics 3d maps can consist of a large variety of map object types as indicated in a preliminary summary in (Häberling, 2003). These map objects can be considered as representing the generic cartographic base model, i.e. the 3d landscape model, on the one hand and the typically application- or domain-specific model extensions on the other hand. Among the main three content types for representing these model extensions are: - **POI (point of interest)** – a point-oriented content object type consisting of a text label or a billboard with various spatial, thematic, graphical and behavioural properties. Examples include place names, landmarks or location indicators for persons or other tracked object. - **2d objects** – a linear or areal vector object type. Examples include: hiking tracks, danger zones or ski slopes etc. - **3d objects** – a volume- or surface-based object type with a potentially very complex geometry and with properties graphical properties such as photorealistic textures. Examples include: 3d models of buildings, traffic infrastructure or vehicles. This short list indicates the large variety in spatial, thematic, graphical and behavioural properties of these typical content types for 3d maps. Among the common properties of such 3d map objects are multiple levels of detail (LOD) or the visible range of objects. With regards to a management solution for such cartographic content, the following requirements can be formulated: - **Rich and extensible content** – A modern content management solution needs to support at least the three content types listed above, with the possibility to specify additional user-defined properties and to add future extensions, such new multimedia types. 
- **Application domains** – Already today, 3d geoinformation solutions are established in a broad spectrum of application domains, ranging from tourism, sports, education, gaming, aviation and simulation right through to security and defence. Each of these applications requires its own domain-specific cartographic data model, since it is inconceivable to create a universal data model satisfying all the diverse and evolving demands. - **Timeliness and up-to-dateness** – Web-based solutions have the potential to provide up-to-date cartographic content, a potential which has been largely untapped in the past. A modern content management solution should support the automated updating of cartographic content, preferably in near real-time. - **User content** – With the imminent integration of GPS and mobile communication technology, mobile positioning is becoming a commodity. This will dramatically change the role of map users. For example, it will not only enable them to obtain precise location-based information but also to play a far more active role as users and creators of geoinformation, e.g. by recording and annotating hiking or biking trails and by sharing them with other users. ### 2.2 Model-based 3d content management Due to the diverse and evolving requirements of the different application domains, only a model-based content management solution was considered future-oriented and sustainable. Such a model-based content management solution consists of the following two pillars: - a model-driven data exchange mechanism and - a model-driven software framework The exchange mechanism GXL (Geo eXchange Language) and the software framework of the Geo-Roaming project are outlined in the following two chapters. But first, it might be worthwhile to look at the typical processes of a content management solution for 3d geoinformation services and at the interaction between the model-driven software components and the exchange mechanism in general and the content model in particular. In interactive, web-based 3d geoinformation services there are four basic processes interacting with cartographic content (Figure 3): - (interactive) content capturing and editing - content management and storage - service generation and updating - service utilisation and content visualisation ![Figure 3: Model-driven content management process flow with the central GXL-based content model driving the different processes and software components](image) ![Figure 4: The GXL base schema as a GML application language and GXL application schemas for different application domains](image) In a model-based environment all four processes, i.e. all respective software components, are 'driven' by a common content model. The structure and encoding rules for any data objects created by and exchanged between these software components are automatically derived from the content model. In the following chapter this will be outlined in some more detail. 3 THE GEO EXCHANGE LANGUAGE GXL The analysis of the above-mentioned requirements showed the need for a mechanism which a) is capable of handling POI, 2d and 3d objects, each with a comprehensive and extensible set of semantic properties, and b) supports domain-specific data models and a corresponding data exchange. At the time of the investigations none of the common geospatial formats or data exchange standards was able to fulfil these requirements. 3d formats such as VRML or DXF lacked actual data modelling support with an extensible object concept and rich semantics. 
The geospatial exchange mechanism INTERLIS (Dorfschmid and Brawer, 2003), with its well-proven support for model-based data exchange, lacked support for 3d geometry (Nebiker et al., 2004a). The Geography Markup Language (GML 3) (Lake, 2004) featured a very early and unconsolidated 3d geometry model which, for example, lacked support for object textures. These results led to the development of the Geo eXchange Language (GXL). GXL is an extensible mechanism for the modelling and exchange of cartographic 3d content and is based on GML 3.1. The main characteristics of GXL are explained below. 3.1 Basic structure of GXL GXL is based on GML 3.1 with a small number of extensions and restrictions and is defined in XML Schema. Content objects in GXL, for example, are generally represented by GML spatial objects (features and feature collections) with their respective spatial and non-spatial properties. Based on positive experiences with INTERLIS, GXL incorporates a predefined modelling hierarchy, which largely facilitates model-driven architectures. GXL also extends GML with a number of additional 3d data types. The resulting GXL base schema is a GML application schema and, as a consequence, GXL can be considered a GML application language. However, in contrast to domain-specific GML application schemas, GXL constitutes a general geospatial modelling language which can itself be applied to specific application domains. The following paragraphs highlight some of the main data modelling features of GXL: Modelling hierarchy GXL follows a three-level modelling hierarchy of Model, Topic and Class. This constitutes a restriction of GML, which allows modelling hierarchies of arbitrary depth and complexity. A GXL data model starts with a Model element, which can contain any number of topics (Topic elements). A Topic can again contain any number of Class elements. At the Class level, the actual content type is defined (e.g. POI or 2d object) together with its object style (appearance) and additional application-specific attributes. Object types The following object types or content types are currently available in GXL: Point of Interest (text label), Point of Interest (symbol or image), 2d vector, 3d object (d3o) and 3d object (vrml). The first three object types use GML geometry types and extend them with additional GXL elements, for example to handle vertical and horizontal offsets between the actual point of interest and the corresponding text label or billboard. In order to manage 3d content objects, a GXL object was defined which serves as a container for different existing 3d object types and formats. Thus, 3d objects in commonly used formats such as VRML or in the d3o representation of the DILAS 3d GIS (Nebiker, 2003) can be handled with GXL. It is conceivable that the emerging CityGML (Kolbe et al., 2005) representation for 3d city models could be added to GXL as an additional 3d type. Object styles GXL supports specific style properties for the different object types. The style or appearance of a content object can be defined either at the object level or at the class level. This makes it possible, for example, to quickly modify all POI of the class PostOffice while retaining the appearance of the main post office POI in bold letters. User-defined properties One of the key elements of a model-based mechanism is the possibility to define user-defined properties. GXL content object types can be extended with any number of user-defined properties. 
Content objects of the type POI holding city names could for example have an additional property of the type hyperlink to the city's official web site. Or 3d content objects representing hotel buildings could have an additional property hotelRating representing the official rating of the hotel. 3.1.1 GXL application schemata Based on the GXL base schema it is now possible to create a domain specific application schema, commonly referred to as a domain-specific 'data model'. Such a GXL application schema contains the data structure (class hierarchies etc.), the content topics and classes with the corresponding geometry types and additional, user-defined properties. Figure 5 illustrates structure and contents of a typical GXL application schema in the tourism domain. The application schema contains the above-mentioned modelling hierarchies Model, Topic and Class. From a GML perspective, each of these elements constitutes a so-called feature collection, with a number of predefined GML properties, e.g. a bounding geometry (property gml:boundedBy), a name (gml:name) and an optional description (gml:description). These GML properties are supplemented by GXL properties such as an object state (gxl:objectState) or a one-dimensional content geometry (gxl:geometry1D). The model could further contain any number of user-defined domain-specific properties such as gxl:hotelRating, gxl:averageRoomRate etc. Please note that the two content object instances (hotels Bellevue and Hilton) are not part of the data model and are shown for illustration only. The application schema in Figure 5 is complemented by the top-level element Transfer, which enables the exchange of data sets containing multiple models. ![Figure 5: Illustration of a GXL application schema with the modelling hierarchies Model, Topic and Class with examples of their respective spatial and semantic properties.](image) 3.1.2 GXL instance documents The structure and encoding of GXL instance documents, i.e. of the actual 3d content data, are automatically derived from the respective GXL application schema. Such a GXL instance document could be a small XML or GXL snippet containing a single content object, e.g. a POI, or a large XML document containing the complete content of a certain topic or an entire content database. An extract of a GXL snippet containing a single POI is shown in Figure 6. In this example GML properties and GXL properties can be nicely distinguished by their different namespaces gml: and gxl:. The example also illustrates the use of the standard GML property gml:id as a unique object identifier (OID) for content objects. This OID plays a key role in identifying and handling content objects in distributed system environments. Figure 6: Extract from a GXL instance document with a single point of interest (POI). 4 MODEL-DRIVEN GEO CONTENT MANAGEMENT FRAMEWORK A model-driven software framework is the second pillar of a model-based content management solution. The 3D geo content management (GCM) software framework developed in Geo-Roaming project is fully driven by GXL, i.e. by the respective GXL application schema. This GXL application schema is used for the creation of data structures within all the GCM components, for the definition and configuration of the data structures within the data repositories, for the exchange of data between software components and also for the validation of the data. Thus, the software components can handle any domain-specific 'data model' as long as it still constitutes a valid GXL schema. 
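To make the instance-document idea more tangible, the following is a minimal Python sketch (not the actual Figure 6 snippet, which is not reproduced here) that parses a hypothetical GXL-style object. Only the gml:/gxl: namespace split, the role of gml:id as the OID and the property names mentioned in the text (gml:name, gxl:objectState, gxl:hotelRating) are taken from the paper; the element name, the gxl namespace URI and the concrete values are assumptions.

```python
# Minimal sketch: parsing a hypothetical GXL-style instance snippet with the
# Python standard library. The gxl namespace URI and element name are assumed.
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"   # standard GML namespace
GXL = "http://www.example.org/gxl"   # placeholder namespace for GXL (assumed)

snippet = f"""
<gxl:Hotel xmlns:gml="{GML}" xmlns:gxl="{GXL}" gml:id="hotel.4711">
  <gml:name>Bellevue</gml:name>
  <gxl:objectState>active</gxl:objectState>
  <gxl:hotelRating>4</gxl:hotelRating>
</gxl:Hotel>
"""

obj = ET.fromstring(snippet)
oid = obj.get(f"{{{GML}}}id")                        # gml:id as unique object identifier (OID)
name = obj.findtext(f"{{{GML}}}name")
rating = int(obj.findtext(f"{{{GXL}}}hotelRating"))  # user-defined domain property
print(oid, name, rating)                             # -> hotel.4711 Bellevue 4
```

In the framework described above, a schema-driven component would derive the expected structure from the GXL application schema rather than hard-coding it as this sketch does.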
An additional goal in the design of the software framework was to complement the modelling flexibility with a maximum of storage flexibility. This was achieved by introducing a persistence framework which supports different storage concepts. The main features of GCM framework are presented below. 4.1 Object representations Within the GCM framework content objects can have a number of different but largely equivalent representations (see Figure 7): a) a C++ object representation, b) a proprietary binary Format, c) a proprietary XML format, d) the open GXL XML format as well as e) the representation(s) in the content repository. The mapping between the C++ representation and the other representations is effected by automated object serialisation and de-serialisation. The object representation in the content repository depends on the selected persistence technology. In case of an SQL DBMS the storage is based on (object-) relational tables (see below). 4.2 Query mechanism Among the key elements of a geospatial content management framework are the querying and selection of subsets of content objects based on spatial and non-spatial predicates. Within the GCM framework an object query mechanism was developed based on the OGC Filter Encoding Specification (Vretanos, 2005). This specification defines the XML-based platform- and system-independent encoding for the querying and selection of geospatial objects. The Filter Encoding Specification was originally part of OGC WFS, the Web Feature Service Specification but was promoted to an independent specification in order to utilise it in a number of other standards. The main benefit of the Filter Encoding Specification is the independence from any underlying repository technology, e.g. a SQL database. The mapping of an object filter statement to a query for a specific repository technology (Figure 8), e.g. to a PostgreSQL or Oracle SQL query, is handled by the respective query builder component. ```xml <Filter> <And> <PropertyIsLike wildCard="*" singleChar="#" escapeChar="!"> <PropertyName>name</PropertyName> <Literal>Bellevue</Literal> </PropertyIsLike> <Contains> <PropertyName>BBox</PropertyName> <gml:Polygon srsName="http://www.opengis.org/gml/srs/epsg.xml#21781"> <gml:outerBoundaryIs> <gml:LinearRing> <gml:pos>550000 175000</gml:pos> <gml:pos>550000 275000</gml:pos> <gml:pos>650000 275000</gml:pos> <gml:pos>650000 175000</gml:pos> </gml:LinearRing> </gml:outerBoundaryIs> </gml:Polygon> </Contains> </And> </Filter> ``` SELECT "objectId" FROM "Hotels" WHERE "name" LIKE 'Bellevue' ESCAPE '!' AND (Within("boundingbox", GeometryFromText('POLYGON((550000 175000, 650000 175000, 650000 275000, 550000 275000, 550000 175000))',21781))); Figure 8: Example of an OGC compliant object query statement and the equivalent, derived SQL statement Currently, the content management framework supports three main types of object queries: an object filter, an object ID filter and a compound object filter. The first filter type represents the general case of an object filter supporting hierarchically structured spatial and thematic clauses and is compatible with the OGC specification. The second filter type permits the efficient selection of objects based on their object identifier(s) – an important and frequently occurring task. The third filter type enables the querying of objects over multiple classes – a feature which is currently not supported in the OGC standard. 
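As an illustration of the query-builder idea behind Figure 8, here is a hedged Python sketch that maps a very small subset of an OGC-style filter onto a SQL statement. It is not the framework's actual C++ query builder; the function name is hypothetical, the spatial Contains clause of Figure 8 is omitted, and a real implementation would handle operator nesting, escaping and spatial predicates for the target dialect.

```python
# Sketch only: translate a PropertyIsLike clause of an OGC-style filter into SQL.
import xml.etree.ElementTree as ET

OGC_FILTER = """
<Filter>
  <And>
    <PropertyIsLike wildCard="*" singleChar="#" escapeChar="!">
      <PropertyName>name</PropertyName>
      <Literal>Bellevue</Literal>
    </PropertyIsLike>
  </And>
</Filter>
"""

def filter_to_sql(filter_xml: str, table: str) -> str:
    """Map a tiny subset of OGC Filter Encoding onto a SQL SELECT statement."""
    root = ET.fromstring(filter_xml)
    clauses = []
    for like in root.iter("PropertyIsLike"):
        prop = like.findtext("PropertyName")
        esc = like.get("escapeChar")
        lit = like.findtext("Literal").replace(like.get("wildCard"), "%")
        clauses.append(f"\"{prop}\" LIKE '{lit}' ESCAPE '{esc}'")
    # A full query builder would also translate spatial operators such as
    # <Contains> into the dialect's spatial functions (cf. Figure 8).
    return f'SELECT "objectId" FROM "{table}" WHERE ' + " AND ".join(clauses) + ";"

print(filter_to_sql(OGC_FILTER, "Hotels"))
# SELECT "objectId" FROM "Hotels" WHERE "name" LIKE 'Bellevue' ESCAPE '!';
```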
XML-based object filters have the disadvantage that they are more complex to read and write than query languages such as SQL, and that they are certainly less established. Thus, it was decided to implement a graphical user interface which plugs into the GXL model hierarchy and supports users in creating valid queries. 4.3 Persistence management mechanism Various developments in the field of database technologies, such as geospatial support in open source database systems (e.g. PostgreSQL or MySQL), the proliferation of XML in different database technologies (object-relational and native XML) and the trend towards service-based architectures (hiding the underlying database technologies), indicated the need for a platform-independent content storage solution. The persistence management mechanism developed as part of the Geo Content Management Framework (GCMF) is based on multiple abstraction layers hiding the underlying storage technology (see Figure 9) and on the above-mentioned platform-independent object query mechanism. The binding of a specific storage or database technology occurs at the lowest abstraction level. Components at the medium level provide functionality which is common to a certain type of database technology, e.g. the mapping to standard SQL for all (object-) relational DBMSs. Any system-specific dialects are handled at a lower abstraction level. The top level of the persistence management mechanism provides a common interface for all other components of the GCMF and is independent of any repository technology. ![Figure 9: Persistence management components within a multi-layer architecture; dependency on specific storage technology increases towards the bottom of the diagram.](image) The two main storage technologies targeted with the GCMF are object-relational DBMSs on the one hand and XML-based storage on the other. The current implementation supports PostgreSQL, an object-relational database management system developed as an Open Source project. In an object-relational environment the hierarchical structure of GXL is mapped to a table structure. During the initialisation of a new content database, the GXL application schema is used to automatically generate the domain-specific table structure. 3d content objects are then stored as XML snippets using the GXL representation (see Figure 10). ![Figure 10: Relational view on an XML snippet containing GXL data](image) Relational views on the XML snippets are then used for querying the contents of the GXL objects using standard SQL (Figure 10). The advantages of this solution include: - no redundant storage of multiple representations of the same object (e.g. XML and relational) - no expensive, repeated parsing of XML contents for frequent queries - no complex table structures which otherwise result from relational mappings of XML data - no danger of users interfering with the data consistency, due to the use of read-only views As an exception to this approach of relational views on XML objects, selected important standard properties of content objects are extracted and stored as attribute values within the same table. Examples of such important properties include the bounding geometry (gml:boundedBy) and the object identifier OID (gml:id). These key properties are extracted in order to provide highly efficient object access. However, it should be noted that these extracted properties and the relational views serve as index structures only and that the GXL snippet remains the main object representation. 
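The storage pattern described above can be sketched in a few lines of Python. SQLite is used here purely as a stand-in for the PostgreSQL repository, and the table and column names are assumptions; the point of the sketch is the division of labour between the extracted key column (the OID) and the GXL snippet as the primary object representation.

```python
# Sketch of the storage pattern: keep the XML snippet as the main object
# representation, extract only key properties (here the OID) into columns.
import sqlite3
import xml.etree.ElementTree as ET

GML_ID = "{http://www.opengis.net/gml}id"   # gml:id, the extracted key property

conn = sqlite3.connect(":memory:")          # SQLite as a stand-in for PostgreSQL
conn.execute("CREATE TABLE Hotels (oid TEXT PRIMARY KEY, gxl_snippet TEXT)")

def store(snippet: str) -> None:
    """Extract the OID into its own column; keep the snippet as the main representation."""
    oid = ET.fromstring(snippet).get(GML_ID)
    conn.execute("INSERT INTO Hotels VALUES (?, ?)", (oid, snippet))

store('<Hotel xmlns:gml="http://www.opengis.net/gml" gml:id="hotel.1">'
      '<name>Bellevue</name></Hotel>')

# Fast object access via the extracted key column; the snippet is parsed only on demand.
row = conn.execute("SELECT gxl_snippet FROM Hotels WHERE oid = ?",
                   ("hotel.1",)).fetchone()
print(ET.fromstring(row[0]).findtext("name"))   # -> Bellevue
```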
4.4 Data communication The communication between the software components of the GCMF is currently based on TCP/IP sockets. This relatively simple mechanism requires just the IP address and the Port of a host computer with a communications server listening for incoming connection requests. With the systematic use of XML within all software components (application schema, data instances, object filter) the addition of a service-based communication, e.g. by using SOAP (Simple Access Protocol) is the next logical step. The addition of Web Service functionality will also enable the utilisation and integration of geospatial web services (e.g. OGC WFS) within the 3d geo content management framework. 5 APPLICATIONS AND RESULTS The geo content management framework was implemented as a fully operational prototype system (see Figure 11). This solution consists of the 'Geo Content Modeler' for the interactive and graphical creation of domain specific data models, the 'Geo Content Editor' for the capturing and editing of content objects in an interactive 3d viewer environment and the 'Geo Content Manager' which hosts the persistence manager and is used to configure the underlying content repositories. The geo content management framework has successfully been used on a number of commercial and research projects. Figure 2, for example, illustrates an innovative application created for the Swiss National Park combining interactive 3d cartography with content-based quizzing functionality. Another interesting project using geo content management components was «bike3d». In this diploma thesis project a prototype web portal for 3d mountain biking routes was developed, which allows users to upload, download and exchange 3d content based on user-generated GPS tracks. 6 CONCLUSIONS AND OUTLOOK In this paper we presented concepts and mechanisms for the model-driven capturing, editing and management of contents for interactive 3d maps and web-based geoinformation services. The proposed Geo eXchange Language GXL combines the required modelling richness with the strictness of XML schema. This makes it a suitable mechanism for a model-driven software framework. The presented framework is based on international standards such as XML, OGC GML and OGC Filter Encoding. GXL was primarily conceived as a powerful mechanism driving the components of the geo content software framework. However, it will be published shortly and could then provide a valuable input for the definition of a 3d cartography profile of GML (e.g. 3dCartoGML). Further work includes the support of additional storage technologies and the inclusion of new content types. Currently, the development of an Oracle interface for the persistence manager is under way. The primary goal of this development is to provide the 3d geo content management framework with a direct access to the 3d GIS DILAS (Nebiker, 2003) and thus to large 3d city models. The integration of native XML databases as one of the long-term goals of the Geo-Roaming projects will be initiated once a minimal geospatial support such as spatial indexing will become available. Future 3d maps will include additional content such as multimedia objects and highly-interactive thematic cartographic objects – a challenge for GXL and the presented geo content management framework. 7 REFERENCES Dorfschmid, J. and Brawer, S., 2003. Modeling of Space-Related Data - An introduction with regard to UML and INTERLIS. 
Coordination of geographical information and geographical information systems (KOGIS/COGIS), Seftigenstrasse 264, CH – 3084 Wabern, Switzerland, www.kogis.ch. GEONOVA, 2001. Flug durch die Schweiz (Interactive, web-based 3D-Visualisation of Switzerland), Muttenz, Switzerland. 8 ACKNOWLEDGEMENTS The authors would like to thank the Commission of Technology and Innovation (KTI) of the Swiss Federal Office for Professional Education and Technology (OPET) for the funding of the Geo-Roaming project. Thanks are also due to the team of GEONOVA AG for their support and collaboration.
{"Source-Url": "https://icaci.org/files/documents/ICC_proceedings/ICC2005/htm/pdf/oral/TEMA15/Session%202/STEPHAN%20NEBIKER.pdf", "len_cl100k_base": 5951, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 19105, "total-output-tokens": 6608, "length": "2e12", "weborganizer": {"__label__adult": 0.0004224777221679687, "__label__art_design": 0.0016813278198242188, "__label__crime_law": 0.000518798828125, "__label__education_jobs": 0.0015001296997070312, "__label__entertainment": 0.00014734268188476562, "__label__fashion_beauty": 0.00020420551300048828, "__label__finance_business": 0.0004317760467529297, "__label__food_dining": 0.0004498958587646485, "__label__games": 0.0008711814880371094, "__label__hardware": 0.0017004013061523438, "__label__health": 0.0005688667297363281, "__label__history": 0.0024871826171875, "__label__home_hobbies": 0.00011104345321655272, "__label__industrial": 0.0007767677307128906, "__label__literature": 0.0004396438598632813, "__label__politics": 0.0004544258117675781, "__label__religion": 0.0005168914794921875, "__label__science_tech": 0.2607421875, "__label__social_life": 0.00010186433792114258, "__label__software": 0.072998046875, "__label__software_dev": 0.650390625, "__label__sports_fitness": 0.0003390312194824219, "__label__transportation": 0.0011568069458007812, "__label__travel": 0.0007915496826171875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29093, 0.02902]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29093, 0.4911]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29093, 0.88489]], "google_gemma-3-12b-it_contains_pii": [[0, 3935, false], [3935, 6422, null], [6422, 9497, null], [9497, 13942, null], [13942, 16780, null], [16780, 18679, null], [18679, 21305, null], [21305, 23901, null], [23901, 26054, null], [26054, 28788, null], [28788, 29093, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3935, true], [3935, 6422, null], [6422, 9497, null], [9497, 13942, null], [13942, 16780, null], [16780, 18679, null], [18679, 21305, null], [21305, 23901, null], [23901, 26054, null], [26054, 28788, null], [28788, 29093, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29093, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29093, null]], "pdf_page_numbers": [[0, 3935, 1], [3935, 6422, 2], [6422, 9497, 3], [9497, 13942, 4], [13942, 16780, 5], [16780, 18679, 6], [18679, 21305, 7], [21305, 23901, 8], [23901, 26054, 9], [26054, 28788, 10], [28788, 29093, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29093, 0.0]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
0916c6f85799d1ccc4660204a4398cc5b9d11747
Computability Models Finite-State Machines Robert M. Keller Harvey Mudd College October 2013 Reading • Read in Sipser, “Introduction to the Theory of Computation”, chapters 0 and 1. Types of Computability Models • **Machine-Like Models** - Finite-State Machines - Pushdown Automata - Turing Machines • **Equational Models** - Recursive Equations, rewriting - Language Equations • **Language Models** - Grammars - Regular Expressions Why investigate these models? • Understand what is possible with models of different levels of complexity. • Understand ultimate limitations of computation. • Provide insights for algorithm development (for parsing, translation, other problems). Strings • Strings are used to represent input and output to machines. • Strings can be used and produced incrementally. • Strings also generalize natural numbers. Alphabets - Strings are sequences of letters drawn from an alphabet. - Alphabets are usually finite. - $\Sigma$ and $\Delta$ are common symbols for alphabets. \[ \Sigma^* \] - \( \Sigma^* \) is defined to be the set of all *finite* strings over alphabet \( \Sigma \). - Example \( \Sigma = \{0, 1\} \) \[ \Sigma^* = \{\varepsilon, 0, 1, 00, 01, 10, 11, 000, \ldots\} \] - \( \varepsilon \) represents the **empty string** (string of no letters). \( \lambda \) or \( \Lambda \) are also used for this purpose in some sources. Inductive Definition of $\Sigma^*$ - **Basis:** $\varepsilon \in \Sigma^*$ - **Induction rule:** If $x \in \Sigma^*$ and $\sigma \in \Sigma$, then $x\sigma \in \Sigma^*$. - **The only members of** $\Sigma^*$ **are those obtainable by a finite number of rule applications.** String Concatenation - Concatenation is the main operation on strings. - Concatenation is indicated by juxtaposition (placing one string after another). - If $x$ and $y$ are variables with strings as values, then $xy$ means the string consisting of the letters in $x$ followed by those in $y$. - Example: $x = 011$, $y = 01$, $xy = 01101$. Identity Element • The empty string $\varepsilon$ is the identity element for concatenation. • For any $x \in \Sigma^*$ • $x\varepsilon = x$ • $\varepsilon x = x$ Inductive Definition of Concatenation $xy$ - **Basis:** $\forall x \ x\varepsilon = x$ - **Induction rule:** $\forall x \ \forall y \ \forall \sigma \in \Sigma \ \ x(y\sigma) = (xy)\sigma$ Strings vs. Natural Numbers - Natural numbers $N$ can be viewed as a special case of strings over a 1-letter alphabet. - Let say the letter is just ‘1’. - Then the connection is suggested by: <table> <thead> <tr> <th>$N$</th> <th>${1}^*$</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>$\epsilon$</td> </tr> <tr> <td>1</td> <td>1</td> </tr> <tr> <td>2</td> <td>11</td> </tr> <tr> <td>3</td> <td>111</td> </tr> <tr> <td>4</td> <td>1111</td> </tr> <tr> <td>$n$</td> <td>$1^n$ (n 1’s in a row)</td> </tr> </tbody> </table> String Axioms (similar to Peano axioms) Σ elements behave as multiple successors. - **SA1:** $(\forall x \in \Sigma^*) (\forall \sigma \in \Sigma) \ x\sigma \neq \varepsilon$ - **SA2:** $(\forall x, y \in \Sigma^*) (\forall \sigma, \tau \in \Sigma)$ $x\sigma = y\tau$ implies $\sigma = \tau$ and $x = y$ - **SA3(Induction):** Let $P(x)$ be any formula with free variable $x$. 
\[ (P(\varepsilon) \land \forall x (P(x) \rightarrow \forall \sigma P(x\sigma))) \rightarrow \forall x P(x) \] Associativity • String concatenation is obviously associative: • For any $x, y, z \in \Sigma^*$ \[ x (y z) = (x y) z \] • This could also be proved by induction, using the inductive definition, similar to proofs for the natural number theory. Algebraic Structure - A structure with an associative operation is called a **semigroup**. - A semigroup with an identity is called a **monoid**. - Thus $\Sigma^*$ with concatenation and identity $\varepsilon$ is a monoid, sometimes called the **free monoid** on $\Sigma$. Length of Strings • The $| \ |$ operator on strings denotes the length of a string. This can be defined inductively: • Basis: $|\varepsilon| = 0$ • Induction rule: \[ \forall x \ \forall \sigma \in \Sigma \quad |x\sigma| = |x| + 1 \] Length of a Concatenation - $|xy| = |x| + |y|$ - This can be proved by induction. Letter Count • For any $\sigma \in \Sigma$ for any $x \in \Sigma^*$ $\#_{\sigma}(x) =$ the number of times $\sigma$ occurs in $x$ • This can be defined by induction. • Example: $\#_0(01010) = 3$, $\#_1(01010) = 2$ Replication - for any $x \in \Sigma^*$, $x^n$ means the concatenation of $n$ copies of $x$. - $x^0 = \varepsilon$ - $x^{n+1} = x^n x$ Machines Computing on Strings - **On-line** models has the machine scan the string one letter at a time. ``` 01101 ``` Types of Output - A **transducer** produces a string as output as well. Types of Output - A **classifier** identifies the input as being in one of several classes. Types of Output - An **acceptor** is a classifier with just two classes: accepting and rejecting Types of Input - **Tape** models potential can move back and forth on the string. 1 1 1 0 1 0 1 0 - Any of the previous types of output are possible, as well as tape output. Types of Storage - Some models, such as combinational logic circuits, have no internal storage. - Finite-state models have a finite set of internal states for storage. - Other models can have infinite storage, provided by a tape or other mechanisms. Focus on Acceptors - Computability theory largely focuses on acceptors. - Why not much generality is lost: - We can convert transducers to classifiers. - We can convert classifiers to acceptors. Transducer to Classifier - A **transducer** produces a string as output in response to a string of input. - Each **letter** in the string is produced based on a certain amount of input. - Treat that letter as indicating one member of the output **class** for that much input. Classifier to Acceptor - A classifier identifies a member of a class as its output. - Encode that class as a **bit vector**, using any of many possible encodings (2-ary, 2-adic, 1-hot, thermometer, etc.). Say this requires $n$ bits. - Operate $n$ acceptors in parallel, each of which provides one bit of the encoding. Example of an Acceptor (finite-state) Accepted: 0 0 1 0 0 1 0 0 1 Not accepted: 0 0 1 0 0 1 0 Another Example of an Acceptor (infinite-state) Accepted: $1^p$ where $p$ is prime Not accepted: $1^q$ where $q$ is composite Components of an Acceptor - Q state set - \( \Sigma \) input alphabet - \( \delta: Q \times \Sigma \rightarrow Q \) state-transition function - \( q_0 \in Q \) initial state - \( F \subseteq Q \) accepting ("final") state set These are referenced as a **5-tuple**: \((Q, \Sigma, \delta, q_0, F)\). DFA • An acceptor with a finite-state set is called a “DFA” (deterministic finite-state acceptor) in the Sipser text. 
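As a concrete illustration of the 5-tuple just defined, the following minimal Python sketch represents a DFA and checks acceptance by scanning the input one letter at a time, which corresponds to the extension of δ to strings defined next. The example machine, accepting binary strings with an even number of 1's, is chosen for illustration only and does not come from the slides.

```python
# A DFA as the 5-tuple (Q, Sigma, delta, q0, F); the example language --
# binary strings with an even number of 1's -- is illustrative only.
Q     = {"even", "odd"}
Sigma = {"0", "1"}
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd",  "0"): "odd",  ("odd",  "1"): "even"}
q0, F = "even", {"even"}

def accepts(x: str) -> bool:
    """Scan x one letter at a time, as an on-line acceptor would."""
    q = q0
    for sigma in x:
        q = delta[(q, sigma)]
    return q in F

assert accepts("") and accepts("0110") and not accepts("010")
```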
Behavior of an Acceptor • There is a transition function $\delta: Q \times \Sigma \rightarrow Q$ • Machine starts in state $q_0$. • From a current state $q$ it changes state to $q'$ with input $\sigma$ provided that $$\delta(q, \sigma) = q'$$ • The machine accepts a string $x \in \Sigma^*$ provided that $$\delta(q_0, x) \in F$$ where $\delta$ has been extended as defined on the next slide. Extension of $\delta$ to domain $Q \times \Sigma^*$ - $\delta : Q \times \Sigma \rightarrow Q$ is given in the machine definition - $\delta : Q \times \Sigma^* \rightarrow Q$ is defined inductively: - $\forall q \in Q \quad \delta(q, \varepsilon) = q$ - $\forall q \in Q \quad \forall x \in \Sigma^* \forall \sigma \in \Sigma$ $$\delta(q, x\sigma) = \delta(\delta(q, x), \sigma)$$ Above, the inner $\delta$ is the extended $\delta$, while the outer $\delta$ is the original one. Presentation of $\delta$ - As a graph, as shown earlier - By a table: <table> <thead> <tr> <th>$\delta$</th> <th>0</th> <th>1</th> </tr> </thead> <tbody> <tr> <td>$q_0$ initial</td> <td>$q_0$</td> <td>$q_1$</td> </tr> <tr> <td>$q_1$ accepting</td> <td>$q_2$</td> <td>$q_0$</td> </tr> <tr> <td>$q_2$</td> <td>$q_3$</td> <td>$q_2$</td> </tr> <tr> <td>$q_3$ accepting</td> <td>$q_2$</td> <td>$q_3$</td> </tr> </tbody> </table> - As a combination of simpler functions (not shown here) Concatenation Lemma \[ \forall q \in Q \ \forall x \in \Sigma^* \ \forall y \in \Sigma^* \] \[ \delta(q, xy) = \delta(\delta(q, x), y) \] Proof by induction on \( y \): - **Basis**: \( \delta(q, x\varepsilon) = \delta(\delta(q, x), \varepsilon) = \delta(q, x) \) - **Induction step**: Assume \( \delta(q, xy) = \delta(\delta(q, x), y) \), show \( \forall \sigma \in \Sigma \ \delta(q, x(y\sigma)) = \delta(\delta(q, x), y\sigma) \). By associativity of concatenation, \( x(y\sigma) = (xy)\sigma \), so \( \delta(q, x(y\sigma)) = \delta(q, (xy)\sigma) \) then, using the definition on the previous page, \[ = \delta(\delta(q, xy), \sigma) = \delta(\delta(q, x), y), \sigma) = \delta(\delta(q, x), y\sigma) \]. Languages - A **language** over an alphabet $\Sigma$ is a subset of $\Sigma^*$. - In other words, a language is a set of finite strings over a given alphabet. - (The subset is not necessarily proper.) Examples of Languages - English, over the alphabet \{a, b, ..., Z\}. - Greek, over the alphabet \{α, β, ..., Ω\}. - The language of all zip codes, over the alphabet \{0, 1, 2, ..., 9\} - The language of all odd binary numerals, over the alphabet \{0, 1\}. - The Python language, over the alphabet \{a, b, ..., _, #\}. Other Languages - \{0, 1\}^* the language of all strings of 0’s and 1’s - \emptyset the empty language - \{\varepsilon\} the language having one element, the empty string - \{0, 1\} the language having two strings, one a single-letter string 0, the other the single-letter string 1. Still More Languages - \( \{x \in \{0, 1\}^* \mid |x| < 64\} \), the language of all strings of 0’s and 1’s with fewer than 64 letters. - \( \{x \in \{0, 1\}^* \mid \#_0(x) = \#_1(x)\} \), the language of all strings of 0’s and 1’s with the same number of 0’s as 1’s. - \( \{0^n1^n \mid n \geq 0\} \), the language of strings of 0’s and 1’s with any number of 0’s followed by the same number of 1’s. Language of an Acceptor If $M$ is an acceptor $(Q, \Sigma, \delta, q_0, F)$ then $$L(M) = \{x \in \Sigma^* \mid \delta(q_0, x) \in F\}$$ is the language accepted by $M$. 
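The inductive extension of δ and the definition of L(M) can be made concrete with a short Python sketch. It uses the transition table shown above (q0 initial; q1 and q3 accepting), implements the extended transition function exactly by the recursion δ(q, xσ) = δ(δ(q, x), σ), and spot-checks the concatenation lemma; the sketch itself is illustrative and not part of the original slides.

```python
# Extended transition function and L(M) membership for the table above
# (q0 initial; q1 and q3 accepting). Illustrative sketch, not from the slides.
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q2", ("q1", "1"): "q0",
         ("q2", "0"): "q3", ("q2", "1"): "q2",
         ("q3", "0"): "q2", ("q3", "1"): "q3"}
q0, F = "q0", {"q1", "q3"}

def delta_hat(q: str, x: str) -> str:
    """delta extended to Q x Sigma*: basis delta_hat(q, eps) = q,
    induction delta_hat(q, x.sigma) = delta(delta_hat(q, x), sigma)."""
    if x == "":
        return q
    return delta[(delta_hat(q, x[:-1]), x[-1])]

def in_language(x: str) -> bool:
    """x is in L(M) iff delta_hat(q0, x) is an accepting state."""
    return delta_hat(q0, x) in F

# Spot-check of the concatenation lemma: delta_hat(q, xy) == delta_hat(delta_hat(q, x), y).
for x in ("", "0", "10", "110"):
    for y in ("", "1", "01"):
        assert delta_hat(q0, x + y) == delta_hat(delta_hat(q0, x), y)

print(in_language("1"), in_language("10"))   # -> True False
```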
Example of a Language Accepted $L = \text{the set of strings of } 0\text{'s and } 1\text{'s containing at most one } 1.$ Example of a Language Accepted L = the set of strings of 0's and 1's with no two 1's in a row. Example of a Language Accepted \[ L = \text{set of binary numerals divisible by 3, MSB first} \] \[ = \{0, 11, 110, 1001, 1100, \ldots\} \] Construct Acceptors for These Languages - The language of binary numerals that are divisible by 2, MSB first. - The language of binary numerals that are divisible by 2, LSB first. - The language of strings of 1's and 0's in which no two consecutive symbols are the same. As languages are sets, all set operations apply, with their usual meanings: - \( L \cup M \) - \( L \cap M \) - \( L - M = \{x \in L \mid x \notin M\} \) - \( L \oplus M = (L - M) \cup (M - L) \) (There is one operation corresponding to each binary propositional connective.) Operations on Languages - $LM = \{xy \mid x \in L \land y \in M\}$ is called the "concatenation" of $L$ and $M$. - $L^n = \underbrace{LL \ldots L}_{n \text{ times}}$ (the n-fold concatenation) - Note $L^0 = \{\varepsilon\}$, the language consisting of only the empty string. - Note: $\emptyset^0 = \{\varepsilon\}$ by definition. The Star Operation - \( L^* = \bigcup \{ L^n \mid n \in N \} \) where \( N \) is the set of natural numbers. - \( L^* \) is the set of all strings formed by concatenating any number (including 0) of strings from \( L \). The Plus Operation - \( L^+ = \bigcup \{L^n \mid n > 0\} \) - \( L^+ \) is the set of all strings formed by concatenating one or more strings from \( L \). - So \( L^* = L^+ \cup \{\epsilon\} \) Finite-State / Regular Languages - A language is called **finite-state** if it is $L(M)$ for some finite-state acceptor $M$. - Finite-state languages are also called "regular languages". Regular Operations • The following language operations are called "regular" (Kleene, 1956): • concatenation: $LM$ • union $L \cup M$ • star $L^*$ Kleene's Theorem • A language is finite-state (or regular) iff it can be constructed from a set of finite languages using a finite number of regular operations. Example - \{00, 11\}, \{100\} are two finite languages. - \( L = \{00, 11\}^* \cup \{100\}\{100\} \) is a language constructed from those languages using regular operations. - Therefore \( L \) is regular according to the theorem. ½ Proof of Kleene's Theorem - Constructed from finite languages using regular operations implies there is a finite-state acceptor. Basis • Every finite language is regular. • Proof: • Construct a graph in which the nodes corresponding to the strings in the language are accepting states. • Direct other transitions to a "dead" state as necessary. Basis Example - Suppose the finite language is \( \{0, 00, 01, 111\} \) - The graph is Induction Step • Suppose \(L\) and \(M\) are regular. Let \(A\) and \(B\) be acceptors accepting \(L\) and \(M\), respectively. • We need to show that \(LM\), \(L \cup M\), and \(L^*\) are regular by constructing acceptors for them from \(A\) and \(B\). • It is not immediately obvious how to do this. We first generalize the definition of acceptor, then show how to convert the generalized form to a regular acceptor. Non-Deterministic Acceptors - A non-deterministic acceptor is one in which, for each node: a. There can be 0, 1 or more transitions from the node with the same letter. b. There can be "spontaneous" transitions from one node to another, without using a letter. 
These transitions are appropriately labeled $\varepsilon$, to indicate that no letter of the input is used in making the transition. NFA transitions for a given state and letter 1 transition 2 transitions no transitions Spontaneous Transitions Chained Standard Acronyms • DFA: Deterministic finite-state acceptor • NFA: Non-deterministic finite-state acceptor Transition Function for an NFA - An NFA is represented \((Q, \Sigma, \delta, q_0, F)\) where everything is the same as in a DFA, except for \(\delta\). - For an NFA: \[ \delta : Q \times \Sigma_\varepsilon \rightarrow 2^Q \] where - \(\Sigma_\varepsilon\) means \(\Sigma \cup \{\varepsilon\}\) - \(2^Q\) means the set of all subsets of \(Q\). (book uses \(P(Q)\)). NFA Function Example \[ \begin{array}{cccc} \delta & \varepsilon & 0 & 1 \\ 0 & \emptyset & \{0, 1\} & \emptyset \\ 1 & \{2\} & \emptyset & \{0\} \\ 2 & \emptyset & \{1\} & \{2\} \end{array} \] DFA as a special case of NFA - DFA has \( \delta_{\text{DFA}} : Q \times \Sigma \rightarrow Q \) - NFA has \( \delta_{\text{NFA}} : Q \times \Sigma_{\epsilon} \rightarrow 2^Q \) - In viewing a DFA as an NFA, \[ \delta_{\text{NFA}}(q, x\sigma) = \{ \delta_{\text{DFA}}(q, x\sigma) \} \] so that the set of next states of the NFA is identified with the **singleton** next state of the DFA. Acceptance by an NFA • Consider the graph representation of the NFA. • A string $x \in \Sigma^*$ is accepted iff $x$ corresponds to a labeled path from the initial state to some accepting state. NFA Acceptance Example $\varepsilon$ is not accepted NFA Acceptance Example 0 is accepted NFA Acceptance Example 1 is not accepted NFA Acceptance Example 00 is accepted NFA Acceptance Example Another way in which 00 is accepted (with $\varepsilon$ used a second time) NFA Acceptance Example 01 is accepted NFA Acceptance Example 01 is accepted as on the previous slide, but it is **not** required that all paths labeled 01 go from initial to an accepting state. NFA Acceptance Example What else? Unique Accepting State Assumption • Without loss of generality, it can be assumed that an NFA has exactly one accepting state. • (If not, introduce a new state and direct $\varepsilon$ arcs from every accepting state to it. Then make those states non-accepting and the new state accepting.) • Note: We cannot do this for a DFA in general. Unique Accepting State Transformation Constructing NFAs for Regular Operations: Union - Let \( A \) and \( B \) be NFAs for \( L \) and \( M \) respectively, with initial states \( a \) and \( b \). - Then an NFA for \( L \cup M \) is: The “Guessing” Paradigm - The NFA for union exemplifies what is sometimes called “guessing”. - In order to accept $L \cup M$, the NFA makes a “guess” or **free choice** as to whether the input string is going to be in $L$ or in $M$. As long as it is in either, the string is accepted. Constructing NFAs for Regular Operations: Concatenation - Let A and B be NFAs for L and M respectively. Let a be the unique accepting state of A and let b be the initial state of B. - Then an NFA for LM is: - a is no longer accepting - b is no longer initial. ![Diagram of NFAs for Concatenation](image-url) Guessing for Concatenation • Guessing is less obvious in the preceding construction, but it can also be thought to be present. • When the NFA is in state a having read a portion of the input, it can guess to either stay within A or to make the spontaneous transition to B and continue. Constructing NFAs for Regular Operations: Star - Let $A$ be an NFA for $L$. 
Let $a$ be the initial state of $A$ and $b$ be the unique accepting state of $A$. - Then an NFA for $L^*$ is: Guessing for Star • When the preceding NFA is started in its initial state, it can "guess" to go to a to read the rest of the input or go directly to the final state. • In the final state, it can choose to go back to the start for more input. Example: Construct NFA for \((\{01\} \cup \{10\})^*)\) Union Operation >> Example: NFA for \((\{01\} \cup \{10\})^*\) \[ \text{{01}} \cup \text{{10}} \] Unique Accepting State Transformation >> Example: NFA for \((\{01\} \cup \{10\})^*)\) Example: NFA for (\{01\} \cup \{10\})^* Shortcuts In some cases, shortcuts are possible that eliminate states, but be careful, because it is easy to mess up. Some shortcuts for the previous NFA are shown here. Further Shortcuts Even More Shortcuts The Ultimate in Shortcuts Mini-Project - Develop a set of graphical rules for shortcuts. Two-Step Construction • We now know how to construct **NFA**s for $LM$, $L \cup M$, and $L^*$. • The next step is to show how to construct, from any NFA, a DFA accepting the same language. • The latter is called the **Subset Construction**. Subset Construction: NFA to DFA For any state q in an NFA, define the closure $c(q)$ of the state to be $\{q\}$ together with the set of states reachable from q using only $\varepsilon$ transitions. <table> <thead> <tr> <th>q</th> <th>c(q)</th> </tr> </thead> <tbody> <tr> <td>a</td> <td>{a}</td> </tr> <tr> <td>b</td> <td>{b}</td> </tr> <tr> <td>c</td> <td>{a,c,d,g,h,i,j}</td> </tr> <tr> <td>d</td> <td>{d}</td> </tr> <tr> <td>e</td> <td>{e}</td> </tr> <tr> <td>f</td> <td>{a,d,f,g,h,i,j}</td> </tr> <tr> <td>g</td> <td>{a,d,g,h,j}</td> </tr> <tr> <td>h</td> <td>{a,d,h}</td> </tr> <tr> <td>i</td> <td>{a,d,g,h,i,j}</td> </tr> </tbody> </table> The states of the new DFA will be among these subsets. Closure of a set of states For a set of states $S$, define $$c(S) = \bigcup \{c(q) \mid q \in S\}$$ Note: Sipser uses $E(S)$ for this, p 56. Subset Construction, continued The initial state of the DFA will be the closure of the initial state. \[ \{a, d, g, h, j\} \] Closure of initial state \(g\). Defining $\delta$ for the DFA - Each state of the DFA is a subset of the states of the NFA. - For each $\sigma \in \Sigma$, define $$\delta(S, \sigma) = \bigcup \{c(\delta(q, \sigma)) \mid q \in S\}$$ where inside the braces is the original NFA’s $\delta$. Example $\delta$ for the DFA - $\{a,d,g,h,j\}$ is the initial state - $\delta(S, \sigma) = \bigcup \{c(\delta(q, \sigma)) \mid q \in S\}$ - $\delta(\{a,d,g,h,j\}, 0) = \bigcup \{c(\delta(q, 0)) \mid q \in \{a,d,g,h,j\}\}$ $= \bigcup \{c(\delta(a, 0)), c(\delta(d, 0)), c(\delta(g, 0)), c(\delta(h, 0)), c(\delta(j, 0))\}$ $= \bigcup \{c(\emptyset), c(\emptyset), c(\emptyset), c(\emptyset), c(\emptyset)\}$ $= \bigcup \{\emptyset, \emptyset, \emptyset, \emptyset, \emptyset\} = \{b\}$ - similarly $\delta(\{a,d,g,h,j\}, 1) = \{e\}$ Subset Construction, continued Transitions from the new initial state. 
Example $\delta$ for the DFA - $\delta(S, \sigma) = \bigcup \{ c(\delta(q, \sigma)) \mid q \in S \}$ - $\delta(\{b\}, 0)$ $= \bigcup \{ c(\delta(q, 0)) \mid q \in \{b\} \}$ $= \bigcup \{ \emptyset \} = \emptyset$ - $\delta(\{b\}, 1)$ $= \bigcup \{ c(\delta(q, 1)) \mid q \in \{b\} \}$ $= \{ a, c, d, g, h, i, j \}$ Subset Construction, continued Transitions from other states \[ \begin{array}{c} \{b\} \\ \{a,d,g,h,j\} \\ \{e\} \end{array} \xrightarrow{0} \begin{array}{c} \{a,c,d,g,h,i,j\} \\ \emptyset \end{array} \xrightarrow{1,0,1} Example $\delta$ for the DFA - $\delta(S, \sigma) = \cup \{ c(\delta(q, \sigma)) \mid q \in S \}$ - $\delta(\{e\}, 0)$ $\quad = \cup \{ c(\delta(q, 0)) \mid q \in \{f\} \}$ $\quad = \{a, d, f, g, h, i, j\}$ - $\delta(\{e\}, 0) = \emptyset$ Subset Construction, continued Subset Construction, completed Accepting States for the DFA Any state of the DFA that contains an accepting state of the original NFA is accepting. Accepting States Outlined Notes on the Previous Example • We intentionally used the version of NFA dictated by the regular operators for illustration, even though it would have been simpler to start with the “shortcut” version. • The three accepting states of the DFA could be merged into one. State minimization in general is a separate topic saved for later discussion. State Reachability Given a DFA $\langle \Sigma, Q, q_0, f, F \rangle$ a state $q \in Q$ is said to be **reachable** provided $$\exists x \in \Sigma^* \quad q = \delta(q_0, x)$$ Normally states not reachable, and transitions from them, can be removed without affecting the language accepted. In the subset construction, it is best to start with the initial state and only construct states reachable from it. This keeps the size of the machine smaller. State Reachability for NFA Given an NFA \((\Sigma, Q, q_0, f, F)\) a state \(q \in Q\) is said to be \textit{reachable} provided \[ \exists x \in \Sigma^* \quad q \in \delta(q_0, x) \] where we extend the given \(\delta: Q \times \Sigma_\epsilon \rightarrow 2^Q\) to \(\Sigma^*\) as follows: \[ \delta(q_0, \epsilon) = c(q_0) \quad \text{(the closure of } q_0) \\ \delta(q_0, x\sigma) = \bigcup \{ c(\delta(q, \sigma)) \mid q \in \delta(q_0, x) \} \] where the rightmost \(\delta\) is the extended version. Applications of the Subset Construction • Suppose we want to construct a DFA that will accept the language of **strings ending with a given string**, for example: 01011. • The **tricky** part here is that if we get as input, for example, 01010, while this is not accepted, the last part 010 can possibly be used as the initial part of a string that is accepted, for example 0101011. • By constructing an appropriate NFA and converting it, it is easy to get the DFA right. An NFA for \( \{0, 1\}^* \ 01011 \) Constructing a DFA for \(\{0, 1\}^* 01011\) Notes on the Previous Example - Not all subsets were reachable. Those that weren’t were not generated. - The number of states is the same in the NFA and DFA. This is a coincidence; it may be more or fewer. - The transition structure is more complex in the DFA. This is typical. - A similar idea can be used to determine whether a string is contained in a given string. 
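The subset construction described above translates almost directly into Python. The sketch below follows the given formulas (start state c(q0), δ(S, σ) = ∪{c(δ(q, σ)) | q ∈ S}, a DFA state is accepting iff it contains an accepting NFA state) and generates only reachable subsets, as recommended; the small example NFA, for {0,1}*01 rather than the slides' {0,1}*01011, is an assumed stand-in.

```python
# Subset construction: DFA start state = c(q0); delta_D(S, s) = union of
# c(delta(q, s)) for q in S; a DFA state is accepting iff it contains an
# accepting NFA state. Only subsets reachable from the start are generated.
from itertools import chain

def eps_closure(states, delta):
    """c(S): S together with everything reachable via epsilon ('') transitions."""
    stack, closed = list(states), set(states)
    while stack:
        q = stack.pop()
        for r in delta.get((q, ""), set()) - closed:
            closed.add(r)
            stack.append(r)
    return frozenset(closed)

def subset_construction(Sigma, delta, q0, F):
    start = eps_closure({q0}, delta)
    dfa_delta, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for sigma in Sigma:
            T = eps_closure(set(chain.from_iterable(
                    delta.get((q, sigma), set()) for q in S)), delta)
            dfa_delta[(S, sigma)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    accepting = {S for S in seen if S & set(F)}
    return start, dfa_delta, accepting

# Assumed toy NFA for {0,1}*01: state s0 "guesses" where the matching suffix starts.
nfa_delta = {("s0", "0"): {"s0", "s1"}, ("s0", "1"): {"s0"}, ("s1", "1"): {"s2"}}
start, dfa_delta, accepting = subset_construction({"0", "1"}, nfa_delta, "s0", {"s2"})
print([sorted(S) for S in accepting])   # -> [['s0', 's2']]
```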
Algorithmic Applications • The principle underlying the illustrated method for text matching provided the insight behind two text-searching algorithms: • **Knuth-Morris-Pratt**: search for a single string • **Aho-Corasick**: search for a finite set of strings Another Possibility for Search • Rather than go through the DFA construction and simulate the result to search a text, it is possible to simulate the NFA directly. • In this case there would be a set of "current states" rather than a single one. • Just use multiple state pointers for the simulation rather than a single one.
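A hedged sketch of this direct NFA simulation, applied to searching a text for the pattern 01011 from the earlier example: the set of current states always contains the initial state (the NFA's "guess" of where a match begins), and a match is reported whenever the accepting state enters the set. The state numbering used here is an assumption.

```python
# Direct NFA simulation with a set of current states, searching for the
# pattern 01011. State i means "the last i letters read match the first i
# letters of the pattern"; state 0 is always kept active.
PATTERN = "01011"

def search(text: str):
    """Yield the end positions (1-based) of occurrences of PATTERN in text."""
    current = {0}
    for pos, ch in enumerate(text, 1):
        nxt = {0}                                    # re-enter the initial state
        for q in current:
            if q < len(PATTERN) and PATTERN[q] == ch:
                nxt.add(q + 1)                       # advance this "state pointer"
        current = nxt
        if len(PATTERN) in current:                  # accepting state reached
            yield pos

print(list(search("0010101011010")))   # -> [10]
```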
{"Source-Url": "https://www.cs.hmc.edu/courses/2013/fall/cs81/StringsMachinesLanguages.pdf", "len_cl100k_base": 7544, "olmocr-version": "0.1.53", "pdf-total-pages": 114, "total-fallback-pages": 0, "total-input-tokens": 156322, "total-output-tokens": 11775, "length": "2e12", "weborganizer": {"__label__adult": 0.00046133995056152344, "__label__art_design": 0.0005879402160644531, "__label__crime_law": 0.0004131793975830078, "__label__education_jobs": 0.003452301025390625, "__label__entertainment": 0.00015854835510253906, "__label__fashion_beauty": 0.0002551078796386719, "__label__finance_business": 0.00040793418884277344, "__label__food_dining": 0.000568389892578125, "__label__games": 0.0008378028869628906, "__label__hardware": 0.0019502639770507812, "__label__health": 0.001178741455078125, "__label__history": 0.0005393028259277344, "__label__home_hobbies": 0.000240325927734375, "__label__industrial": 0.0010786056518554688, "__label__literature": 0.0009355545043945312, "__label__politics": 0.0004405975341796875, "__label__religion": 0.00101470947265625, "__label__science_tech": 0.453857421875, "__label__social_life": 0.00020515918731689453, "__label__software": 0.007232666015625, "__label__software_dev": 0.5224609375, "__label__sports_fitness": 0.00042891502380371094, "__label__transportation": 0.0009059906005859376, "__label__travel": 0.0002472400665283203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 24175, 0.01814]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 24175, 0.83727]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 24175, 0.79748]], "google_gemma-3-12b-it_contains_pii": [[0, 95, false], [95, 185, null], [185, 453, null], [453, 702, null], [702, 868, null], [868, 1030, null], [1030, 1401, null], [1401, 1681, null], [1681, 2026, null], [2026, 2195, null], [2195, 2392, null], [2392, 2941, null], [2941, 3437, null], [3437, 3693, null], [3693, 3970, null], [3970, 4223, null], [4223, 4307, null], [4307, 4533, null], [4533, 4669, null], [4669, 4790, null], [4790, 4863, null], [4863, 4956, null], [4956, 5054, null], [5054, 5231, null], [5231, 5484, null], [5484, 5685, null], [5685, 5964, null], [5964, 6285, null], [6285, 6381, null], [6381, 6509, null], [6509, 6810, null], [6810, 6929, null], [6929, 7327, null], [7327, 7814, null], [7814, 8154, null], [8154, 8870, null], [8870, 9074, null], [9074, 9394, null], [9394, 9678, null], [9678, 10081, null], [10081, 10255, null], [10255, 10377, null], [10377, 10473, null], [10473, 10613, null], [10613, 10885, null], [10885, 11163, null], [11163, 11474, null], [11474, 11697, null], [11697, 11895, null], [11895, 12084, null], [12084, 12240, null], [12240, 12421, null], [12421, 12650, null], [12650, 12782, null], [12782, 12997, null], [12997, 13085, null], [13085, 13486, null], [13486, 13884, null], [13884, 13974, null], [13974, 14007, null], [14007, 14117, null], [14117, 14500, null], [14500, 14695, null], [14695, 15088, null], [15088, 15285, null], [15285, 15339, null], [15339, 15377, null], [15377, 15419, null], [15419, 15458, null], [15458, 15558, null], [15558, 15597, null], [15597, 15754, null], [15754, 15789, null], [15789, 16131, null], [16131, 16169, null], [16169, 16368, null], [16368, 16655, null], [16655, 16966, null], [16966, 17254, null], [17254, 17441, null], [17441, 17686, null], [17686, 17761, null], [17761, 17883, null], [17883, 17928, null], [17928, 17968, null], [17968, 18139, null], [18139, 
18157, null], [18157, 18177, null], [18177, 18203, null], [18203, 18267, null], [18267, 18512, null], [18512, 19145, null], [19145, 19289, null], [19289, 19450, null], [19450, 19716, null], [19716, 20258, null], [20258, 20330, null], [20330, 20656, null], [20656, 20879, null], [20879, 21126, null], [21126, 21157, null], [21157, 21188, null], [21188, 21306, null], [21306, 21332, null], [21332, 21602, null], [21602, 21680, null], [21680, 22135, null], [22135, 22647, null], [22647, 23122, null], [23122, 23158, null], [23158, 23202, null], [23202, 23575, null], [23575, 23844, null], [23844, 24175, null]], "google_gemma-3-12b-it_is_public_document": [[0, 95, true], [95, 185, null], [185, 453, null], [453, 702, null], [702, 868, null], [868, 1030, null], [1030, 1401, null], [1401, 1681, null], [1681, 2026, null], [2026, 2195, null], [2195, 2392, null], [2392, 2941, null], [2941, 3437, null], [3437, 3693, null], [3693, 3970, null], [3970, 4223, null], [4223, 4307, null], [4307, 4533, null], [4533, 4669, null], [4669, 4790, null], [4790, 4863, null], [4863, 4956, null], [4956, 5054, null], [5054, 5231, null], [5231, 5484, null], [5484, 5685, null], [5685, 5964, null], [5964, 6285, null], [6285, 6381, null], [6381, 6509, null], [6509, 6810, null], [6810, 6929, null], [6929, 7327, null], [7327, 7814, null], [7814, 8154, null], [8154, 8870, null], [8870, 9074, null], [9074, 9394, null], [9394, 9678, null], [9678, 10081, null], [10081, 10255, null], [10255, 10377, null], [10377, 10473, null], [10473, 10613, null], [10613, 10885, null], [10885, 11163, null], [11163, 11474, null], [11474, 11697, null], [11697, 11895, null], [11895, 12084, null], [12084, 12240, null], [12240, 12421, null], [12421, 12650, null], [12650, 12782, null], [12782, 12997, null], [12997, 13085, null], [13085, 13486, null], [13486, 13884, null], [13884, 13974, null], [13974, 14007, null], [14007, 14117, null], [14117, 14500, null], [14500, 14695, null], [14695, 15088, null], [15088, 15285, null], [15285, 15339, null], [15339, 15377, null], [15377, 15419, null], [15419, 15458, null], [15458, 15558, null], [15558, 15597, null], [15597, 15754, null], [15754, 15789, null], [15789, 16131, null], [16131, 16169, null], [16169, 16368, null], [16368, 16655, null], [16655, 16966, null], [16966, 17254, null], [17254, 17441, null], [17441, 17686, null], [17686, 17761, null], [17761, 17883, null], [17883, 17928, null], [17928, 17968, null], [17968, 18139, null], [18139, 18157, null], [18157, 18177, null], [18177, 18203, null], [18203, 18267, null], [18267, 18512, null], [18512, 19145, null], [19145, 19289, null], [19289, 19450, null], [19450, 19716, null], [19716, 20258, null], [20258, 20330, null], [20330, 20656, null], [20656, 20879, null], [20879, 21126, null], [21126, 21157, null], [21157, 21188, null], [21188, 21306, null], [21306, 21332, null], [21332, 21602, null], [21602, 21680, null], [21680, 22135, null], [22135, 22647, null], [22647, 23122, null], [23122, 23158, null], [23158, 23202, null], [23202, 23575, null], [23575, 23844, null], [23844, 24175, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 24175, null]], 
"google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 24175, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 24175, null]], "pdf_page_numbers": [[0, 95, 1], [95, 185, 2], [185, 453, 3], [453, 702, 4], [702, 868, 5], [868, 1030, 6], [1030, 1401, 7], [1401, 1681, 8], [1681, 2026, 9], [2026, 2195, 10], [2195, 2392, 11], [2392, 2941, 12], [2941, 3437, 13], [3437, 3693, 14], [3693, 3970, 15], [3970, 4223, 16], [4223, 4307, 17], [4307, 4533, 18], [4533, 4669, 19], [4669, 4790, 20], [4790, 4863, 21], [4863, 4956, 22], [4956, 5054, 23], [5054, 5231, 24], [5231, 5484, 25], [5484, 5685, 26], [5685, 5964, 27], [5964, 6285, 28], [6285, 6381, 29], [6381, 6509, 30], [6509, 6810, 31], [6810, 6929, 32], [6929, 7327, 33], [7327, 7814, 34], [7814, 8154, 35], [8154, 8870, 36], [8870, 9074, 37], [9074, 9394, 38], [9394, 9678, 39], [9678, 10081, 40], [10081, 10255, 41], [10255, 10377, 42], [10377, 10473, 43], [10473, 10613, 44], [10613, 10885, 45], [10885, 11163, 46], [11163, 11474, 47], [11474, 11697, 48], [11697, 11895, 49], [11895, 12084, 50], [12084, 12240, 51], [12240, 12421, 52], [12421, 12650, 53], [12650, 12782, 54], [12782, 12997, 55], [12997, 13085, 56], [13085, 13486, 57], [13486, 13884, 58], [13884, 13974, 59], [13974, 14007, 60], [14007, 14117, 61], [14117, 14500, 62], [14500, 14695, 63], [14695, 15088, 64], [15088, 15285, 65], [15285, 15339, 66], [15339, 15377, 67], [15377, 15419, 68], [15419, 15458, 69], [15458, 15558, 70], [15558, 15597, 71], [15597, 15754, 72], [15754, 15789, 73], [15789, 16131, 74], [16131, 16169, 75], [16169, 16368, 76], [16368, 16655, 77], [16655, 16966, 78], [16966, 17254, 79], [17254, 17441, 80], [17441, 17686, 81], [17686, 17761, 82], [17761, 17883, 83], [17883, 17928, 84], [17928, 17968, 85], [17968, 18139, 86], [18139, 18157, 87], [18157, 18177, 88], [18177, 18203, 89], [18203, 18267, 90], [18267, 18512, 91], [18512, 19145, 92], [19145, 19289, 93], [19289, 19450, 94], [19450, 19716, 95], [19716, 20258, 96], [20258, 20330, 97], [20330, 20656, 98], [20656, 20879, 99], [20879, 21126, 100], [21126, 21157, 101], [21157, 21188, 102], [21188, 21306, 103], [21306, 21332, 104], [21332, 21602, 105], [21602, 21680, 106], [21680, 22135, 107], [22135, 22647, 108], [22647, 23122, 109], [23122, 23158, 110], [23158, 23202, 111], [23202, 23575, 112], [23575, 23844, 113], [23844, 24175, 114]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 24175, 0.05165]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
a38cb61ed060d979cd53ec13e8488f877bcefd90
Standards, compliance, and Rational Unified Process Integrating RUP and the PMBOK William Cottrell May 26, 2004 from The Rational Edge: This article explains the relationship between IBM Rational Unified Process, or RUP, and the PMBOK, the Project Management Body of Knowledge, maintained by the Project Management Institute, or PMI. It describes the relationship of RUP to industry standards, what compliance means, how to leverage standards to improve your tailored use of RUP, and how you integrate those standards with RUP to achieve business value. Standards -- how some people hate that word. And while I don't share their distaste, I do understand it; nearly every day, I have some reason to discuss the relationship of industry standards to Rational Unified Process®, or RUP®. Some think of standards as their nemesis. I think of standards as an opportunity for companies to better integrate their software development efforts with the business and legal processes defining much of today's corporate landscape. True, software teams may not be involved in most audits, inspections, certifications, or assessments, but more than likely, IT teams are heavily involved with the results in the form of financial, organizational, and/or business process improvements. What drives us to pursue compliance with industry standards? Consider a competitive merger in which the acquired company may already be compliant with some industry standard and the acquiring company wants to achieve the same level of compliance, company wide. Or consider a company deciding to outsource some part of its business that is not a core competency. More than likely, the external supplier is using standards that may or may not be part of the hiring company's current business processes. Finally, consider a regulatory audit. Any deficiencies will no doubt require changes in business processes. That means software development could be affected. The list of drivers goes on, and all are cases for standards adoption. Believe it or not, there are industry standards, guidelines, and, yes, processes that can help us all achieve business goals, resolve business process deficiencies, gain a competitive edge, do more with less, or simply provide the equivalent of a thesaurus for better communication. Over the past two decades, I’ve seen -- or been a part of -- my share of process improvement projects, and I’m now convinced that industry standards and guidance are not about whether your company is compliant with them, but whether you can leverage them to improve your processes. Yes, it is sometimes important to have that banner of compliance hanging outside the office building, but the bottom line is, if your improvements don't add value, the improvements will be superficial. RUP is designed to be your business process for software development. While RUP embodies many industry best practices included in industry standards, in general, it is not designed to be compliant with industry standards. But RUP can be, and is, implemented -- with some tailoring -- to help a company become compliant with industry standards. In other words, RUP is a vehicle to help achieve business goals, gain a competitive edge, or simply provide a better communication vehicle so industry standards can add value in your workplace. One such standard or guide is the Project Management Institute’s "Project Management Body of Knowledge®," or PMBOK®. The PMBOK "describes the sum of knowledge within the profession of project management". 
Its stated purpose is to "identify and describe that subset of the PMBOK that is generally accepted" and to "provide a common lexicon within the profession and practice of talking and writing about project management," but clearly some practitioners think of the PMBOK as their software project management process and procedures.

This article explains the proper relationship between RUP and industry standards, what compliance means, how to leverage standards to improve your tailored use of RUP, and how you integrate them with RUP to achieve business value. If you are not familiar with RUP or the PMBOK, start by reading about each in the next section. Otherwise, head straight to the section entitled How RUP and the PMBOK relate to each other.

Relating RUP and the PMBOK

To understand how RUP and the PMBOK relate to each other, you must first understand their respective concepts and frameworks.

What is RUP?

RUP is a risk-driven, use-case-based, and architecture-centric, iterative software development process. RUP embodies industry-standard management and technical methods and techniques to provide a software engineering process particularly suited to creating and maintaining component-based software system solutions. RUP communicates roles, activities, and artifacts organized by process workflows that guide project teams through software engineering disciplines under the purview of operational business phases and decision making milestones.

RUP’s foundation consists of three key elements: the role, the activity, and the artifact, as shown in Figure 1. The role performs activities and produces artifacts. Each role is primarily responsible for a set of activities and artifacts. But all roles will contribute to other activities and artifacts. Roles, activities, and artifacts are used repeatedly during the execution of workflows. The workflows form a sequence of tasks unique to each of the nine software disciplines in the RUP iterative development software lifecycle framework (see Figure 2).

**Figure 1: Key elements of IBM Rational Unified Process**

The RUP framework is two dimensional, with axes indicating *time* and *content*. The time dimension is organized by phases, iterations, and milestones. The content dimension consists of software disciplines containing the workflows, roles, activities, and artifacts as they apply to that discipline. You implement the RUP framework via a complementary toolset, the capabilities of which generally map to the types of activities and artifacts required (Figure 3).

**Figure 2: RUP framework**

**Figure 3: The RUP framework is implemented via a complementary toolset**

As shown in Figure 3, RUP consists of five distinct parts:

1. **The RUP process framework.** This is the knowledge base of industry-proven best practices that forms the heart of RUP.
2. **The process delivery tools.** These are the tools that deliver the valuable process content to the practitioner when needed, in the form and quantity they need.
3. **The Rational Process Workbench.** This consists of RUP Organizer and RUP Modeler. RUP Organizer allows you to create simple plug-ins that complement, without altering, RUP's underlying structure.
RUP Modeler allows you to create structural plug-ins for RUP that change RUP's underlying meta-model. 4. **The Configuration tool.** Otherwise known as RUP Builder, helps RUP users configure a base RUP configuration with the plugins created in RUP Organizer and RUP Modeler. What is the PMBOK? The PMI PMBOK is a basic reference for those interested in or already working in the project management profession. It contains a subset of the body of knowledge maintained by project management practitioners and academic institutions. That subset includes the generally understood best practices that are widely used for project management in a wide variety of industries. Like RUP, the PMBOK describes key elements and processes, and it defines a project management framework. But it does not provide a prescription for using them in the context of software development. Rather, it is a general guideline for project management in any industry. The PMI expects experts in their specific industries to apply these guidelines to their respective business processes. The key elements include roles, project management processes, and artifacts. Just as in RUP, the role performs the process activities to produce artifacts. The PM role, project management processes, and artifacts are grouped in the project management discipline as knowledge areas. The PM processes describe best practice details for each knowledge area. The PMBOK framework consists of process groups, knowledge areas, and project management (PM) processes\(^2\) (Figure 4). The knowledge areas group the PM processes by project management content. That is, we can categorize the content of the PM processes into one of nine knowledge areas. The process groups (initiating, planning, executing, controlling, and closing, etc.) organize the more detailed PM processes over time. Figure 4: The PMBOK PM process matrix If we illustrate the framework in a diagram (see Figure 5) showing level of activity for each process group based on the time and content dimensions, we see a relative weight of activity over the project lifecycle that somewhat resembles that of the RUP framework diagram (see Figure 2 above). Figure 5: PMBOK framework Relating RUP and the PMBOK Now that we understand the RUP and PMBOK concepts and frameworks, we can compare them to help us understand how they relate. We will compare their utility to determine how one framework fits with the other. Here is the comparison: - The PMBOK describes guidelines based on industry best practices. RUP helps software development teams implement software industry best practices. While RUP is tool-independent, when you use it in conjunction with IBM’s software development tools, you can significantly improve productivity, completeness, reusability, and more. - The PMBOK describes a generic project lifecycle. RUP prescribes a generic software development lifecycle within a project lifecycle. - The PMBOK describes guidelines for any size project. RUP can be tailored to implement any size software project. In making the above comparisons, my point is that the PMBOK describes project management best practices and RUP prescribes -- helps us implement -- software development best practices, some of which are related to project management. This is a key differentiation that helps answer the question I am often asked: "Does the PMBOK fit into RUP, or does RUP fit into the PMBOK"? The answer is “neither.” Why? Because the PMBOK by its own definition is designed to be applied to your existing business processes. 
Therefore, we can implement RUP as our software development business process, tailor it to our company, a line of business, a department, a program, or some other organizational unit, then apply PMBOK best practices. So, how does all this fit together with respect to RUP and PMBOK utility? Let's look at a few pictures to try to explain.

**Figure 6: The PMBOK framework is implemented during each iteration within a RUP project**

Figure 6 illustrates how the PMBOK framework is implemented during each iteration within a RUP project. A RUP project uses PMBOK best practices in every iteration, in all four RUP phases (Inception, Elaboration, Construction, and Transition) as part of the project management discipline. That means we need to tailor RUP to the PMBOK key elements. While the PMBOK is a framework and guidelines, it implies some roles, activities, and artifacts; so, we will consider the PMBOK as an existing process and incorporate its best practices into RUP.

**Tailoring RUP to PMBOK best practices**

At the risk of sounding too much like a commercial, tailoring RUP is now easier than ever with the use of the Rational Process Workbench. Once we know how and why we are going to tailor RUP, we can build plug-ins to configure into RUP. Unfortunately, we still have to apply some brain power to accomplish the "what" and "how" parts of the task. It would be wonderful if all we had to tailor was the RUP Project Management discipline. Unfortunately, project management activities are embedded in more than just this discipline. The first step is to find the things you need to tailor. The next step is to figure out how to capture and communicate those changes.

Find what needs to be tailored

Below, I outline the steps to find what you need to tailor. Teams should document the results of these steps in whatever manner they wish, but I will offer a simple example of a result here:

1. Decide what configuration of RUP to start tailoring. RUP comes in three sizes: small, medium, and large (called Classic RUP). Determining how to choose one of these three is beyond the scope of this article. But for example purposes, I will use the RUP Classic configuration (the full RUP). There will be more roles, activities, and artifacts to review for tailoring. So choose wisely.
2. Ensure that you are familiar with the details of the PMBOK framework elements and the RUP key elements. I would not start this activity without having thoroughly read both the PMBOK and the RUP content (at least the role-based material) so that you retain some of the content from each.
3. To tailor RUP with the PMI best practices, we will need to build a map of PMBOK roles, PM processes, and outputs to RUP roles, activities, and artifacts. This step will take you a long way in your understanding of RUP and PMBOK content. But the mapping is not itself the tailoring we require. It is only the first step.

While I have seen a number of mappings of RUP and the PMBOK, I would suggest that anyone looking to tailor RUP with the PMBOK best practices might want to do their own mapping. First, it is not that difficult, because there are not so many roles, activities, and artifacts in the PMBOK. Second, the mapping effort will give you a deep appreciation and insight into the content of each framework. Third, there is enough subjectivity in a mapping that you would probably tailor any existing mapping anyway, just to create consistency with your environment.
For these reasons, I recommend you do this mapping yourself and be confident that it fits your organizations project management paradigm and that nothing has been overlooked. There are many ways you could build a map. I suggest you start with each RUP role diagram (there are about 30 in the full RUP configuration), each of which lists all the activities that that role must complete. 1. For each RUP activity in that role (there are about 150 activities for all roles combined in the full RUP configuration), map it to a PMBOK Process Group (Initiating, Planning, Executing, Controlling, Closing). You will not have to spend much time on each. The title alone will give you a hint if there is anything project management related or not. 2. Repeat this step for each role and collect all the RUP activities for PM processes. 3. Then compare the content of each PMBOK PM process for that process group in the PMBOK to the RUP activities associated with that group. 4. From that comparison, determine if you need to adjust any RUP input artifacts with any PM process inputs, RUP steps with PM process tools and techniques, or RUP resulting artifacts (there are more than a hundred for all roles combined in the full RUP configuration) with PM process outputs. (Remember, this part of the process is very subjective, which is why you should do your own mapping.) 5. If you find that any changes are required, write them down. 6. Repeat these steps until you covered all activities for all roles; that should include all artifacts, too. For the purpose of this article, I will provide an initial mapping of all the PM Process Groups to the RUP Project Manager role and associated activities. This mapping is shown in Figure 7. Figure 7: RUP - PMBOK mapping example Now, let's examine the first PMBOK Process Group: Initiating. Initiating, in red, is mapped to three RUP activities: Developing Business Case, Initiate Project, Initiate Iteration (also shown in red). Each PM process is mapped to the RUP activities. I repeat that for each role. Table 1 shows Initiating mappings to RUP activities; for the purposes of this article, I will only trace the Initiating mappings to RUP. Table 1: PMBOK Initiating PM process group mapping to RUP activities <table> <thead> <tr> <th>PMBOK process group</th> <th>RUP roles affected</th> <th>RUP activities</th> <th>Changes to ask</th> </tr> </thead> <tbody> <tr> <td>Initiating</td> <td>Project manager</td> <td>Developing business case, Initiate project, Initiate Iteration</td> <td>Include company project selection methods using our portfolio analysis process by modifying those steps under Developing the Business Case.</td> </tr> <tr> <td></td> <td>Management reviewer</td> <td>Project approval review</td> <td>Add Rational Unified Process work order artifact to the resulting artifacts list of the management reviewer and as input to the initiate project activity.</td> </tr> <tr> <td></td> <td>Business process analyst</td> <td>Identify business goals</td> <td>Add our company project selection criteria as input.</td> </tr> </tbody> </table> Note that in Table 1, I describe some changes to be made where, in my opinion, RUP does not satisfy the intent of the PMBOK guidelines; note also that I show only the exceptions. If I were trying to show my management or an auditor that I am complying with the intent of the PMI Standard, I would probably have to include all the mappings of RUP key elements to PMBOK elements. 
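The article works the mapping out in prose and a table; purely as an illustration (not from the original article), a team could also record the same information as data so the exceptions can be queried later when assembling the development case. The entries below paraphrase Table 1, all names are illustrative, and Python is used only for brevity.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MappingEntry:
    """One row of a PMBOK process group -> RUP mapping (cf. Table 1)."""
    pmbok_process_group: str      # e.g., "Initiating"
    rup_role: str                 # e.g., "Project manager"
    rup_activities: List[str]     # RUP activities touched by this group
    changes_to_make: str = ""     # tailoring notes; exceptions only

# Illustrative entries mirroring Table 1 (Initiating only).
mapping: List[MappingEntry] = [
    MappingEntry(
        "Initiating", "Project manager",
        ["Develop business case", "Initiate project", "Initiate iteration"],
        "Fold our portfolio-analysis project selection method into the "
        "Develop Business Case steps.",
    ),
    MappingEntry(
        "Initiating", "Management reviewer",
        ["Project approval review"],
        "Add a work order artifact as input to Initiate Project.",
    ),
    MappingEntry(
        "Initiating", "Business process analyst",
        ["Identify business goals"],
        "Add our project selection criteria as input.",
    ),
]

def exceptions_for(group: str) -> List[MappingEntry]:
    """List only the entries that require tailoring for a given process group."""
    return [e for e in mapping if e.pmbok_process_group == group and e.changes_to_make]

for entry in exceptions_for("Initiating"):
    print(f"{entry.rup_role}: {entry.changes_to_make}")
```

A structure like this also makes it easy to print the full mapping, rather than only the exceptions, when an auditor asks for it.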
Once I finish this approach for all PMBOK PM Process Groups in each RUP activity for each RUP role, I can decide how I will make those changes.

**Capture and communicate the changes**

The simplest way to capture and communicate the RUP tailoring is to build a RUP development case. The development case template is available in RUP to capture the process after you have tailored it, and you can place this development case on the project Website for reference. In it you will have links to the tailored process. The tailored material can then be reviewed by everyone in a team meeting prior to the start of a RUP iteration.

I could take this process a step further by illustrating how to build a plug-in using RUP Process Workbench (RPW) customization tools. As noted at the beginning of this article, these tools are RUP Organizer, which lets you make common customizations that don't affect the underlying RUP meta-model, and RUP Modeler, which you use to make structural plug-ins. But the details of that process could easily double the size of this article. So I will end this discussion here.

**Try this yourself**

I've seen or participated in my share of process improvement projects over the last couple of decades. In doing so, I have witnessed no more than limited success when the industry standard is used as the process for the company, line of business, department, program, or some other organizational unit. I've also experienced attempts to map a company's process to standards and guidelines, and those performing the mapping were certain that the effort would show the way to compliance, maturity, or certification. While mapping a standard to a business provides helpful insight into the standard and the process, it is only part of the effort needed to integrate them. Case in point: RUP and the PMI PMBOK.

By following the techniques outlined in this article, I believe you can tailor RUP to include the PMBOK best practices. Specifically, at IBM Rational, we have found that we can adjust RUP input artifacts with PMBOK process inputs, RUP steps with PMBOK process tools and techniques, and RUP resulting artifacts with PMBOK process outputs. Once we have found what needs to be tailored, we have a few options to make those changes available to the project team. We can build a development case document, use RUP Organizer, and/or use RUP Modeler.

**References**

Notes

2. Op. cit. Reproduced here by permission of the PMI.
3. Editor's Note: An alternative method for mapping RUP and the PMBOK is detailed in Serge Charbonneau's article, "Software project management: A mapping between RUP and the PMBOK," also published in this issue of The Rational Edge.

© Copyright IBM Corporation 2004
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)
{"Source-Url": "https://www.ibm.com/developerworks/rational/library/4763-pdf.pdf", "len_cl100k_base": 4267, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 20715, "total-output-tokens": 4829, "length": "2e12", "weborganizer": {"__label__adult": 0.0002899169921875, "__label__art_design": 0.0003528594970703125, "__label__crime_law": 0.0004494190216064453, "__label__education_jobs": 0.002834320068359375, "__label__entertainment": 5.27501106262207e-05, "__label__fashion_beauty": 0.0001264810562133789, "__label__finance_business": 0.0033111572265625, "__label__food_dining": 0.00031280517578125, "__label__games": 0.0003995895385742187, "__label__hardware": 0.0003063678741455078, "__label__health": 0.00030732154846191406, "__label__history": 0.00016891956329345703, "__label__home_hobbies": 9.757280349731444e-05, "__label__industrial": 0.0003390312194824219, "__label__literature": 0.0002510547637939453, "__label__politics": 0.0002465248107910156, "__label__religion": 0.00025463104248046875, "__label__science_tech": 0.0034465789794921875, "__label__social_life": 0.00013887882232666016, "__label__software": 0.0169677734375, "__label__software_dev": 0.96875, "__label__sports_fitness": 0.00024509429931640625, "__label__transportation": 0.00029969215393066406, "__label__travel": 0.00020170211791992188}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20842, 0.01498]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20842, 0.2748]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20842, 0.93168]], "google_gemma-3-12b-it_contains_pii": [[0, 2007, false], [2007, 5174, null], [5174, 6387, null], [6387, 7574, null], [7574, 9141, null], [9141, 10898, null], [10898, 12519, null], [12519, 15842, null], [15842, 17328, null], [17328, 20306, null], [20306, 20842, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2007, true], [2007, 5174, null], [5174, 6387, null], [6387, 7574, null], [7574, 9141, null], [9141, 10898, null], [10898, 12519, null], [12519, 15842, null], [15842, 17328, null], [17328, 20306, null], [20306, 20842, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, true], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20842, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20842, null]], "pdf_page_numbers": [[0, 2007, 1], [2007, 5174, 2], [5174, 6387, 3], [6387, 7574, 4], [7574, 9141, 5], [9141, 10898, 6], [10898, 12519, 7], [12519, 15842, 8], [15842, 17328, 9], [17328, 20306, 10], [20306, 20842, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20842, 0.05495]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
309b68d4af053e5b18319f5bc00f1b8357064448
Identifying and Estimating Technical Debt for Service Composition in SaaS Cloud Satish Kumar and Rami Bahsoon School of Computer Science University of Birmingham Birmingham, U.K. {s.kumar.8.r.bahsoon}@cs.bham.ac.uk Tao Chen Department of Computing and Technology Nottingham Trent University Nottingham, U.K. tao.chen@ntu.ac.uk Rajkumar Buyya School of Computing and Information System, The University of Melbourne Melbourne, Australia rbuyya@unimelb.edu.au Abstract—A composite service in multi-tenant SaaS cloud would inevitably operate under dynamic changes on the workload from the tenants, and thus it is not uncommon for the composition to encounter under-utilization and over-utilization on the component services. However, both of those cases could be good or bad: the former implies that although there is under-utilization, the pay-off afterwards are more significant; the latter, in contrast, refers to the over-utilization that leads to trivial pay-off, or nothing at all. Such a notion perfectly matches with the Technical Debt (TD) metaphor in Software Engineering. As a result, it is necessary to identify the root causes of the debts and where the debt can be manifested in the service composition, which, in turn, would offer great help on the decision making process of service composition. In this paper, we propose a novel approach for identifying the technical debt in service composition under SaaS cloud. The approach combines time series forecasting and a newly proposed technical debt model to estimate the future debt and utility in the service composition. Through a real world case study, we demonstrate that our approach can successfully identify both the good and bad debts, while producing satisfactory accuracy on estimating the technical debt involved in the service composition under SaaS cloud. Keywords: Service composition, Technical debt, Service utility, Quality of Service, Multi-tenant. I. INTRODUCTION Service composition is a logical combination of multiple abstract services resulting into a single unit (e.g., an application) for performing complex requests submitted by the users in the multi-tenant SaaS cloud [1]. An abstract service can be realized by a set of candidate component services [1], each of which comes with different capacities to process \( n \) requests per second. However, uncertainty in workload generated by the tenants may affect the overall Quality of Service (QoS), and more importantly, it may cause under-utilization or over-utilization on the component services with respect to their capacities. Consequently, the operational cost could outweigh the service revenue or violates the tenants’ Service Level Agreement (SLA), which may trigger a recomposition. A service composition is sub-optimal from the utility point of view when the selected component services are under-/over-utilized during execution. While over-utilization generally implies negative impacts, under-utilization could be good or bad: if the pay off in the future is more significant, then we can temporally accept under-utilization; otherwise, the under-utilization would only incur unneeded cost. Such a notion perfectly matches with the Technical Debt (TD) metaphor [2][3] in Software Engineering. In particular, there may be an imperfect composition decision, leading to a new forms of technical debts that explicate this category of systems. The debt can be intentional; it can be due to recomposition plans that provide higher services capacity than what is currently demanded by the users. 
Technical debt can also be incurred unintentionally in the service composition, for example, when a component service receives a high volume of request workload that it cannot fully process, consequently violating the SLA. In this example, the penalty costs against the response time violation and the eventual recomposition costs can be viewed as interest on the debt for a given execution instance.

To better support the decision making process of service composition in SaaS cloud, in this paper, we propose an approach that combines the technical debt metaphor and time series forecasting for identifying and estimating technical debt in service composition. Notably, we have made the following contributions:

- We tailor a time series forecasting method, namely the ARFIMA model [4], into the debt model for estimating future debt.
- We propose a model that explicitly maps the concepts of TD into the context of service composition. Such a model is capable of quantifying both good and bad debt.
- The proposed debt model, enhanced by the time series forecasting method, allows us to build a utility model that provides more informed insights to the decision making process of service composition.

II. BACKGROUND AND RELATED WORK

Technical debt can be attributed to sub-optimal decisions, shortcuts in decisions, and/or deferred activities that incur extra cost or rework when they are carried into the future rather than addressed at the current time. The technical debt metaphor was initially coined by Cunningham in 1992 [2]. The software engineering community adopted this metaphor and discussed its applicability to many software artifacts, covering code, requirements, architecture, testing, and documentation, among others. The common understanding is that technical debt is the result of making technical compromises that are expedient in the short term but that create a technical context that increases complexity and cost in the long term [3]. If these technical compromises are not paid back, then technical debt is incurred and can degrade the system quality or the development team's productivity in the long term. Incurring technical debt is not always bad, if the organization makes an informed, strategic decision to incur the debt [5].

McConnell [6] classified the term "technical debt" into intentional and unintentional. An intentional debt is debt that an organization takes on to optimize the present value of the software project rather than its future value, or to make informed decisions for gaining short-term benefits. On the other hand, unintentional debt can be incurred unknowingly when an organization makes non-strategic or inappropriate decisions in a software project.

In recent years, researchers have applied the technical debt metaphor to cloud computing based services. Alzaghoul et al. [7] applied a real option approach for managing technical debt at service selection for cloud-based service oriented architecture. They identified the technical debt of substitution decisions based on the Binomial Real Option approach. Skourletopoulos et al. [8] described an approach for evaluating the technical debt for the selection of mobile cloud-based services. The problem is formulated based on cost-benefit analysis and assumes linear growth of users in the system model. However, the proposed approach did not consider time sensitivity and the runtime service execution environment.
The effective management of technical debt needs to consider these attributes, because runtime environment changes are at the root of technical debt, and time sensitivity guides the debt-aware decision about when to pay off the accumulated debt in service composition under SaaS cloud.

III. TECHNICAL DEBT IN SERVICE COMPOSITION

Technical debt in service composition can be observed at different levels (e.g., service utility, recomposition decisions, or SLA violation). Technical debt can be attributed to ill-informed design decisions that can, for instance, relate to sub-optimal capacity planning and be incurred when a composite service is not fully utilized. This, for example, can be due to a significant drop in request workload in SaaS cloud. As a consequence, the operational cost may exceed the service’s revenue. Furthermore, a composite service can bear a technical debt by design when environmental changes (e.g., partner service failure or QoS fluctuation) put pressure on the system to recompose the composite service. Additionally, technical debt can be associated with inappropriate engineering decisions or poorly justified run-time decisions for recomposing the composite services. These decisions can carry short-term benefits in terms of improving service utility, but they might not be geared towards long-term benefits or future value creation.

In summary, we argue that a technical debt-aware recomposition decision is needed for managing the issues described above. We motivate the need for treating this accrued debt as a “time-sensitive moving target” in service composition that needs to be dynamically monitored for transforming the accumulated debt into future value.

A. Technical Debt Indicators in Service Composition

Technical debt indicators carry information about what type of technical debt it is (good or bad), why and when it was incurred, how much debt was estimated, and when it will be paid off in the future [10]. We identified the following key TD indicators in service composition.

SLA violation: SLA violation constitutes unintentional technical debt in service composition: when a composite service does not satisfy the predefined response time specified in the end-users' SLA, the penalty cost against each violating request is counted as interest on the technical debt.

Runtime decisions: An inappropriate or poorly justified run-time decision for service recomposition may lead to technical debt by selecting unsuitable component services for the new composite service, ones that cannot support the scalability requirements under a changing request workload.

Service utility: Service utility constitutes technical debt when a composite service is sub-optimal from the utility point of view. For example, a sub-optimal composite service can incur an intentional debt in exchange for service scalability benefits in the future.

Moreover, decision making needs to know the nature of the accumulated debt in terms of good or bad debt. We describe good and bad debts from the composite service perspective and identify their consequences.

Good debt: A good technical debt in service composition is viewed as a “time-sensitive moving target” that needs to be monitored so that the accumulated debt can be transformed into future value. For example, Figure 2 shows a composite service that is under-utilized because it delivers more capacity than the users demand at time $t_1$, and it intentionally accumulates debt for a time period (e.g., $t_1$ to $t_5$).
We may accept such a debt because it is expected to be paid off in the future.

Bad debt: A bad debt in service composition may lead to a situation of continuous under-utilization of the composite service, in which the accumulated debt will not be paid off in the future, as shown in Figure 3. As a consequence, such accumulated debt negatively impacts the service utility and needs to be managed by taking proactive decisions.

IV. MEASURING TECHNICAL DEBT IN SERVICE COMPOSITION

In this section, we describe how ARFIMA, a time series forecasting method, can be used to predict the workload of a component service. Drawing on the prediction, we then present a formal technical debt model in the context of service composition under SaaS cloud. Such a model identifies and estimates the possible technical debt with respect to the overall utility, which provides greater insights to the decision making process of recomposition.

1) Request workload prediction: Undoubtedly, the dynamic changes of workload on a component service are the fundamental cause of possible technical debt. To predict such a workload, we use the Autoregressive Fractionally Integrated Moving Average (ARFIMA) model [4], a widely used time series model that guarantees prediction accuracy when a time series contains a long-memory pattern. We prepare the request workload time series, which contains the number of observed requests at each time interval (e.g., every second), and feed it as input to ARFIMA for predicting the future request workload at every second. Formally, the general expression of ARFIMA($p, d, q$) can be expressed as:

$$ (1 - \sum_{i=1}^{p} \phi_i B^i)(1 - B)^d W_t = (1 + \sum_{i=1}^{q} \theta_i B^i) \varepsilon_t \tag{1} $$

whereby $W_t$ is the workload for a component service at time $t$ and $\varepsilon_t$ is a white noise process. $B$ is the backward shift operator and $(1 - B)^d$ is the fractional differencing operator. The fractional number $d$ is the memory parameter, with $d \in [-0.5, 0.5]$. $1 - \sum_{i=1}^{p} \phi_i B^i$ is the autoregressive polynomial of order $p$, and $1 + \sum_{i=1}^{q} \theta_i B^i$ is the moving average polynomial of order $q$ in the lag operator $B$. We estimate the value of the memory parameter $d$ using the $fgPH()$ function in the R forecast package proposed by [11]. The value of $p$ is the autoregressive order, which indicates the number of distinct lags appearing in the forecasting equation, and $q$ is the moving average order, which gives the number of lagged forecast errors in the prediction equation.

A. Technical Debt Computing Model

To estimate technical debt in service composition, we adopt the notions of principal and interest [3][10] from the technical debt metaphor into a contextualized model for the analysis.

1) Recomposition principal: In the context of service recomposition, we use principal to denote the invested cost of recomposing the entire composite service for improving service utility. The principal can be derived from the resource usage, such as the CPU time, or the effort spent by the software engineer on the decision making of the service composition. Specifically, we compute the principal for recomposing a service using equation 2:

$$ Principal = E \times C_{cpu} \tag{2} $$

where $E$ is the effort (e.g., CPU time) spent on the recomposition and $C_{cpu}$ is its unit cost.

2) Interest: Interest can accumulate over time on a component service that is under-utilized or over-utilized. In this context, the interest may accumulate over time on the $y$th component service for the $x$th abstract service (denoted as $CS_{x,y}$).
For such a component service, the interest accumulated up to the future $n$ timesteps can be derived from the actual service capacity (i.e., the service throughput, denoted as $T$) and the predicted workload at time $t$ (i.e., $W_t$) from equation 1, as shown below:

$$ Int(CS_{x,y}) = \begin{cases} \sum_{t=1}^{n} (T - W_t) \times C & \text{if } W_t \leq T \\ \sum_{t=1}^{n} \left(\frac{W_t}{T} - R_{SLA}\right) \times P & \text{otherwise} \end{cases} \tag{3} $$

Clearly, the interest differs depending on two different scenarios of utilizing the capacity of a component service:

(a) Service under-utilization: When the component service is under-utilized, i.e., the predicted workload is smaller than or equal to the capacity of the component service ($W_t \leq T$), the interest can be calculated as the accumulated cost of unused service capacity. For example, suppose that on a component service the execution cost of processing each request is \$0.00015 (i.e., $C$ in equation 3), and the component service has the capacity to process 55 requests per second while the predicted workload on this component service is 48 requests per second. Assuming that the interest accumulated so far is \$1.02, this component service will then carry an interest of $1.02 + (55 - 48) \times 0.00015 = \$1.02105$.

(b) Service over-utilization: When the component service is over-utilized, i.e., the predicted workload is greater than the capacity of the component service ($W_t > T$), the SLA requirement on latency (denoted as $R_{SLA}$) would often be violated [12], and thus a penalty rate (denoted as $P$) is used to compute the extra cost to be paid. Suppose again, for a component service, that the interest accumulated so far is \$1.02, and that a given SLA contains a requirement of 2 seconds latency and the penalty rate for latency violation is \$0.0025 per second. Now, assuming that the average service latency, derived from the predicted workload and its capacity, is 3.5 seconds, the interest would be $\$1.02 + (3.5 - 2.0) \times 0.0025 = \$1.02375$.

Finally, up to the future $n$ timesteps, the overall technical debt (denoted as $Debt$) of a decision for recomposing the services can be identified and estimated from the principal and interests, as shown in equation 4:

\[ Debt = Principal + \sum_{x=1}^{k} Int(CS_{x,y}) \tag{4} \]

where $k$ is the total number of abstract services and $CS_{x,y}$ is the selected component service for the $x$th abstract service.

**B. Calculating Debt-Aware Utility for Service Composition**

The utility of a service composition consists of the revenue and the fundamental operation cost. In particular, the revenue and cost accumulated for the future $n$ timesteps can be calculated as follows:

\[ Revenue(CS_{x,y}) = \sum_{t=1}^{n} W_t \times C_{tenants} \tag{5} \]

\[ Cost(CS_{x,y}) = \sum_{t=1}^{n} W_t \times C \tag{6} \]

whereby $C_{tenants}$ is the charge to the tenants per request, which directly contributes to the revenue generated by the composite service, $C$ is again the cost per request to the SaaS provider for using a component service and its infrastructure, and $W_t$ is the predicted workload at time $t$.
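As a concrete illustration (not part of the original paper), the calculations in equations (3)-(6) can be sketched in a few lines of Python. The class and function names below are ours, the per-second workload forecast is assumed to come from the fitted ARFIMA model of equation (1), and the recomposition principal of equation (2) is passed in as a precomputed value.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple

@dataclass
class ComponentService:
    capacity: float          # T: requests per second the service can handle
    cost_per_request: float  # C: cost to the SaaS provider per request
    sla_latency: float       # R_SLA: agreed response time in seconds
    penalty_rate: float      # P: penalty per second of latency violation

def interest(cs: ComponentService, forecast: Sequence[float]) -> float:
    """Interest accumulated over the forecast horizon, equation (3)."""
    total = 0.0
    for w_t in forecast:
        if w_t <= cs.capacity:
            # Under-utilization: pay for the unused capacity.
            total += (cs.capacity - w_t) * cs.cost_per_request
        else:
            # Over-utilization: approximate latency as W_t / T and pay the SLA penalty.
            total += (w_t / cs.capacity - cs.sla_latency) * cs.penalty_rate
    return total

def total_debt(principal: float,
               services: Sequence[ComponentService],
               forecasts: Sequence[Sequence[float]]) -> float:
    """Debt of a recomposition decision, equation (4): principal plus all interests."""
    return principal + sum(interest(cs, f) for cs, f in zip(services, forecasts))

def revenue_and_cost(charge_per_request: float,
                     services: Sequence[ComponentService],
                     forecasts: Sequence[Sequence[float]]) -> Tuple[float, float]:
    """Accumulated revenue and cost over the horizon, equations (5) and (6)."""
    revenue = sum(w for f in forecasts for w in f) * charge_per_request
    cost = sum(w * cs.cost_per_request for cs, f in zip(services, forecasts) for w in f)
    return revenue, cost
```

Subtracting the total debt from revenue minus cost then yields the debt-aware utility defined in the next subsection.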
Combining this with the debt model, the utility of a service composition decision (denoted as $U$) for the future $n$ timesteps can be calculated as:

\[ U = \sum_{x=1}^{k} Revenue(CS_{x,y}) - \sum_{x=1}^{k} Cost(CS_{x,y}) - Debt \tag{7} \]

Such a utility model is debt-aware and predictive, which helps to consolidate the decision making process of service composition.

**C. Identifying Good and Bad Debt**

According to our definition of good and bad debt in Section III, the debt depends on the service utility over execution time. Our approach dynamically monitors the service utility for the future timesteps. The incurred debt can be considered good or bad based on equation 7. Suppose the overall utility $U_t$ generated at time $t$ is \$1.5 with the debt accepted, and for the next monitoring period the predicted utility at $t+n$ is \$2.0; this implies that the debt accumulated at time $t$ is good and should be accepted. This is because the debts accumulated in the past would be paid off by $t+n$, leading to an anticipated improvement in the overall utility. Otherwise, the service capacity remains under-utilized and accumulates bad debt that is unlikely to be paid off in the future. Specifically, we classify the debt as good or bad using the following equation:

\[ Debt = \begin{cases} Debt_{good} & \text{if } U_t \leq U_{t+n} \\ Debt_{bad} & \text{otherwise} \end{cases} \tag{8} \]

The notion of good and bad debt provides a simple, intuitive, yet effective way for the service provider to make more informed decisions on the recomposition of services.

**V. EXPERIMENTAL RESULTS**

**A. Experimental Setup**

We extended the Service Composition Middleware [13], a tool for modelling and simulating multi-tenant service composition. Such a tool exploits an evolutionary algorithm to optimize service composition in the SaaS cloud. On top of that, we implemented our approach to identify and estimate technical debt throughout the life-cycle of a service composition. In particular, our experiments aim to answer the following research questions:

- **RQ1**: Is the approach sufficiently accurate in estimating the technical debt (and utility) for service composition in SaaS cloud?
- **RQ2**: Can our approach successfully identify good and bad debts?

We conducted all experiments on the same machine, with an Intel Core i7 2.60 GHz processor, 8 GB RAM, and Windows 10. We use Sales CRM, a real-world application, as our testing environment (shown in Figure 3). The Sales CRM application processes the incoming (actual) request workload as shown in Figure 6. In our experiments, the workload is collected from the 1998 FIFA World Cup website trace [14] for a length of 7200 seconds. To evaluate the prediction quality, we preprocess the workload by using the first half as samples for training the forecasting model, while the remaining workload data is used for testing the accuracy. We conduct monitoring every 5 seconds. The ARFIMA model is implemented using the arfima package in R [15]. We run a simulation on the multi-tenant middleware by implementing our technical debt approach with the simulation parameters shown in Table I.

**B. Results Discussion for RQ1**

To predict the workload for each component service, we fit the ARFIMA model and evaluate the prediction accuracy using common accuracy metrics [11].
These metrics include the Mean Absolute Error (MAE), which measures the prediction accuracy by averaging the absolute difference between the actual and predicted values, and the Root Mean Square Error (RMSE), which is the standard deviation of the differences between the actual and predicted values. Moreover, Theil's coefficient indicates good forecasting if its value lies between 0 and 1, and poor prediction otherwise.

Table II provides a summary of the mean accuracy of predicting the workload for all the component services. From the table, we see that the MAE and RMSE are within 15% of the general workload, which ranges between 35 and 50 requests per second. This is considered a relatively low error, and thus the accuracy is acceptable. As a more detailed example, Figure 4 illustrates the workload trace for a component service. As we can see, the predicted workload generally matches the actual one.

Drawing on the predicted workload, the technical debt can be identified and estimated. Figure 5 shows the predicted and actual debt for all component services. Clearly, the two traces do not match exactly. However, we see that the slopes generally follow similar patterns and differ only in magnitude. The deviations between the two traces are also acceptable. In Figure 6, we also plot the predicted and actual utility of the service composition, and again we see generally similar traces. This implies that the predicted workload can also help to estimate the revenue and cost, not only the likely debt.

**C. Results Discussion for RQ2**

Throughout the entire 3600-second run of the service composition, and drawing on equation 8, we were able to identify the numbers of good and bad debts incurred by the recomposition decisions (monitored every 5 seconds), as shown in Table III. Clearly, in our case study, the number of good debts is higher than that of bad ones, implying a generally healthy status of the service composition. To provide a more detailed analysis of the good and bad debt, in Figure 7 we illustrate the total utilization of all the component services involved. In particular, we highlight two sets of examples: one set is classified as bad debt (60 debts) and the other as good debt (20 debts). Over the time horizon 1200s to 1500s, the debt accumulated at every 5-second interval is considered bad, and hence we have 60 bad debts as there are 60 monitoring points. This is because from one point to the next (5 seconds later), the utility is decreasing. In contrast, between the 2400s and 2500s time points, we found 20 good debts, as the debt accumulated at every point would be paid off by the next time point.

**VI. CONCLUSION AND FUTURE WORK**

This paper leverages the notion of technical debt to model the utility of service composition. Specifically, we identified key technical debt indicators that contribute to the accumulation of technical debt during the execution of a composite service. The model, enhanced by time series forecasting of the request workload, can identify and estimate the future debt and utility for service composition. In future work, we will study time sensitivity and its impact on taking technical-debt-aware proactive decisions for dynamic service recomposition.

REFERENCES
{"Source-Url": "http://www.buyya.com/papers/TechDebtSaaSCloud-ICWS2019.pdf", "len_cl100k_base": 5073, "olmocr-version": "0.1.50", "pdf-total-pages": 5, "total-fallback-pages": 0, "total-input-tokens": 17572, "total-output-tokens": 6296, "length": "2e12", "weborganizer": {"__label__adult": 0.0003399848937988281, "__label__art_design": 0.0004520416259765625, "__label__crime_law": 0.0003643035888671875, "__label__education_jobs": 0.0012683868408203125, "__label__entertainment": 0.0001138448715209961, "__label__fashion_beauty": 0.0001811981201171875, "__label__finance_business": 0.0023708343505859375, "__label__food_dining": 0.0004088878631591797, "__label__games": 0.000492095947265625, "__label__hardware": 0.00079345703125, "__label__health": 0.0007781982421875, "__label__history": 0.000278472900390625, "__label__home_hobbies": 0.00011152029037475586, "__label__industrial": 0.00043082237243652344, "__label__literature": 0.0004401206970214844, "__label__politics": 0.0003418922424316406, "__label__religion": 0.0003643035888671875, "__label__science_tech": 0.0697021484375, "__label__social_life": 0.000148773193359375, "__label__software": 0.0194244384765625, "__label__software_dev": 0.900390625, "__label__sports_fitness": 0.00023317337036132812, "__label__transportation": 0.00045180320739746094, "__label__travel": 0.0002036094665527344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 26451, 0.01529]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 26451, 0.23732]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 26451, 0.89991]], "google_gemma-3-12b-it_contains_pii": [[0, 5571, false], [5571, 10571, null], [10571, 15942, null], [15942, 21101, null], [21101, 26451, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5571, true], [5571, 10571, null], [10571, 15942, null], [15942, 21101, null], [21101, 26451, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 26451, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 26451, null]], "pdf_page_numbers": [[0, 5571, 1], [5571, 10571, 2], [10571, 15942, 3], [15942, 21101, 4], [21101, 26451, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 26451, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
5d01c28043146472a307ca2ec96dec78cc2a25a0
DNS Queries over HTTPS (DoH) Abstract This document defines a protocol for sending DNS queries and getting DNS responses over HTTPS. Each DNS query-response pair is mapped into an HTTP exchange. Status of This Memo This is an Internet Standards Track document. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Further information on Internet Standards is available in Section 2 of RFC 7841. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at https://www.rfc-editor.org/info/rfc8484. Copyright Notice Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved. This document is subject to BCP 78 and the IETF Trust’s Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License. Table of Contents 1. Introduction ................................................. 3 2. Terminology ................................................. 3 3. Selection of DoH Server ................................... 4 4. The HTTP Exchange ....................................... 4 4.1. The HTTP Request ................................... 4 4.1.1. HTTP Request Examples ....................... 5 4.2. The HTTP Response ................................... 7 4.2.1. Handling DNS and HTTP Errors ............... 7 4.2.2. HTTP Response Example ..................... 8 5. HTTP Integration ........................................... 8 5.1. Cache Interaction ................................... 8 5.2. HTTP/2 ............................................. 10 5.3. Server Push ....................................... 10 5.4. Content Negotiation ............................... 10 6. Definition of the "application/dns-message" Media Type .... 10 7. IANA Considerations ....................................... 11 7.1. Registration of the "application/dns-message" Media Type 11 8. Privacy Considerations ..................................... 12 8.1. On the Wire ....................................... 12 8.2. In the Server ..................................... 12 9. Security Considerations .................................... 14 10. Operational Considerations ............................... 15 11. References ................................................ 16 11.1. Normative References ............................ 16 11.2. Informative References .......................... 18 Appendix A. Protocol Development .............................. 20 Appendix B. Previous Work on DNS over HTTP or in Other Formats 20 Acknowledgments .............................................. 21 Authors' Addresses ........................................... 21 1. Introduction This document defines a specific protocol, DNS over HTTPS (DoH), for sending DNS [RFC1035] queries and getting DNS responses over HTTP [RFC7540] using https [RFC2818] URIs (and therefore TLS [RFC8446] security for integrity and confidentiality). 
Each DNS query-response pair is mapped into an HTTP exchange. The described approach is more than a tunnel over HTTP. It establishes default media formatting types for requests and responses but uses normal HTTP content negotiation mechanisms for selecting alternatives that endpoints may prefer in anticipation of serving new use cases. In addition to this media type negotiation, it aligns itself with HTTP features such as caching, redirection, proxying, authentication, and compression. The integration with HTTP provides a transport suitable for both existing DNS clients and native web applications seeking access to the DNS. Two primary use cases were considered during this protocol’s development. These use cases are preventing on-path devices from interfering with DNS operations, and also allowing web applications to access DNS information via existing browser APIs in a safe way consistent with Cross Origin Resource Sharing (CORS) [FETCH]. No special effort has been taken to enable or prevent application to other use cases. This document focuses on communication between DNS clients (such as operating system stub resolvers) and recursive resolvers. 2. Terminology A server that supports this protocol is called a "DoH server" to differentiate it from a "DNS server" (one that only provides DNS service over one or more of the other transport protocols standardized for DNS). Similarly, a client that supports this protocol is called a "DoH client". The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here. 3. Selection of DoH Server The DoH client is configured with a URI Template [RFC6570], which describes how to construct the URL to use for resolution. Configuration, discovery, and updating of the URI Template is done out of band from this protocol. Note that configuration might be manual (such as a user typing URI Templates in a user interface for "options") or automatic (such as URI Templates being supplied in responses from DHCP or similar protocols). DoH servers MAY support more than one URI Template. This allows the different endpoints to have different properties, such as different authentication requirements or service-level guarantees. A DoH client uses configuration to select the URI, and thus the DoH server, that is to be used for resolution. [RFC2818] defines how HTTPS verifies the DoH server’s identity. A DoH client MUST NOT use a different URI simply because it was discovered outside of the client’s configuration (such as through HTTP/2 server push) or because a server offers an unsolicited response that appears to be a valid answer to a DNS query. This specification does not extend DNS resolution privileges to URIs that are not recognized by the DoH client as configured URIs. Such scenarios may create additional operational, tracking, and security hazards that require limitations for safe usage. A future specification may support this use case. 4. The HTTP Exchange 4.1. The HTTP Request A DoH client encodes a single DNS query into an HTTP request using either the HTTP GET or POST method and the other requirements of this section. The DoH server defines the URI used by the request through the use of a URI Template. The URI Template defined in this document is processed without any variables when the HTTP method is POST. 
When the HTTP method is GET, the single variable "dns" is defined as the content of the DNS request (as described in Section 6), encoded with base64url [RFC4648]. Future specifications for new media types for DoH MUST define the variables used for URI Template processing with this protocol.

DoH servers MUST implement both the POST and GET methods.

When using the POST method, the DNS query is included as the message body of the HTTP request, and the Content-Type request header field indicates the media type of the message. POSTed requests are generally smaller than their GET equivalents.

Using the GET method is friendlier to many HTTP cache implementations.

The DoH client SHOULD include an HTTP Accept request header field to indicate what type of content can be understood in response. Irrespective of the value of the Accept request header field, the client MUST be prepared to process "application/dns-message" (as described in Section 6) responses but MAY also process other DNS-related media types it receives.

In order to maximize HTTP cache friendliness, DoH clients using media formats that include the ID field from the DNS message header, such as "application/dns-message", SHOULD use a DNS ID of 0 in every DNS request. HTTP correlates the request and response, thus eliminating the need for the ID in a media type such as "application/dns-message". The use of a varying DNS ID can cause semantically equivalent DNS queries to be cached separately.

DoH clients can use HTTP/2 padding and compression [RFC7540] in the same way that other HTTP/2 clients use (or don't use) them.

4.1.1. HTTP Request Examples

These examples use HTTP/2-style formatting from [RFC7540]. They use a DoH service with a URI Template of "https://dnsserver.example.net/dns-query{?dns}" to resolve IN A records. The requests are represented as bodies with media type "application/dns-message".

The first example request uses GET to request "www.example.com".

```
:method = GET
:scheme = https
:authority = dnsserver.example.net
:path = /dns-query?dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB
accept = application/dns-message
```

The same DNS query for "www.example.com", using the POST method, would be:

```
:method = POST
:scheme = https
:authority = dnsserver.example.net
:path = /dns-query
accept = application/dns-message
content-type = application/dns-message
content-length = 33

<33 bytes represented by the following hex encoding>
00 00 01 00 00 01 00 00 00 00 00 00 03 77 77 77
07 65 78 61 6d 70 6c 65 03 63 6f 6d 00 00 01 00
01
```

In this example, the 33 bytes are the DNS message in DNS wire format [RFC1035], starting with the DNS header.

Finally, a GET-based query for "a.62characterlabel-makes-base64url-distinct-from-standard-base64.example.com" is shown as an example to emphasize that the encoding alphabet of base64url is different than regular base64 and that padding is omitted. The DNS query, expressed in DNS wire format, is 94 bytes represented by the following:

```
00 00 01 00 00 01 00 00 00 00 00 00 01 61 3e 36
32 63 68 61 72 61 63 74 65 72 6c 61 62 65 6c 2d
6d 61 6b 65 73 2d 62 61 73 65 36 34 75 72 6c 2d
64 69 73 74 69 6e 63 74 2d 66 72 6f 6d 2d 73 74
61 6e 64 61 72 64 2d 62 61 73 65 36 34 07 65 78
61 6d 70 6c 65 03 63 6f 6d 00 00 01 00 01
```

```
:method = GET
:scheme = https
:authority = dnsserver.example.net
:path = /dns-query?dns=AAABAAABAAAAAAAAAWE-NjJjaGFyYWN0ZXJsYWJl  (no space or CR)
         bC1tYWtlcy1iYXNlNjR1cmwtZGlzdGluY3QtZnJvbS1z             (no space or CR)
         dGFuZGFyZC1iYXNlNjQHZXhhbXBsZQNjb20AAAEAAQ
accept = application/dns-message
```
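To make the GET encoding concrete, here is a minimal, non-normative sketch in Python (standard library only) that builds the wire-format query for "www.example.com" and expands the "dns" variable of the example URI Template. The flag value, the template path, and the output shown in the final comment simply reproduce the first example above; they are illustrative assumptions rather than requirements of this specification.

```python
import base64
import struct

def build_query(qname: str, qtype: int = 1, qclass: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035): ID 0 (as the
    text above recommends for cache friendliness), RD bit set, one
    question, and no EDNS options."""
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"  # root label terminates the QNAME
    return header + qname_wire + struct.pack("!HH", qtype, qclass)

def doh_get_path(template_path: str, wire_query: bytes) -> str:
    """Expand the "dns" variable with the base64url-encoded query,
    padding stripped (Section 6)."""
    dns = base64.urlsafe_b64encode(wire_query).rstrip(b"=").decode("ascii")
    return f"{template_path}?dns={dns}"

query = build_query("www.example.com")   # 33 bytes of DNS wire format
print(doh_get_path("/dns-query", query))
# /dns-query?dns=AAABAAABAAAAAAAAA3d3dwdleGFtcGxlA2NvbQAAAQAB
```

For a POST request, the same 33 bytes would instead be sent unencoded as the HTTP message body with a Content-Type of "application/dns-message".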
4.2. The HTTP Response

The only response type defined in this document is "application/dns-message", but it is possible that other response formats will be defined in the future. A DoH server MUST be able to process "application/dns-message" request messages.

Different response media types will provide more or less information from a DNS response. For example, one response type might include information from the DNS header bytes while another might omit it. The amount and type of information that a media type gives are solely up to the format, which is not defined in this protocol.

Each DNS request-response pair is mapped to one HTTP exchange. The responses may be processed and transported in any order using HTTP's multi-streaming functionality (see Section 5 of [RFC7540]).

Section 5.1 discusses the relationship between DNS and HTTP response caching.

4.2.1. Handling DNS and HTTP Errors

DNS response codes indicate either success or failure for the DNS query. A successful HTTP response with a 2xx status code (see Section 6.3 of [RFC7231]) is used for any valid DNS response, regardless of the DNS response code. For example, a successful 2xx HTTP status code is used even with a DNS message whose DNS response code indicates failure, such as SERVFAIL or NXDOMAIN.

HTTP responses with non-successful HTTP status codes do not contain replies to the original DNS question in the HTTP request. DoH clients need to use the same semantic processing of non-successful HTTP status codes as other HTTP clients. This might mean that the DoH client retries the query with the same DoH server, such as if there are authorization failures (HTTP status code 401; see Section 3.1 of [RFC7235]). It could also mean that the DoH client retries with a different DoH server, such as for unsupported media types (HTTP status code 415; see Section 6.5.13 of [RFC7231]), or where the server cannot generate a representation suitable for the client (HTTP status code 406; see Section 6.5.6 of [RFC7231]), and so on.

4.2.2. HTTP Response Example

This is an example response for a query for the IN AAAA records for "www.example.com" with recursion turned on. The response bears one answer record with an address of 2001:db8:abcd:12:1:2:3:4 and a TTL of 3709 seconds.

```
:status = 200
content-type = application/dns-message
content-length = 61
cache-control = max-age=3709

<61 bytes represented by the following hex encoding>
00 00 81 80 00 01 00 01 00 00 00 00 03 77 77 77
07 65 78 61 6d 70 6c 65 03 63 6f 6d 00 00 1c 00
01 c0 0c 00 1c 00 01 00 00 0e 7d 00 10 20 01 0d
b8 ab cd 00 12 00 01 00 02 00 03 00 04
```

5. HTTP Integration

This protocol MUST be used with the https URI scheme [RFC7230].

Sections 8 and 9 discuss additional considerations for the integration with HTTP.

5.1. Cache Interaction

A DoH exchange can pass through a hierarchy of caches that include both HTTP- and DNS-specific caches. These caches may exist between the DoH server and client, or they may exist on the DoH client itself. HTTP caches are generic by design; that is, they do not understand this protocol. Even if a DoH client has modified its cache implementation to be aware of DoH semantics, it does not follow that all upstream caches (for example, inline proxies, server-side gateways, and content delivery networks) will be.
As a result, DoH servers need to carefully consider the HTTP caching metadata they send in response to GET requests (responses to POST requests are not cacheable unless specific response header fields are sent; this is not widely implemented and is not advised for DoH). In particular, DoH servers SHOULD assign an explicit HTTP freshness lifetime (see Section 4.2 of [RFC7234]) so that the DoH client is more likely to use fresh DNS data. This requirement is due to HTTP caches being able to assign their own heuristic freshness (such as that described in Section 4.2.2 of [RFC7234]), which would take control of the cache contents out of the hands of the DoH server. The assigned freshness lifetime of a DoH HTTP response MUST be less than or equal to the smallest TTL in the Answer section of the DNS response. A freshness lifetime equal to the smallest TTL in the Answer section is RECOMMENDED. For example, if a HTTP response carries three RRsets with TTLs of 30, 600, and 300, the HTTP freshness lifetime should be 30 seconds (which could be specified as "Cache-Control: max-age=30"). This requirement helps prevent expired RRsets in messages in an HTTP cache from unintentionally being served. If the DNS response has no records in the Answer section, and the DNS response has an SOA record in the Authority section, the response freshness lifetime MUST NOT be greater than the MINIMUM field from that SOA record (see [RFC2308]). The stale-while-revalidate and stale-if-error Cache-Control directives [RFC5861] could be well suited to a DoH implementation when allowed by server policy. Those mechanisms allow a client, at the server’s discretion, to reuse an HTTP cache entry that is no longer fresh. In such a case, the client reuses either all of a cached entry or none of it. DoH servers also need to consider HTTP caching when generating responses that are not globally valid. For instance, if a DoH server customizes a response based on the client’s identity, it would not want to allow global reuse of that response. This could be accomplished through a variety of HTTP techniques, such as a Cache-Control max-age of 0, or by using the Vary response header field (see Section 7.1.4 of [RFC7231]) to establish a secondary cache key (see Section 4.1 of [RFC7234]). DoH clients MUST account for the Age response header field’s value [RFC7234] when calculating the DNS TTL of a response. For example, if an RRset is received with a DNS TTL of 600, but the Age header field indicates that the response has been cached for 250 seconds, the remaining lifetime of the RRset is 350 seconds. This requirement applies to both DoH client HTTP caches and DoH client DNS caches. DoH clients can request an uncached copy of a HTTP response by using the "no-cache" request Cache-Control directive (see Section 5.2.1.4 of [RFC7234]) and similar controls. Note that some caches might not honor these directives, either due to configuration or interaction with traditional DNS caches that do not have such a mechanism. HTTP conditional requests [RFC7232] may be of limited value to DoH, as revalidation provides only a bandwidth benefit and DNS transactions are normally latency bound. Furthermore, the HTTP response header fields that enable revalidation (such as "Last- Modified" and "Etag") are often fairly large when compared to the overall DNS response size and have a variable nature that creates constant pressure on the HTTP/2 compression dictionary [RFC7541]. 
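As a rough, non-normative illustration of the freshness rules in this section, the sketch below (Python, standard library only) derives a max-age value from the Answer-section TTLs (falling back to the SOA MINIMUM for empty answers) and discounts a cached record's TTL by the Age response header. The plain integer inputs are simplifying assumptions, not a real DNS data model.

```python
from typing import Optional

def freshness_lifetime(answer_ttls: list[int],
                       soa_minimum: Optional[int] = None) -> int:
    """Pick an HTTP freshness lifetime (Cache-Control max-age) for a DoH
    GET response: the smallest Answer-section TTL, or the SOA MINIMUM
    when the Answer section is empty."""
    if answer_ttls:
        return min(answer_ttls)
    if soa_minimum is not None:
        return soa_minimum
    return 0  # nothing to base a lifetime on; effectively uncacheable

def remaining_ttl(record_ttl: int, age_header: int) -> int:
    """Account for the Age response header when a DoH client computes
    the TTL it should still honour (TTL 600 with Age 250 -> 350)."""
    return max(record_ttl - age_header, 0)

# Example from the text: RRset TTLs of 30, 600 and 300 give max-age=30.
assert freshness_lifetime([30, 600, 300]) == 30
assert remaining_ttl(600, 250) == 350
```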
Other types of DNS data, such as zone transfers, may be larger and benefit more from revalidation.

5.2. HTTP/2

HTTP/2 [RFC7540] is the minimum RECOMMENDED version of HTTP for use with DoH. The messages in classic UDP-based DNS [RFC1035] are inherently unordered and have low overhead. A competitive HTTP transport needs to support reordering, parallelism, priority, and header compression to achieve similar performance. Those features were introduced to HTTP in HTTP/2 [RFC7540]. Earlier versions of HTTP are capable of conveying the semantic requirements of DoH but may result in very poor performance.

5.3. Server Push

Before using DoH response data for DNS resolution, the client MUST establish that the HTTP request URI can be used for the DoH query. For HTTP requests initiated by the DoH client, this is implicit in the selection of URI. For HTTP server push (see Section 8.2 of [RFC7540]), extra care must be taken to ensure that the pushed URI is one that the client would have directed the same query to if the client had initiated the request (in addition to the other security checks normally needed for server push).

5.4. Content Negotiation

In order to maximize interoperability, DoH clients and DoH servers MUST support the "application/dns-message" media type. Other media types MAY be used as defined by HTTP Content Negotiation (see Section 3.4 of [RFC7231]). Those media types MUST be flexible enough to express every DNS query that would normally be sent in DNS over UDP (including queries and responses that use DNS extensions, but not those that require multiple responses).

6. Definition of the "application/dns-message" Media Type

The data payload for the "application/dns-message" media type is a single message of the DNS on-the-wire format defined in Section 4.2.1 of [RFC1035], which in turn refers to the full wire format defined in Section 4.1 of that RFC. Although [RFC1035] says "Messages carried by UDP are restricted to 512 bytes", that was later updated by [RFC6891]. This media type restricts the maximum size of the DNS message to 65535 bytes. Note that the wire format used in this media type is different than the wire format used in [RFC7858] (which uses the format defined in Section 4.2.2 of [RFC1035] that includes two length bytes).

DoH clients using this media type MAY have one or more Extension Mechanisms for DNS (EDNS) options [RFC6891] in the request. DoH servers using this media type MUST ignore the value given for the EDNS UDP payload size in DNS requests.

When using the GET method, the data payload for this media type MUST be encoded with base64url [RFC4648] and then provided as a variable named "dns" to the URI Template expansion. Padding characters for base64url MUST NOT be included. When using the POST method, the data payload for this media type MUST NOT be encoded and is used directly as the HTTP message body.

7. IANA Considerations

7.1. Registration of the "application/dns-message" Media Type

Type name: application

Subtype name: dns-message

Required parameters: N/A

Optional parameters: N/A

Encoding considerations: This is a binary format. The contents are a DNS message as defined in RFC 1035. The format used here is for DNS over UDP, which is the format defined in the diagrams in RFC 1035.

Security considerations: See RFC 8484. The content is a DNS message and thus not executable code.

Interoperability considerations: None.

Published specification: RFC 8484.

Applications that use this media type: Systems that want to exchange full DNS messages.

8.
Privacy Considerations [RFC7626] discusses DNS privacy considerations in both "on the wire" (Section 2.4 of [RFC7626]) and "in the server" (Section 2.5 of [RFC7626]) contexts. This is also a useful framing for DoH’s privacy considerations. 8.1. On the Wire DoH encrypts DNS traffic and requires authentication of the server. This mitigates both passive surveillance [RFC7258] and active attacks that attempt to divert DNS traffic to rogue servers (see Section 2.5.1 of [RFC7626]). DNS over TLS [RFC7858] provides similar protections, while direct UDP- and TCP-based transports are vulnerable to this class of attack. An experimental effort to offer guidance on choosing the padding length can be found in [RFC8467]. Additionally, the use of the HTTPS default port 443 and the ability to mix DoH traffic with other HTTPS traffic on the same connection can deter unprivileged on-path devices from interfering with DNS operations and make DNS traffic analysis more difficult. 8.2. In the Server The DNS wire format [RFC1035] contains no client identifiers; however, various transports of DNS queries and responses do provide data that can be used to correlate requests. HTTPS presents new considerations for correlation, such as explicit HTTP cookies and implicit fingerprinting of the unique set and ordering of HTTP request header fields. A DoH implementation is built on IP, TCP, TLS, and HTTP. Each layer contains one or more common features that can be used to correlate queries to the same identity. DNS transports will generally carry the same privacy properties of the layers used to implement them. For example, the properties of IP, TCP, and TLS apply to implementations of DNS over TLS. The privacy considerations of using the HTTPS layer in DoH are incremental to those of DNS over TLS. DoH is not known to introduce new concerns beyond those associated with HTTPS. At the IP level, the client address provides obvious correlation information. This can be mitigated by use of a NAT, proxy, VPN, or simple address rotation over time. It may be aggravated by use of a DNS server that can correlate real-time addressing information with other personal identifiers, such as when a DNS server and DHCP server are operated by the same entity. DNS implementations that use one TCP connection for multiple DNS requests directly group those requests. Long-lived connections have better performance behaviors than short-lived connections; however, they group more requests, which can expose more information to correlation and consolidation. TCP-based solutions may also seek performance through the use of TCP Fast Open [RFC7413]. The cookies used in TCP Fast Open allow servers to correlate TCP sessions. TLS-based implementations often achieve better handshake performance through the use of some form of session resumption mechanism, such as Section 2.2 of [RFC8446]. Session resumption creates trivial mechanisms for a server to correlate TLS connections together. HTTP’s feature set can also be used for identification and tracking in a number of different ways. For example, Authentication request header fields explicitly identify profiles in use, and HTTP cookies are designed as an explicit state-tracking mechanism between the client and serving site and often are used as an authentication mechanism. Additionally, the User-Agent and Accept-Language request header fields often convey specific information about the client version or locale. This facilitates content negotiation and operational workarounds for implementation bugs. 
Request header fields that control caching can expose state information about a subset of the client’s history. Mixing DoH requests with other HTTP requests on the same connection also provides an opportunity for richer data correlation. The DoH protocol design allows applications to fully leverage the HTTP ecosystem, including features that are not enumerated here. Utilizing the full set of HTTP features enables DoH to be more than an HTTP tunnel, but it is at the cost of opening up implementations to the full set of privacy considerations of HTTP. Implementations of DoH clients and servers need to consider the benefit and privacy impact of these features, and their deployment context, when deciding whether or not to enable them. Implementations are advised to expose the minimal set of data needed to achieve the desired feature set. Determining whether or not a DoH implementation requires HTTP cookie [RFC6265] support is particularly important because HTTP cookies are the primary state tracking mechanism in HTTP. HTTP cookies SHOULD NOT be accepted by DOH clients unless they are explicitly required by a use case. 9. Security Considerations Running DNS over HTTPS relies on the security of the underlying HTTP transport. This mitigates classic amplification attacks for UDP-based DNS. Implementations utilizing HTTP/2 benefit from the TLS profile defined in Section 9.2 of [RFC7540]. Session-level encryption has well-known weaknesses with respect to traffic analysis, which might be particularly acute when dealing with DNS queries. HTTP/2 provides further advice about the use of compression (see Section 10.6 of [RFC7540]) and padding (see Section 10.7 of [RFC7540]). DoH servers can also add DNS padding [RFC7830] if the DoH client requests it in the DNS query. An experimental effort to offer guidance on choosing the padding length can be found in [RFC8467]. The HTTPS connection provides transport security for the interaction between the DoH server and client, but it does not provide the response integrity of DNS data provided by DNSSEC. DNSSEC and DoH are independent and fully compatible protocols, each solving different problems. The use of one does not diminish the need nor the usefulness of the other. It is the choice of a client to either perform full DNSSEC validation of answers or to trust the DoH server to do DNSSEC validation and inspect the AD (Authentic Data) bit in the returned message to determine whether an answer was authentic or not. As noted in Section 4.2, different response media types will provide more or less information from a DNS response, so this choice may be affected by the response media type. Section 5.1 describes the interaction of this protocol with HTTP caching. An adversary that can control the cache used by the client can affect that client’s view of the DNS. This is no different than the security implications of HTTP caching for other protocols that use HTTP. In the absence of DNSSEC information, a DoH server can give a client invalid data in response to a DNS query. Section 3 disallows the use of DoH DNS responses that do not originate from configured servers. This prohibition does not guarantee protection against invalid data, but it does reduce the risk. 10. Operational Considerations Local policy considerations and similar factors mean different DNS servers may provide different results to the same query, for instance, in split DNS configurations [RFC6950]. It logically follows that the server that is queried can influence the end result. 
Therefore, a client's choice of DNS server may affect the responses it gets to its queries. For example, in the case of DNS64 [RFC6147], the choice could affect whether IPv6/IPv4 translation will work at all.

The HTTPS channel used by this specification establishes secure two-party communication between the DoH client and the DoH server. Filtering or inspection systems that rely on unsecured transport of DNS will not function in a DNS over HTTPS environment due to the confidentiality and integrity protection provided by TLS.

Some HTTPS client implementations perform real-time third-party checks of the revocation status of the certificates being used by TLS. If this check is done as part of the DoH server connection procedure and the check itself requires DNS resolution to connect to the third party, a deadlock can occur. The use of Online Certificate Status Protocol (OCSP) [RFC6960] servers or Authority Information Access (AIA) for Certificate Revocation List (CRL) fetching (see Section 4.2.2.1 of [RFC5280]) are examples of how this deadlock can happen. To mitigate the possibility of deadlock, the authentication given to DoH servers SHOULD NOT rely on DNS-based references to external resources in the TLS handshake. For OCSP, the server can bundle the certificate status as part of the handshake using a mechanism appropriate to the version of TLS, such as using Section 4.4.2.1 of [RFC8446] for TLS version 1.3. AIA deadlocks can be avoided by providing intermediate certificates that might otherwise be obtained through additional requests. Note that these deadlocks also need to be considered for servers that a DoH server might redirect to.

A DoH client may face a similar bootstrapping problem when the HTTP request needs to resolve the hostname portion of the DoH URI. Just as the address of a traditional DNS nameserver cannot be originally determined from that same server, a DoH client cannot use its DoH server to initially resolve the server's host name into an address. Alternative strategies a client might employ include 1) making the initial resolution part of the configuration, 2) IP-based URIs and corresponding IP-based certificates for HTTPS, or 3) resolving the DoH server's hostname via traditional DNS or another DoH server while still authenticating the resulting connection via HTTPS.

HTTP [RFC7230] is a stateless application-level protocol, and therefore DoH implementations do not provide stateful ordering guarantees between different requests. DoH cannot be used as a transport for other protocols that require strict ordering.

A DoH server is allowed to answer queries with any valid DNS response. For example, a valid DNS response might have the TC (truncation) bit set in the DNS header to indicate that the server was not able to retrieve a full answer for the query but is providing the best answer it could get. A DoH server can reply to queries with an HTTP error for queries that it cannot fulfill. In this same example, a DoH server could use an HTTP error instead of a non-error response that has the TC bit set.

Many extensions to DNS, using [RFC6891], have been defined over the years. Extensions that are specific to the choice of transport, such as [RFC7828], are not applicable to DoH.

11. References

11.1. Normative References

11.2. Informative References

Appendix A. Protocol Development

This appendix describes the requirements used to design DoH.
These requirements are listed here to help readers understand the current protocol, not to limit how the protocol might be developed in the future. This appendix is non-normative. The protocol described in this document based its design on the following protocol requirements: - The protocol must use normal HTTP semantics. - The queries and responses must be able to be flexible enough to express every DNS query that would normally be sent in DNS over UDP (including queries and responses that use DNS extensions, but not those that require multiple responses). - The protocol must permit the addition of new formats for DNS queries and responses. - The protocol must ensure interoperability by specifying a single format for requests and responses that is mandatory to implement. That format must be able to support future modifications to the DNS protocol including the inclusion of one or more EDNS options (including those not yet defined). - The protocol must use a secure transport that meets the requirements for HTTPS. The following were considered non-requirements: - Supporting network-specific DNS64 [RFC6147] - Supporting other network-specific inferences from plaintext DNS queries - Supporting insecure HTTP Appendix B. Previous Work on DNS over HTTP or in Other Formats The following is an incomplete list of earlier work that related to DNS over HTTP/1 or representing DNS data in other formats. The list includes links to the tools.ietf.org site (because these documents are all expired) and web sites of software. Acknowledgments This work required a high level of cooperation between experts in different technologies. Thank you Ray Bellis, Stephane Bortzmeyer, Manu Bretelle, Sara Dickinson, Massimiliano Fantuzzi, Tony Finch, Daniel Kahn Gilmor, Olafur Gudmundsson, Wes Hardaker, Rory Hewitt, Joe Hildebrand, David Lawrence, Eliot Lear, John Mattsson, Alex Mayrhofer, Mark Nottingham, Jim Reid, Adam Roach, Ben Schwartz, Davey Song, Daniel Stenberg, Andrew Sullivan, Martin Thomson, and Sam Weiler. Authors’ Addresses Paul Hoffman ICANN Email: paul.hoffman@icann.org Patrick McManus Mozilla Email: mcmanus@ducksong.com
{"Source-Url": "http://potaroo.net/ietf/rfc/PDF/rfc8484.pdf", "len_cl100k_base": 7438, "olmocr-version": "0.1.48", "pdf-total-pages": 22, "total-fallback-pages": 0, "total-input-tokens": 42983, "total-output-tokens": 10919, "length": "2e12", "weborganizer": {"__label__adult": 0.0004091262817382813, "__label__art_design": 0.0003986358642578125, "__label__crime_law": 0.00087738037109375, "__label__education_jobs": 0.0010995864868164062, "__label__entertainment": 0.00024437904357910156, "__label__fashion_beauty": 0.0002053976058959961, "__label__finance_business": 0.000942707061767578, "__label__food_dining": 0.00035452842712402344, "__label__games": 0.0008068084716796875, "__label__hardware": 0.0036869049072265625, "__label__health": 0.0005764961242675781, "__label__history": 0.0006756782531738281, "__label__home_hobbies": 8.022785186767578e-05, "__label__industrial": 0.0005640983581542969, "__label__literature": 0.0006165504455566406, "__label__politics": 0.0005354881286621094, "__label__religion": 0.0006427764892578125, "__label__science_tech": 0.34375, "__label__social_life": 0.00015592575073242188, "__label__software": 0.1605224609375, "__label__software_dev": 0.481201171875, "__label__sports_fitness": 0.00031447410583496094, "__label__transportation": 0.0009984970092773438, "__label__travel": 0.00033545494079589844}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38711, 0.05846]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38711, 0.62892]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38711, 0.85827]], "google_gemma-3-12b-it_contains_pii": [[0, 1411, false], [1411, 3305, null], [3305, 5335, null], [5335, 7457, null], [7457, 9250, null], [9250, 10561, null], [10561, 12575, null], [12575, 14538, null], [14538, 17142, null], [17142, 19236, null], [19236, 20875, null], [20875, 22222, null], [22222, 24671, null], [24671, 27099, null], [27099, 29638, null], [29638, 31865, null], [31865, 33718, null], [33718, 35052, null], [35052, 36397, null], [36397, 38097, null], [38097, 38711, null], [38711, 38711, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1411, true], [1411, 3305, null], [3305, 5335, null], [5335, 7457, null], [7457, 9250, null], [9250, 10561, null], [10561, 12575, null], [12575, 14538, null], [14538, 17142, null], [17142, 19236, null], [19236, 20875, null], [20875, 22222, null], [22222, 24671, null], [24671, 27099, null], [27099, 29638, null], [29638, 31865, null], [31865, 33718, null], [33718, 35052, null], [35052, 36397, null], [36397, 38097, null], [38097, 38711, null], [38711, 38711, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38711, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 
5000, false], [5000, 38711, null]], "pdf_page_numbers": [[0, 1411, 1], [1411, 3305, 2], [3305, 5335, 3], [5335, 7457, 4], [7457, 9250, 5], [9250, 10561, 6], [10561, 12575, 7], [12575, 14538, 8], [14538, 17142, 9], [17142, 19236, 10], [19236, 20875, 11], [20875, 22222, 12], [22222, 24671, 13], [24671, 27099, 14], [27099, 29638, 15], [29638, 31865, 16], [31865, 33718, 17], [33718, 35052, 18], [35052, 36397, 19], [36397, 38097, 20], [38097, 38711, 21], [38711, 38711, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38711, 0.0]]}
The KITE Model for Assessment of Academic Software Products Abstract We reflect on the topic of assessing the merit of software products developed by research groups within the academia. To this end, a model is proposed to define the score of an arbitrary software product. The model consists of four determinants, namely new knowledge dissemination effect (K), impact in target population (I), technological innovation (T), and engineering achievement (E). These determinants are integrated into a "KITE" graphical model. The model admits both geometric and numeric interpretations, enabling decision makers to analyze profiles of software productivity for a particular academic unit from a quantitative or qualitative viewpoint. The ratings, which enable software to be scored regarding each determinant, are also described. Following the model, preliminary test lists are sketched as a proposal of measurement instruments for these scores. Key words: Assessment of software products, technological innovation, academic groups productivity. Resumen Se presenta a continuación una propuesta para Valorar productos de software desarrollados por grupos de investigación en el ámbito académico. Con este objetivo, se describe un modelo que consiste de cuatro ejes determinantes para la medición o valoración de un producto de software cualquiera: El efecto en la diseminación avance en el conocimiento (K), el impacto en la población usuaria potencial (I), la innovación tecnológica (T) y los aspectos de calidad del producto desde la perspectiva de la disciplina de la Ingeniería de Software (E). Estos determinantes se integran en un modelo gráfico que hemos denominado KITE. El modelo admite tanto interpretación numérica como geométrica para facilitar, a los tomadores de decisiones, el análisis de perfiles de productividad de software de una unidad académica, desde el punto de vista cuantitativo o cualitativo. Las escalas de las valoraciones para cada determinante son también descritas de manera sucinta y están acompañadas de listas de chequeo preliminares, esquematizadas como una propuesta de instrumentalización de la medición en concordancia con el modelo. Palabras claves: Evaluación de productos de software, innovación tecnológica, productividad de grupos académicos. 1. Introduction Productivity of research groups is associated with the amount and calibre of their intellectual output, particularly in the form of research papers, books, patents, screenplays, musical compositions, and so forth. Quality and visibility assessment of such output is achieved usually by means of self-regulatory dynamics of knowledge dissemination within academic communities. For example, an influential paper is expected to be published in a journal with a high impact factor, which guarantees a strict peer-review examination by some of the fellow experts in the corresponding field. Another example would be the case of a classic book whose permanent reader demand requires the published in several editions on a regular basis. Similar dynamics would apply to the other products (patents, scripts, scores). In other words, the assessment of these products is a relatively straightforward task using such implicit, and at the same time, objective mechanism (although well-documented criticisms and flaws have been presented for established metrics such as the impact factor [8], [11], [13], [16]). In contrast, the case of academic software products seems blurry. 
This is because software is a product usually confined to the limits of industrial output, not faculty output. To the best of our knowledge there is no clear definition of ratings for quality, dissemination, usefulness, innovativeness or other software attributes in an academic context. On the other hand, however, software can be an important factor when defining aspects such as the orientation of an academic unit (scientific or technological), its target population (local, national, worldwide), its standards of community interaction and so on. Therefore, a model for objective evaluation and assessment of such products is necessary and must be made clear and available to the community. As with any measurement instrument, such a model should allow calibration of the software productivity of a particular academic unit, and also be helpful in guiding its efforts towards the production of high-quality, valuable and user-friendly academic software.

The latter is a challenging task. Software evaluation is usually confined to the industry, where a wide range of metrics and estimation models have been proposed and consolidated [6], [12], [15]. There, software development is driven by profits. Within academia, however, software is developed on the basis of its contribution to research projects. In this context, software is driven by knowledge and innovation. The purpose of this paper is to discuss these elements and organize them into a model that can be used to evaluate the merit of software products and, moreover, that serves as a tool for academic decision-makers when identifying the profiles of software productivity in their faculty, and also when making suitable policies to support, promote and reward any achievement accordingly.

2. The model

The difficulty in defining a model for software assessment within academia lies in the fact that, besides the intrinsic complexity of software development, there are research-related factors that must be taken into account. Some examples of the sort of questions that may arise in this regard are as follows: Is the software product implementing and distributing new ideas or technologies in a given field or subject? Is the software product needed by, relevant to or widely usable by the community it was designed for? Is the software product robust, fast, well-documented, available, safe, reliable, reusable? Or even better, to what extent is the software product complying with all of the previous attributes?

Our attempt to answer these questions is described in the following. We devised four determinants that are relevant to define the value of a software product within academia, namely, new knowledge dissemination effect (K), impact in target population (I), technological innovation (T), and engineering practices adopted during its development (E). These determinants can be geometrically combined as the axes of four quadrants (K-I, I-T, T-E, and E-K) in a two-dimensional plane. We assume an arbitrary software product can be rated in each determinant; then, by joining with straight lines the scoring marks in each determinant, a frame is obtained whose inner area would define the product's final score (assessment). Such a frame visually resembles the shape of a rhomboidal kite, which inspired the name of the model (see Figure 1).
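As a simple illustration of this geometric reading, the following sketch (Python, chosen here only for brevity) computes the kite area from the four ratings. The rating ranges in the assertion anticipate those defined later in Section 3, and the closed form anticipates Equation (1); the snippet is illustrative, not part of the model's formal definition.

```python
def kite_score(k: float, i: float, t: float, e: float) -> float:
    """Total KITE score: the summed area of the K-E, E-T, T-I and I-K
    triangles, which simplifies to (K + T) * (E + I) / 2."""
    assert 0 <= k <= 4 and 0 <= i <= 4 and 0 <= t <= 6 and 0 <= e <= 16, \
        "rating outside the ranges proposed in Section 3"
    return 0.5 * (k + t) * (e + i)

# A first-rate product (E=16, K=4, T=6, I=4) obtains the full 100 points.
print(kite_score(k=4, i=4, t=6, e=16))  # 100.0
```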
As we shall discuss in the next section, the maximum ratings in each determinant would be different, as also would be the actual contribution to the total area from the triangular regions in each quadrant. Hence, the maximum contributions for a total score (area) of 100% according to the efforts devoted to the development as well as to the outlook for potential use of the software, were appraised as follows: - **E-T quadrant.** The area of this component would depend on the quality of software engineering attributes adopted during product development, and also on the degree of technological innovation it comprises. Its measurement would be firmly supported by the standards embraced within the software industry, particularly in terms of what is commonly accepted as good engineering practices as well as the widely-known innovation frameworks. The ratings of the product ![Figure 1: The KITE model for software assessment](image-url) --- 1 We remark that the estimation of these proportions was made according to our expectations on the merits of each determinant for a first-class research-oriented academic institution. However, these proportions can be thought of as model parameters adjustable to other academic institution profiles (training-only, technical or vocational schools). regarding these aspects would be the core of the assessment model, thus allowing a maximum contribution to the total score of 48%. - **K-E quadrant.** In the context of academia, software production would be ideally closely related to research. The purpose of measuring this aspect is to grant additional merit to high-quality software intended to support or promote the dissemination and application of new knowledge. Consequently, a relevant contribution to the total score of up to 32% is allocated to this component. - **I-T quadrant.** The incorporation of the I determinant is aimed at promoting wider distribution and awareness of academic software. Therefore, this component is intended to evaluate the scope and impact on local and external target communities, where the software product may become useful technology. We decided to associate a maximum contribution of 12% of the total score to this aspect. - **K-I quadrant.** Similar to the I-T component, the purpose here is to give some credit to the extent and capacity of the software to disseminate new embodied knowledge (if any) to its intended research audience. Although secondary from an industrial-oriented viewpoint, this aspect is regarded as a particularly important goal for academic-oriented software. Hence, a minor yet relevant maximum contribution of 8% to the total score is assigned. Now, an arbitrary software product can be assessed by computing the score \( S_\delta \) obtained from the sum of the areas in the resulting triangular regions in each quadrant. Let us denote the ratings of the software product in each determinant as \( K, I, T, E, \) and the area of triangles \( K-E-T \) and \( K-I-T \) as \( \Delta_E \) and \( \Delta_I \) respectively; then the final score is straightforward to compute: \[ S_\delta = \Delta_E + \Delta_I = \left( \frac{KE}{2} + \frac{TE}{2} \right) + \left( \frac{KI}{2} + \frac{TI}{2} \right) = \frac{1}{2}(K+T)(E+I) \] (1) It is worth noting that the model can be regarded from two different points of view. In the first one, the kite can be split into an upper half and a bottom half. In this view the maximum contribution of the \( K-E-T \) triangle (upper half) accounts for 80% of the total score. 
This percentage would be highly correlated with the E score, in other words, with the software engineering effort and practices involved during the development of the product, which is intuitively the most relevant aspect to be assessed in a software product. Notice that the final contribution of this triangle would be modulated by the degree of technological innovation as well as the new-knowledge injection. On the other hand, the \( K-I-T \) triangle (bottom half) contribution represents the impact of the software product in terms of visibility and usage, once again modulated by the \( K \) and \( T \) scores. The latter is an important aspect of communication in academia, which motivated the inclusion of this component. Nevertheless, we consider it a less relevant target for software development, hence the smaller allocation from the total score (a maximum of 20%).

From another point of view, the kite can be split into a right-hand half and a left-hand half. In this view, the $E·T·I$ triangle (right half) would account for a maximum of 60% of the total score. This reflects the common scenario of a software product that becomes a technology of choice, again depending on its engineering maturity and visibility impact. A higher weight in the final scoring is assigned to this aspect. The $E·K·I$ triangle (left half), however, also makes a contribution of up to 40% of the total score. This is specifically aimed at academic contexts where promoting the association between software production and research is extremely relevant.

In order to compute Equation (1), we have designed a number of tests that determine the merit of the software product in each determinant. The criteria, range, ratings and rationale of such tests are discussed next.

### 3 Determinants range and ratings

Let us assume that the finest academic software product will shape up a kite with an area equivalent to 100 points. Thus, we define the following ranges for the ratings in each determinant: $$E \in [0, 16], K \in [0, 4], T \in [0, 6], I \in [0, 4].$$ Observe that within those ranges, a first-rate academic software product $S^*$ would get the ratings $E = 16, K = 4, T = 6, I = 4$. By setting up unitary scales in each axis, the KITE model of $S^*$ would be rendered as in Figure 2.

![Figure 2: The unnormalised kite for $S^*$](image-url)

Notice that despite the irregularity of the shapes of the resulting triangles, this arrangement will preserve the maximum proportions of contributions mentioned in Figure 1. In fact, the area (score) of $S^*$ would amount to

\[ S^*_\oplus = \Delta_{KE} + \Delta_{ET} + \Delta_{TI} + \Delta_{IK} = \left( \frac{16(4)}{2} + \frac{16(6)}{2} + \frac{4(6)}{2} + \frac{4(4)}{2} \right) = 100, \]

which also equals the score that would have been computed through Equation (1). The remainder of this section focuses on the rationale behind these ranges, as well as on the proposal of tests designed to measure the ratings in each determinant. Assuming that engineering should be the crux of the software creation process, we shall proceed first with the E determinant.

### 3.1 The E determinant

Academic software is, by its very nature, commonly regarded as early prototypes of proof-of-concept or proof-of-technology endeavours that stem from non-industrial academic factories (in the best scenario) or, more frequently, from academic research groups. Nevertheless, we believe that academic software production must be guided by the principles of software engineering so as to guarantee, to a certain extent, the development of high-quality products that would eventually embark on a feasible trail of future industrial development.
The more quality attributes the product achieves, the more potential benefit or profit the academic group receives in return for its invested efforts and costs. In the light of such remarks, this determinant is pivotal to the model, since it measures the degree of fulfilment of software engineering practices adopted during the making of the product. Its purpose is to motivate compliance with minimal standards in order to guarantee the development of valuable software, software that would be really helpful or appreciated by its target community.

The measurement instrument of this determinant is inspired by established and widely-known software estimation models in the industry. The instrument consists of a series of checklists for a number of technical attributes defined in [3]. The designs of these tests are based on the models of [6], [12], [15], adapted in scope and pertinence to an academic context. The definitions of each quality attribute \( E_i \) are given below, where each \( E_i \) is a number in \([0, 2]\). The tests designed as measurement tools for these attributes are provided in different guides or technical books such as [14], [12], while the preliminary KITE checklists for this determinant are shown in Appendix B.

- **E1**: Robustness. Resistance against improper, malicious or illegitimate inputs or operating environments for the software.
- **E2**: Maintainability-Extensibility. Simplicity in updating the software product either by adding new features or changing existing (possibly flawed) features, or else, in scaling up its capabilities.
- **E3**: Performance. Efficiency in managing machine resources (processor time, memory, bandwidth, etc.) in order to accomplish the intended purpose of the software, especially for large data volumes.
- **E4**: Usability. User-friendliness: how easy or convenient to use the software product actually is.
- **E5**: Integrity. The quality of maintaining consistency as well as safeguarding the information processed by the software product.
- **E6**: Portability. Possibility of running the software product on more than one operating system or hardware platform with minimal effort.
- **E7**: Compatibility. Support of input and output data formats and persistence schemes used by the same software in previous versions, or by other related software tools, without major conversion or modifications required.
- **E8**: Documentation. The availability of technical documentation related to the development and utilisation of the software product (design diagrams, listings, test reports, manuals, user guides, online help, etc.).

The final rating in this determinant would be given by the sum in Equation (2). The range is clearly $E \in [0, 16]$.

\[ E = \sum_{i=1}^{8} E_i \] (2)

### 3.2 The K determinant

In contrast to industry settings, research-driven development of software can be a relevant goal for an academic unit. Consequently, the model was designed so as to assign some merit to a software product developed as a dissemination device for new knowledge originated in either basic or applied research.
In this respect the $K$ determinant modulates the contribution of the $E$ determinant to the overall assessment of the product, since the contribution of the corresponding quadrant is $\Delta_{KE} \equiv \frac{KE}{2}$. In other words, the more firmly engineered and research-supporting the product is, the higher the score it will obtain. In scoring this determinant we took some inspiration from the widely accepted practice of publishing scholarly papers, which is closely related to the premise of knowledge dissemination stated above. Thus, the following two criteria were defined:

- **$K_0$**: Ingenuity. Is the software product a realization of a previously unknown natural, social, organizational, scientific, algorithmic or computing model?
- **$K_1$**: Dissemination. To what extent has the fundamental research associated with the software product been communicated to the relevant scientific community?

The first criterion is an indicator variable that characterises the software product as either the output of a research study or not ($K_0 \in \{0, 1\}$). The second criterion is associated with the dissemination of the research foundation that prompted the creation of the software product, i.e. its publication in scholarly peer-reviewed journals, conferences or scientific repositories. The range of this variable is $K_1 \in [0, 4]$. The score of the determinant is computed using Equation (3). Evidently, $K \in [0, 4]$. The checklists designed to rate these criteria are given in Appendix A.

$$K = K_0 K_1$$ (3)

### 3.3 The $T$ determinant

This determinant is aimed at measuring the degree of technological innovation achieved through the software product. One of the difficulties in defining this measuring aspect is that there is no definitive agreement regarding the meaning of innovation in the software industry [4]. Based on discussions reported recently in the literature [4], [5], [9], [10] and also on our own experience in the field, we settled on the following viewpoint.

Key to the concept of innovation is the formulation of a novel idea. With respect to the problem at hand, a novel idea would be realized when the software product proposes a new usage of a known technology (e.g. in application software, when transferring technology from one field of application to another, or from one community to another), or when the software itself yields new technology (especially in system or embedded software, when new architectures, protocols or models of computation are proposed). Thus, technological innovation would refer to the process of embarking on, testing, adjusting and refining that novel idea, in such a way that its materialisation within a new context or process produces a positive effect [9].

We focus now on defining how to measure technological innovation. We restrict ourselves to the concept of product innovation since other facets of innovation (process, marketing, or organizational) are, in our opinion, not intrinsically relevant to a software product. Building upon the approach in [10], which has been rigorously validated in the industry, the following aspects are considered, suitably adjusted to the context of academic software (they are denoted $\{T_i\}_{i=1}^6$ with $T_i \in [0, 1]$). The corresponding checklists are reported in Appendix D.

$T_1$: **Novelty.** Does the software product embody new technology or a previously unknown application (to a field or problem) of a known technology?

$T_2$: **Scope.** Is the software product new to the world/country/academic institution?
$T_3$: **Competitiveness.** In what ways does the software product outperform other known similar products or previous versions (aesthetic/core/performance)?

$T_4$: **Continuous improvement.** To what extent are the software attributes improved over previous versions (in any attribute $E_1$ to $E_8$, or in saving costs)?

### 3.4 The I determinant

This determinant was included to adjust the assessment of the software product with respect to aspects such as visibility, availability, openness and utilisation by its target community. These aspects are combined into what we regard as the "impact" of the software product. The rationale is that the merit of the software product should be proportional not only to the quality of the software per se but also to the impact it is having, or might have, through its latest or future releases. In this sense the model attempts to promote, on the one hand, continuity of software projects for the academic group that creates the software product (to allow improvement through new technologies and functionalities as the product spreads and popularises within its target community). On the other hand, this will motivate the academic units to evaluate the relevance, effect and scope of their associated software authors or factories on a regular basis.

The definition of these criteria, denoted \( \{I_i\}_{i=1}^4 \), with \( I_i \in [0, 1] \), was tailored to the context of academia as explained below. The checklists designed to measure this determinant are provided in Appendix C.

\( I_1 \): Coverage. The extent to which the software product is visible to its intended audience (local/regional/world-wide).

\( I_2 \): Availability. Convenient support regarding the deployment of the software product (release/versioning/download/installation).

\( I_3 \): Utilisation. Evidence of usage and positive feedback from the community (academic/otherwise).

\( I_4 \): Openness. The level of restriction when distributing, using or changing the source code of the software product (open source/source available/proprietary).

The final rating associated with this determinant is computed using Equation (5), resulting in a range \( I \in [0, 4] \).

4. Conclusions

Increasing rates of academic research output, driven by the growth of the corresponding software production lines, motivate the proposal of comprehensive and impartial models for the evaluation of such products so as to provide unbiased assessment of their quality, impact, innovation and originality. Furthermore, such models may become useful tools for researchers and decision-makers alike. Researchers can use them to open up new perspectives in considering academic software development seriously (as one of their career aims). Decision-makers can use them to identify the suitable software production profiles of their institutions, and also to support and reward their faculty accordingly. We expect the rationale behind the KITE model described in this paper to contribute to taking initial steps towards such a scenario.

In addition to its hypothetical use as an academic software productivity-profiling tool, the model can also be considered as an evaluation instrument within compensation schemes in academia. Such schemes are designed to award faculty with salary bonuses proportional to the quality and visibility of their academic products. A well-balanced appraisal between the merit and the reward of software products in this context will encourage academic groups to develop first-class research-oriented or technology-oriented software.
The latter is likely to have a positive impact on academic productivity, as it has been already highlighted in various studies in other areas [1], [7]. As a final remark with respect to the KITE model itself, it is worth noting that its modularity and conceptual abstraction make it feasible to extend the model to the assessment of other kinds of academic products, such as hardware, business processes, integrated circuit layouts, industrial designs and technical standards or any other engineering prototypes. In fact, these products share a common technological nature as artifacts resulting from engineering and scientific principles. The extension of the model would require a deeper insight into some definitions in order to achieve generalization, perhaps as a meta-model formulation. We anticipate that, in such formulation, the notions ascribed to the E determinant would be pivotal (those concerning the rigorous exercise of the relevant engineering branch or other involved disciplines in order to create a first-class product). The path to develop this meta-model is still under discussion considering that either a deductive or an inductive construction may be possible. For the time being, we are working on the concrete instruments needed to make the KITE model operative. Instrumentation and validation of the model will be reported in a forthcoming study. References Appendix A. K tests <table> <thead> <tr> <th>K attributes</th> <th>description</th> </tr> </thead> <tbody> <tr> <td>K₀</td> <td>The software implements a previously not known natural, social, organizational, scientific, algorithmic or computing model.</td> </tr> <tr> <td>K₁</td> <td>The research study originating the software has been disseminated to the academic community in recognised scholarly journals or well-known academic conferences on a relevant field.</td> </tr> </tbody> </table> K final score: \( K = K₀K₁ \) Appendix B. **E tests** <table> <thead> <tr> <th>E attributes</th> </tr> </thead> <tbody> <tr> <td><strong>E&lt;sub&gt;1&lt;/sub&gt;</strong></td> </tr> <tr> <td>Resistance towards invalid input data or incorrect commands.</td> </tr> <tr> <td>Fault-tolerance to operating system or hardware crashes.</td> </tr> <tr> <td>Agreement between the software product and its specification.</td> </tr> <tr> <td>The software product can be adjusted to unforeseen changes in its underlying operating environment.</td> </tr> <tr> <td><strong>E&lt;sub&gt;2&lt;/sub&gt;</strong></td> </tr> <tr> <td>The software product admits incorporation of external changes with low effort.</td> </tr> <tr> <td>Support to functionality extensions in future versions.</td> </tr> <tr> <td>A well-defined version control procedure for the software.</td> </tr> <tr> <td>The software product adheres to architectural styles allowing easy scalability (e.g. 
blackboard style, publish-subscribe style).</td> </tr> <tr> <td>Capability of scaling-up designs in order to improve performance.</td> </tr> <tr> <td>The software uses configuration files for operating parameter settings.</td> </tr> <tr> <td>The user interface is decoupled from the domain logic.</td> </tr> <tr> <td>The software can be easily debugged.</td> </tr> <tr> <td>The software exhibits adaptive maintainability.</td> </tr> <tr> <td>Well-documented architectural design.</td> </tr> <tr> <td><strong>E&lt;sub&gt;3&lt;/sub&gt;</strong></td> </tr> <tr> <td>Compliance with stated expected response times for each use-case or functionality.</td> </tr> <tr> <td>The software delivers within admissible stated response times (average, minimal, maximal).</td> </tr> <tr> <td>The software is able to handle concurrency.</td> </tr> <tr> <td>Usability under low or degraded rates of performance.</td> </tr> <tr> <td>Compliance with stated expected response times for batch processing, if any.</td> </tr> <tr> <td><strong>E&lt;sub&gt;4&lt;/sub&gt;</strong></td> </tr> <tr> <td>The user interface is self-explanatory or easy to understand.</td> </tr> <tr> <td>The user interface is customizable.</td> </tr> <tr> <td>The depth vs breadth ratio of user options is appropriate.</td> </tr> <tr> <td>The software can be adapted easily to new operating systems or hardware.</td> </tr> <tr> <td><strong>E&lt;sub&gt;5&lt;/sub&gt;</strong></td> </tr> <tr> <td>The software mitigates the impact of expected security breaches.</td> </tr> <tr> <td>The software properly catches security breaches.</td> </tr> <tr> <td>High safety level against known vulnerabilities.</td> </tr> <tr> <td>High data reliability level for simulated operating system or hardware breakdowns.</td> </tr> <tr> <td>High data integrity level for unexpected breakdowns or unauthorized access.</td> </tr> <tr> <td><strong>E&lt;sub&gt;6&lt;/sub&gt;</strong></td> </tr> <tr> <td>The software is platform-independent.</td> </tr> <tr> <td>New layers of software can be added to the original product.</td> </tr> <tr> <td>The software provides portability to other hardware platforms.</td> </tr> <tr> <td><strong>E&lt;sub&gt;7&lt;/sub&gt;</strong></td> </tr> <tr> <td>The software supports standard technologies for system integration.</td> </tr> <tr> <td>The software architecture (subsystems or components) is well-documented and comprehensible.</td> </tr> <tr> <td>The software provides versioning information for subsystems and components.</td> </tr> <tr> <td><strong>E&lt;sub&gt;8&lt;/sub&gt;</strong></td> </tr> <tr> <td>Functional model documentation is provided.</td> </tr> <tr> <td>Structure, domain and persistence models are well-documented.</td> </tr> <tr> <td>Dynamic and behaviour models are well-documented.</td> </tr> <tr> <td>User guide and administrator manual are well-documented and comprehensible.</td> </tr> <tr> <td>The software is equipped with extensive and friendly online help assistance.</td> </tr> </tbody> </table>

**E final score:** \[ E = \sum_{i=1}^{8} E_i \]

Appendix C. I tests

I attributes

I₁: The software has been announced or advertised with worldwide/local coverage.

I₂: The software is released in the English language or is parameterisable to a suitable language for the intended public.

I₃: The software is hosted and distributed in a public software repository or on a private dedicated download server.

I₄: The software is provided with an install/setup assistant application.

I₅: The reported number of different users of the software is larger than 10/100/1000.
I₆: More than 50% of users have given positive reviews to the software.

I₇: The software was released with an open source or proprietary license.

I final score: \( I = \sum_{i=1}^{7} I_i \)

Appendix D. T tests

T attributes

T₁: The software is a new technology as such, or a previously unknown application (to a field or problem) of a known technology.

T₂: The software is new to the world/country/institution.

T₃: The software implements a distinct core or aesthetic feature compared to similar products or previous versions.

T₄: The software improves on memory consumption or execution times compared to similar products or previous versions.

T₅: The software improves any of its quality attributes \( E_1, \ldots, E_8 \).

T₆: The software saves on installation costs.

T₇: The software is released as a new installation or an upgrade version.

T₈: The idea originating the innovation realised by the software was conceived by authors affiliated with the institution, with an external institution, or both.

T final score: \( T = \sum_{i=1}^{8} T_i \)

Sergio A. Rojas-Galeano He received a BEng. in Systems Engineering from the National University of Colombia, an MSc. in Intelligent Systems from the University of London (2005), and a PhD. in Computer Science from the University of London (2009). His areas of research interest include machine learning, data mining, optimization, and scientific software. He is currently an assistant professor at the District University of Bogota and also a researcher in the LAMIC research group. e-mail: srojas@udistrital.edu.co

Henry Diosa He received a BEng. in Systems Engineering from the National University of Colombia, an MSc. in Information Sciences from the District University of Bogota, and a PhD. in Computing Science from Yale University. His areas of research interest include software architecture and software engineering. He is currently an assistant professor at the District University of Bogota and also the leader of the ARQUISOFT research group. e-mail: hdiosa@udistrital.edu.co

Miguel Melgarejo He received a BEng. in Electronics Engineering from the District University of Bogota and an MSc. in Electronics Engineering from Los Andes University, and is currently pursuing a PhD in Electronics Engineering at Javeriana University. His areas of research interest include machine learning, fuzzy logic systems, complex systems and scientific software. He is currently an assistant professor at the District University of Bogota and also a researcher in the LAMIC research group. e-mail: mmelgarejo@udistrital.edu.co
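As a purely illustrative companion to the appendix checklists, the following minimal sketch (ours, not part of the KITE instrumentation; the function names are invented for the example, and the $E$ summation mirrors the reconstructed final-score formula of Appendix B) shows how the per-determinant scores combine from the individual checklist item scores:

```c
#include <stddef.h>

/* Illustrative only: per-determinant scores as given by the "final score"
   lines of Appendices A-D. Checklist item scores are assumed to have been
   assigned elsewhere (e.g. by an evaluation committee). */
static double sum(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

double score_K(double k0, double k1) { return k0 * k1; }   /* K = K0 * K1       */
double score_E(const double e[8])    { return sum(e, 8); } /* E = sum of E1..E8 */
double score_I(const double i[7])    { return sum(i, 7); } /* I = sum of I1..I7 */
double score_T(const double t[8])    { return sum(t, 8); } /* T = sum of T1..T8 */
```

How the four determinant scores would be weighted into a single KITE figure is not specified in the excerpt above, so the sketch intentionally stops at the per-determinant level.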
Comparative Evaluation of Packet Classification Algorithms for Implementation on Resource Constrained Systems

Gianluca Varenni*, Federico Stirano***, Elisa Alessio**, Mario Baldi*, Loris Degioanni*, Fulvio Risso*

* Politecnico di Torino, Dipartimento di Automatica e Informatica, Torino, Italy
** Telecom Italia Labs - System On Chip, Torino, Italy
*** Istituto Superiore Mario Boella, Torino, Italy

{gianluca.varenni,mario.baldi,loris.degioanni,fulvio.risso}@polito.it; stirano@ismb.it; elisa.alessio@tilab.com

Published by IEEE. DOI: 10.1109/ConTEL.2005.185835

Abstract – This paper provides a comparative evaluation of a number of known classification algorithms that have been considered for both software and hardware implementation. Unlike other sources, the comparison has been carried out on implementations based on the same principles and design choices. Performance measurements are obtained by feeding the implemented classifiers with various traffic traces in the same test scenario. The comparison also takes into account the implementation feasibility of the considered algorithms in resource constrained systems (e.g. embedded processors on special purpose network platforms). In particular, the comparison focuses on achieving a good compromise between performance, memory usage, flexibility and code portability to different target platforms.

I. INTRODUCTION

A vast literature on classification algorithms and their performance does exist, but our work is nevertheless relevant, since existing evaluations do not allow a meaningful comparison based on real-life data. In fact, a comparison based on the existing literature could be carried out only according to analytical worst-case bounds. Even though figures on the performance of classification algorithm implementations in real-life scenarios can be found, they are part of studies on a single algorithm: the measurement scenarios are different and the implementations are not uniform; consequently, the results are not comparable.

This work studies known classification algorithms with respect to their suitability for being (i) deployed for common networking applications (i.e., not optimized for a specific one), and (ii) implemented in embedded systems, i.e., systems with strict requirements, limited resource availability, and no specific hardware support, such as content addressable memories.

A (packet) classifier is a collection of rules, usually called a ruleset, that is used to partition network traffic into different groups, sometimes called flows or buckets. Every rule specifies a subset of the network traffic, for example “IP traffic”, or “traffic sent from host 1.2.3.4”, thus somehow characterizing the packets grouped into that flow. When a packet satisfies a rule, the packet is said to match the given rule. A classification algorithm determines whether a packet matches at least one rule of a classifier. Packet classifiers are widely used in IP networking, where rules usually involve one or more packet header fields (e.g. IP source address, TCP destination port).
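Before turning to the multifield case, the notions of rule, ruleset and match can be made concrete with a minimal sketch (ours, not the authors' implementation); it represents a value/mask rule on the IPv4 source and destination addresses and performs a linear scan of the ruleset:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: a two-field value/mask rule (IPv4 source and
   destination address) and a linear scan that returns the index of the
   first matching rule, or -1 when no rule matches. */
struct rule {
    uint32_t src_value, src_mask;
    uint32_t dst_value, dst_mask;
};

int classify_linear(const struct rule *ruleset, size_t n,
                    uint32_t src, uint32_t dst)
{
    for (size_t i = 0; i < n; i++) {
        const struct rule *r = &ruleset[i];
        if ((src & r->src_mask) == r->src_value &&
            (dst & r->dst_mask) == r->dst_value)
            return (int)i;
    }
    return -1;
}
```

A real multifield classifier would add the remaining header fields (layer-4 protocol and ports) and, for the non-linear algorithms discussed later, replace the scan with a dedicated data structure.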
Each rule $R$ is composed of $d$ components, so that each component $R[i]$ applies to a specific header field. When more than one field is considered, the classifier is said to be multifield. As an example, Table 1 shows a small multifield ruleset that includes value/mask rules on the source and destination IP addresses.

Packet classifiers are widely used for various network applications, many of which are related to quality of service (QoS) provision, and consequently in several types of network devices that might be implemented as or composed of embedded systems. Examples of QoS related applications of packet classifiers are:

- Traffic conditioning and shaping appliances; they use multifield classifiers, usually on session tuples, to separate traffic flows in order to apply admission, marking and shaping policies to them. Traffic conditioning applications or functionality are fundamental in the deployment of both the IntServ [1] and DiffServ [2][3] approaches.
- IntServ routers; they use multifield classifiers, usually on session tuples, to separate traffic flows in order to store packets in different queues on which scheduling algorithms suitable to provide the required QoS are applied.
- DiffServ routers; they use single-field classifiers with a limited ruleset concerning the value of the DS (Differentiated Services) field [3] to separate packets belonging to different traffic classes in order to handle them according to the corresponding per-hop behavior (PHB).

This work aims at identifying classification algorithms that can be effectively implemented on embedded systems and deployed in any of the above listed applications. Execution in embedded systems imposes strict limits on the characteristics of the algorithms, such as simple (static) memory management, limited code size, limited CPU usage requirements, limited data storage necessities, and adaptability to various hardware platforms and architectures.

### Table 1. Sample Multifield Ruleset

<table> <thead> <tr> <th>Rule</th> <th>IP source</th> <th>IP destination</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Value = 130.192.1.0, Mask = 255.255.255.0</td> <td>Value = 130.192.2.0, Mask = 255.255.255.0</td> </tr> <tr> <td>2</td> <td>Value = 130.192.2.0, Mask = 255.255.255.0</td> <td>Value = 130.192.1.0, Mask = 255.255.255.0</td> </tr> <tr> <td>3</td> <td>Value = 130.192.0.0, Mask = 255.255.0.0</td> <td>Value = 130.192.3.0, Mask = 255.255.255.0</td> </tr> </tbody> </table>

Our work, and this paper describing it, is organized as follows. The various algorithms proposed in the literature (Section II.B) as well as the metrics commonly deployed to evaluate them (Section II.A) are first surveyed. The implementation objectives and the guidelines followed to develop software for embedded systems are then presented in Section III. Based on this, selection criteria are formulated and used to identify a limited set of algorithms on which to perform a more detailed and targeted comparative evaluation. Section IV provides the results of the comparative evaluation conducted with real-life traffic traces and final conclusive remarks are provided in Section V.

II. THEORETICAL ANALYSIS OF CLASSIFICATION ALGORITHMS

Among others [5], the comparative survey of classification algorithms by Gupta and McKeown [4] provides a detailed comparison of the most important known algorithms for multifield classification.
Even though this work represents a complete and interesting tutorial on classification algorithms, it does not present any performance comparison based on real life network traffic. Our work leverages off some of the criteria and results presented by Gupta and McKeown to select a reduced set of classification algorithms that best fit to be implemented in embedded systems. Another contribution of our work lies in the detailed and homogeneous evaluation of such selected algorithms that have been implemented with common criteria and evaluated in a common test bed using real traffic captures. A. Evaluation metrics and parameters The metrics adopted are the ones commonly used by various authors [6][7][8][9][11][12] in literature, including Gupta and McKeown in [4]: search time, memory consumption, and update time. Search time (T), i.e. the amount of time needed to classify a packet, is the most obvious metric; in order to devise a measurement (at least partially) independent from the particular test bed, the search time is measured in terms of CPU clock cycles. Memory consumption (M) is the amount of memory needed to store the ruleset in some specific data structure in memory, computed either at instantiation or run time. Memory consumption is an excellent indicator of the compression capability of the algorithm measured as the ratio between the ruleset size (i.e. number of rules and number of fields) and its footprint in memory. The update time (U) is the amount of time necessary to insert, delete, or modify a rule in the running ruleset An interesting metric is represented by the number of memory accesses performed by the algorithm, but it is not widely used because getting this data is far from being trivial. The three metrics previously described generally depend on the following parameters: - The number of rules N in the ruleset - The number of fields d globally used within the R[i] components of each rule - The length of each field, in bits, called Wi. In order to simplify the evaluation of the algorithms, we will use a new fictitious parameter W, defined as W = max(Wi) Section A will provide some insight in the implications of such simplification on the comparative evaluation presented later. B. Theoretical complexity of some well-known algorithms In order to have a first general comparison of the classification algorithms and select which to adopt for a more thorough analysis, the theoretical worst-case bounds for the metrics identified in Section A were taken into consideration. Table 2 shows the formulas expressing the bound for each of the metrics. Such formulas were either taken directly from the literature, when available, or inferred from a paper describing the corresponding algorithm. 
Table 2. Theoretical worst-case bounds of the considered algorithms

<table> <thead> <tr> <th>Algorithm</th> <th>Search time (T)</th> <th>Memory usage (M)</th> <th>Update time (U)</th> </tr> </thead> <tbody> <tr> <td>Linear search</td> <td>N</td> <td>N</td> <td>N/A</td> </tr> <tr> <td>Set pruning tries [11]</td> <td>dW</td> <td>N^d</td> <td>N</td> </tr> <tr> <td>Heap-on-Trie [6]</td> <td>W</td> <td>NW</td> <td>W*logN</td> </tr> <tr> <td>Binary search-on-Trie [6]</td> <td>W*logN</td> <td>NW</td> <td>W<em>d</em>logN</td> </tr> <tr> <td>Cross-producting [7]</td> <td>dW</td> <td>N^d</td> <td>N/A</td> </tr> <tr> <td>Hierarchical Cuttings [9]</td> <td>d</td> <td>N^d</td> <td>N/A</td> </tr> <tr> <td>Tuple Space Search [8]</td> <td>N</td> <td>N</td> <td>N</td> </tr> <tr> <td>Recursive Flow Classification [12]</td> <td>d</td> <td>N^d</td> <td>N/A</td> </tr> </tbody> </table>

Hardware based [14] and ad-hoc algorithms [10] were not included in this evaluation, since either the selected metrics cannot be applied to them, or a comparison based on them is meaningless due to the particular nature of such algorithms. Instead, the linear algorithm was included because it is widely used by software based firewalls (e.g. Linux netfilter/iptables [13]) and it is an excellent baseline against which the other algorithms can be compared, especially in the implementation and testing part of this work.

The bound on the update time is not shown for some of the algorithms since they do not explicitly support dynamic updates to the running ruleset. This stems from the fact that these algorithms preprocess the ruleset into a specific custom data structure that does not support insertion or removal of rules. Instead, in order to cope with ruleset changes the whole ruleset must be re-processed, thus yielding a new data structure. Such an approach is usually inefficient, since the preprocessing time is typically quite high.

C. Practical issues with the theoretical complexity

The worst cases in Table 2 show quite clearly that the linear search algorithm outperforms the other algorithms in terms of memory consumption and update time. Its search time performance is comparable to the other algorithms when the number of rules is not large; for example, when classifying UDP flows or TCP connections ($d=5$ and $W=32$) the break point is one or two hundred rules. In fact, the search time of the other algorithms depends on the total number of bits $dW$ of the various fields in each rule, because the classification algorithm processes the classification fields bit by bit; in particular, this is the approach used by all the algorithms based on tries. Consequently, the linear algorithm might be particularly interesting in cases, such as IPv6 addresses, in which the total number of bits $dW$ is high.

As a matter of fact, the theoretical analysis previously conducted is limited by several factors:

- The performance of many classification algorithms when used with real traffic might be very different from the theoretical results shown in Table 2; this is particularly true for heuristics, which are engineered to achieve good performance in the average case, and not in the worst case.
- The theoretical complexities shown in Table 2 have been devised assuming that all fields used for the classification have the same length, equal to the length of the largest one; this simplification can lead to unrealistic theoretical results (e.g. in the case of IPv6 session identifiers, the length of a TCP/UDP port would be taken to be 128 bits, which is completely misleading).
A solution to this problem could be to re-formulate each metric taken into consideration using the various fields' lengths $W_i$, but this is out of the scope of this paper.

III. IMPLEMENTATION

An objective of this work is to identify and evaluate the packet classification algorithms that are most suitable for an implementation on resource constrained systems. When writing software for an embedded system, specific constraints have to be taken into account in order to ensure good performance and flexibility in terms of code portability to different target platforms; hence, several aspects have been considered while implementing the above mentioned algorithms.

First of all, the main goal of our work was to write code portable to different target platforms, independent from the processor and the operating system used. To accomplish this objective, we developed a software library made up of pure ANSI C, trying to avoid any use of OS/compiler support functions that could not be available on special purpose processors. The crucial point in generating portable code is to separate the coding of the functional modules from the one related to the specific target environment. This can be achieved by defining some sort of API, which avoids the use of platform dependent functions directly inside the code. A second consideration is that the code should use static memory allocation, since a dynamic allocation infrastructure is not guaranteed to be present on all the target platforms.

Another requirement is that the code should avoid the use of explicit pointers in the raw data structures containing the ruleset; in fact, sometimes the code creating and initializing the data structure and the code that classifies packets using this structure run either on different processors (e.g. network processors using multiple processing units) or within different address spaces (e.g. code running partially at kernel level and partially at user level on a general purpose PC). A commonly used solution to the problem is to make use of indirect addressing, using only displacement pointers in the data structure, and the base pointer outside it.

In a network embedded system we can distinguish between data-plane functions (related to packet processing functionalities, with high performance requirements) that usually run on specific processor engines, and control-plane functions (for data structure initialization and configuration, usually with high memory requirements) that may run on a general purpose processor. Thus, one general issue is to modularize the code as deeply as possible, trying to separate the main algorithm functionalities, which may have high performance requirements, from the control and configuration functions that may run on a different processor.

A. Selecting the algorithms to be implemented

Given the previous considerations, and taking into account the practical issues highlighted in Section II, we decided which algorithms to implement to meet our objectives.

1. We excluded Cross-Producting and Set-Pruning Tries, because their memory consumption grows as $N^d$, which is extremely critical even with rather low values of $N$ and $d$ (e.g. with $N=100$ rules and $d=4$ fields the memory consumption is about $10^8$). While RFC and HiCuts have the same worst case memory consumption, they are heuristic algorithms; therefore this value alone is not enough to get rid of them.
2. We excluded Heap-on-Trie and Binary-search-on-Trie, because their memory consumption and search time grow as $W^d$, which is too large (e.g. this value is larger than $10^8$ when the maximum field size $W$ is 128 bits and the number of fields $d$ is 5); moreover, the paper presenting these algorithms does not give any hint about a working implementation of them. Although the Hierarchical Tries algorithm has the same search time as the two previous ones, it has not been excluded, because of its excellent characteristics with regard to memory consumption.

3. We excluded HiCuts, because this algorithm is patent pending.

4. Tuple Space Search was excluded essentially because it was decided that the comparative study would include a single heuristic algorithm, and from the information we gathered in the literature the implementation details of RFC seemed clearer.

In summary, we decided to implement the Linear algorithm, to be used as a baseline for the comparison, the Hierarchical Tries algorithm (the only remaining non-heuristic algorithm after the screening described above), and the Recursive Flow Classification algorithm.

IV. PERFORMANCE EVALUATION

Although our implementation is targeted to both general and special purpose platforms, so far it has been validated through extensive tests only on a standard personal computer. We did not consider tests on special purpose platforms in the context of this work, since it specifically aims at giving a homogeneous comparison between the implementations of the various algorithms by measuring their performance in real-life working conditions. Moreover, the obtained experimental results are compared against the theoretical worst-case results. However, tests on special purpose platforms will be carried out as future work, in an effort to evaluate the performance disparities on different platforms.

A. Testbed

The tests were conducted using a network trace taken from our university link to the Italian trans-university backbone. This trace has the following characteristics:

- duration: 6 hours
- total packets: 24 million
- total bytes: 13 GBytes
- average traffic: 5 Mbps, 1100 pps.

The implemented algorithms have been compiled with the Microsoft Visual C++ 6.0 SP 5 compiler. We used an Intel Pentium IV 2GHz workstation with 1GB RAM, running Microsoft Windows XP. The measurements were taken with the x86 assembler instruction RDTSC (Read TimeStamp Counter), which gives the number of CPU clock ticks since the machine bootstrap.

We used the ruleset running on the router connected to the same link on which we captured the network trace (the packets were captured immediately before the router classifier); this ruleset is formed of 349 rules, each rule working on these fields:

- source / destination IPv4 address
- Layer 4 protocol (TCP/UDP/ICMP/any)
- source / destination TCP/UDP port.

In order to evaluate the algorithms with rulesets of different size, we extrapolated some fictitious rulesets from the original one. These are the new rulesets we defined:

- 2 rulesets formed of 50 rules (rules 1-50 and 51-100 of the original ruleset)
- 2 rulesets formed of 100 rules (rules 1-100 and 101-200 of the original ruleset)
- 1 ruleset formed of 200 rules (rules 1-200 of the original ruleset).

B. Search time test results

This test aims at measuring the average packet classification time for the various rulesets; the results are shown in Table 3.
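As a side note on methodology, the RDTSC-based timing described in the testbed section can be sketched as follows. This is illustrative only: GCC/Clang inline-assembly syntax is used here, whereas the measurements reported in the paper were taken under Microsoft Visual C++, and `struct packet` and `classify_packet` are placeholders for whatever representations the tested classifiers use.

```c
#include <stdint.h>

struct packet;   /* placeholder for the packet representation under test */

/* Read the x86 time-stamp counter (clock ticks since machine bootstrap). */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Average number of clock cycles spent classifying n packets. */
uint64_t mean_search_cycles(const struct packet *const *pkts, int n,
                            int (*classify_packet)(const struct packet *))
{
    uint64_t start = rdtsc();
    for (int i = 0; i < n; i++)
        (void)classify_packet(pkts[i]);
    return n > 0 ? (rdtsc() - start) / (uint64_t)n : 0;
}
```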
The results of this test show that the mean search time grows linearly with the number of rules in the case of the linear algorithm; in the case of the Hierarchical Tries algorithm the search time also seems to grow linearly, but with a much smaller slope. The RFC algorithm, instead, shows a mean search time that is independent of the number of rules in the ruleset.

By comparing the results in Table 3 with the worst cases in Table 2, we can note that:
- the linear algorithm performs worse than the other two algorithms in our tests, compared to the theoretical results;
- the Hierarchical Tries algorithm seems to be loosely dependent on the number of rules N, while its worst case is independent from this parameter. This behavior could be due to the fact that the number of recursive visits of the tries grows with the number of rules N.

C. Memory consumption test results

We measured the amount of memory needed to store the raw data structure containing the ruleset for each algorithm. The results of this test are shown in Table 4.

D. Preprocessing time test results

The last test attempts to measure the amount of time needed to process the various rulesets and create the internal data structures used by each classification algorithm. The results of this test are shown in Table 5.

Table 3. Mean search time (CPU clock cycles)

<table> <thead> <tr> <th>RULESETS</th> <th>Number of rules</th> <th>Linear</th> <th>HiTrie</th> <th>RFC</th> </tr> </thead> <tbody> <tr> <td>Ruleset 1-50</td> <td>50</td> <td>2603</td> <td>981</td> <td>419</td> </tr> <tr> <td>Ruleset 51-100</td> <td>50</td> <td>2170</td> <td>560</td> <td>422</td> </tr> <tr> <td>Ruleset 1-100</td> <td>100</td> <td>4572</td> <td>1014</td> <td>416</td> </tr> <tr> <td>Ruleset 101-200</td> <td>100</td> <td>4408</td> <td>1141</td> <td>420</td> </tr> <tr> <td>Ruleset 1-200</td> <td>200</td> <td>8949</td> <td>1276</td> <td>428</td> </tr> <tr> <td>Ruleset 1-349</td> <td>349</td> <td>17552</td> <td>2032</td> <td>437</td> </tr> </tbody> </table>

Table 4. Memory consumption test results

<table> <thead> <tr> <th>RULESETS</th> <th>Number of rules</th> <th>Linear</th> <th>HiTrie</th> <th>RFC</th> </tr> </thead> <tbody> <tr> <td>Ruleset 1-50</td> <td>50</td> <td>2192</td> <td>32708</td> <td>1838596</td> </tr> <tr> <td>Ruleset 51-100</td> <td>50</td> <td>2192</td> <td>34028</td> <td>1836964</td> </tr> <tr> <td>Ruleset 1-100</td> <td>100</td> <td>4192</td> <td>64588</td> <td>1841668</td> </tr> <tr> <td>Ruleset 101-200</td> <td>100</td> <td>4192</td> <td>59428</td> <td>1847796</td> </tr> <tr> <td>Ruleset 1-200</td> <td>200</td> <td>8192</td> <td>115068</td> <td>1850148</td> </tr> <tr> <td>Ruleset 1-349</td> <td>349</td> <td>14112</td> <td>155048</td> <td>6074748</td> </tr> </tbody> </table>

Table 5. Preprocessing time test results

<table> <thead> <tr> <th>RULESETS</th> <th>Number of rules</th> <th>Linear</th> <th>HiTrie</th> <th>RFC</th> </tr> </thead> <tbody> <tr> <td>Ruleset 1-50</td> <td>50</td> <td>10.6 μs</td> <td>0.84 ms</td> <td>455 ms</td> </tr> <tr> <td>Ruleset 51-100</td> <td>50</td> <td>15.5 μs</td> <td>0.87 ms</td> <td>448 ms</td> </tr> <tr> <td>Ruleset 1-100</td> <td>100</td> <td>16.7 μs</td> <td>1.08 ms</td> <td>857 ms</td> </tr> <tr> <td>Ruleset 101-200</td> <td>100</td> <td>16.1 μs</td> <td>1.61 ms</td> <td>966 ms</td> </tr> <tr> <td>Ruleset 1-200</td> <td>200</td> <td>25.5 μs</td> <td>3.19 ms</td> <td>2.91 s</td> </tr> <tr> <td>Ruleset 1-349</td> <td>349</td> <td>43.5 μs</td> <td>5.43 ms</td> <td>1289 s</td> </tr> </tbody> </table>

The outcome of this test shows that the trend is roughly linear in the number of rules for the linear and Hierarchical
Tries algorithm; moreover the latter is about 100 times slower than the former one, but the overall time to process the original ruleset containing 349 rules seems to be acceptable (less than 10 ms on the test platform). The RFC algorithm shows instead a rather interesting behavior: the trend is roughly linear on the number of rules up to 200 rules, with a cost that is about three orders of magnitude more expensive than the Hierarchical Tries algorithm; when we compute the data structure with the entire ruleset of 349 rules, the preprocessing time literally explodes to about 20 minutes. This explosion is generally due to two main factors: 1. It is a heuristic algorithm, so each metric normally depends on the particular ruleset used for the test. 2. Some experiments on this algorithm have shown that this behavior is largely due to rules containing a large number of “any” values in their components. V. CONCLUSIONS A continuously growing number of network appliances are deploying packet classifiers to implement Quality of Service, security, traffic engineering functionalities. As a consequence, in the last years several authors have proposed novel algorithms to achieve better results in terms of classification time and memory consumption. Many works provided case studies of such algorithms applied to a large number of real-life rulesets and network traffic traces. However, a fair comparison with common criteria and test cases has not yet been provided. Our main contribution in this work is filling this gap, by providing a homogeneous evaluation of three classification algorithms that have been implemented following the same criteria. Our tests have shown that the Recursive Flow Classification algorithm outperforms, as expected, the other two algorithms in terms of search time. In fact, its heuristics is able to effectively exploit the characteristics of the real-life rulesets considered. However, it is known that this algorithm does not support dynamic updates, and our tests have shown that its preprocessing time is unpredictable. The Hierarchical Tries algorithm shows acceptable performance in terms of classification time, being less than one order of magnitude worse that RFC. Instead it features low memory consumption, outperforming RFC for more than one order of magnitude. In practice, we have shown that the Hierarchical Tries algorithm is preferable over RFC when memory consumption and preprocessing time are more critical than classification time alone. Finally, our tests confirm that the linear algorithm, despite the worst classification time with large rulesets, is the one that assures the lowest memory consumption, the fastest preprocessing phase, and the most flexible support for dynamic updates. VI. REFERENCES
Unit 9: Logic Synthesis and Verification - Course contents - Logic synthesis basics - Binary-decision diagram (BDD) - Verification - Logic optimization - Technology mapping - Readings - Chapter 11 Logic Synthesis & Verification - **Logic synthesis** programs transform Boolean expressions or register-transfer level (RTL) description (in Verilog/VHDL/C) into logic gate networks (netlist) in a particular library. - Three different tasks - two-level combinational synthesis - multilevel combinational synthesis - sequential synthesis - Optimization goals: minimize area, delay, and power, etc - **Verification**: Checks the equivalence of a specification and an implementation. Logic Synthesis & Verification - **Technology-independent** optimization - Works on Boolean expression equivalent. - Estimates size based on # of literals. - Uses don’t-cares, common factor extraction (factorization), etc. to optimize logic. - Uses simple delay models. - **Technology-dependent** optimization: technology mapping/library binding - Maps Boolean expressions into a particular cell library. - May perform some optimizations in addition to simple mapping. - Uses more accurate delay models based on cell structures. Boolean Functions - $B = \{0, 1\}, Y = \{0, 1, D\}$ - A Boolean function $f: B^m \rightarrow Y^n$ - $f = \overline{x_1} \overline{x_2} + \overline{x_2} x_3 + \overline{x_2} x_3 + x_1 x_2 + x_2 x_3 + x_1 x_3$ - Input variables: $x_1, x_2, \ldots$ - The value of the output partitions $B^m$ into three sets - the on-set - the off-set - the dc-set (don’t-care set) Minterms and Cubes - A **minterm** is a product of all input variables or their negations. - A minterm corresponds to a single point in $B^n$. - A **cube** is a product of the input variables or their negations. - The fewer the number of variables in the product, the bigger the space covered by the cube. ``` x_1 \cdot x_2 \cdot \neg x_3 x_1 \cdot x_3 \neg x_3 ``` Implicant and Cover - An **implicant** is a cube whose points are either in the on-set or the dc-set. - A **prime implicant** is an implicant that is not included in any other implicant. - A set of prime implicants that together cover all points in the on-set (and some or all points of the dc-set) is called a prime cover. - A prime cover is **irredundant** when none of its prime implicants can be removed from the cover. - An irredundant prime cover is **minimal** when the cover has the minimal number of prime implicants. Cover Examples - \( f = \overline{x_1} \overline{x_3} + \overline{x_2} x_3 + x_1 x_2 \) - \( f = \overline{x_1} x_2 + x_2 \overline{x_3} + x_1 x_3 \) Canonical Forms - A **canonical form** of a Boolean function is a unique representation of the function. - It can be used for verification purposes. - The **truth table** or the **sum of minterms** are canonical forms. - They grow exponentially with the number of input variables. - A prime irredundant cover is not a canonical form. - **Reduced ordered binary decision diagram (ROBDD)**: a canonical form that is interesting from a practical point of view. Logic Synthesis in Practice - Specify the logic-level behavioral description of the circuit in some hardware-description language. - Extract from this description the Boolean expressions related to the logic and represent them in some suitable internal form. - Manipulate these expressions to obtain an optimized representation (two-level or multilevel). - Perform technology mapping, a mapping from the abstract optimized representation to a netlist of cells from a library. 
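Before moving on to decision diagrams, the cube and implicant notions from the preceding slides can be made concrete with a small sketch (ours, not part of the original slide set); a cube over up to 32 variables is stored as a care-mask plus a polarity word:

```c
#include <stdint.h>
#include <stdbool.h>

/* A cube over at most 32 variables: bit i of `care` is 1 when variable x_i
   appears in the product term; the corresponding bit of `value` gives its
   required polarity (1 = x_i, 0 = complemented x_i). */
struct cube {
    uint32_t care;
    uint32_t value;
};

/* A minterm (full assignment, one bit per variable) lies inside the cube
   iff it agrees with the cube on every variable the cube cares about. */
bool cube_contains_minterm(struct cube c, uint32_t minterm)
{
    return ((minterm ^ c.value) & c.care) == 0;
}

/* Cube a contains cube b (a covers at least the points of b) iff every
   literal of a also appears in b with the same polarity; the fewer literals
   a cube has, the larger the part of the Boolean space it covers. */
bool cube_contains_cube(struct cube a, struct cube b)
{
    return (a.care & ~b.care) == 0 &&
           ((a.value ^ b.value) & a.care) == 0;
}
```

With this representation, a prime implicant is simply an implicant cube that is not contained in any other implicant cube of the function.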
Binary-Decision Diagram (BDD) Principles

- **Restriction** resulting in the positive and negative cofactors of a Boolean function:

\[ f_{x_i} = f(x_1, \ldots, x_{i-1}, '1', x_{i+1}, \ldots, x_m) \]
\[ f_{\overline{x}_i} = f(x_1, \ldots, x_{i-1}, '0', x_{i+1}, \ldots, x_m) \]
\[ f = x_1 \overline{x}_2 x_3 + \overline{x}_1 x_2 \overline{x}_3 + \overline{x}_1 x_2 x_3 + \overline{x}_1 \overline{x}_2 x_3 + x_1 x_2 \overline{x}_3 + x_1 x_2 x_3 \]
\[ f_{x_1} = \overline{x}_2 x_3 + x_2 \overline{x}_3 + x_2 x_3 \]
\[ f_{\overline{x}_1} = \overline{x}_2 x_3 + x_2 \overline{x}_3 + x_2 x_3 \]

- **Shannon expansion** (already known to Boole) states:

\[ f = x_i \cdot f_{x_i} + \overline{x}_i \cdot f_{\overline{x}_i} \]

- A complete expansion can be obtained by successively applying the Shannon expansion on all variables of a function until either of the constant functions '0' or '1' is reached.

Example Ordered Binary-Decision Diagram (OBDD)

- The complete Shannon expansion can be visualized as a tree (solid lines correspond to the positive cofactors and dashed lines to negative cofactors).

\[ f = x_1 \overline{x}_2 x_3 + \overline{x}_1 x_2 \overline{x}_3 + \overline{x}_1 x_2 x_3 + \overline{x}_1 \overline{x}_2 x_3 + x_1 x_2 \overline{x}_3 + x_1 x_2 x_3 \]

Creating A Reduced OBDD (ROBDD)

- An OBDD is a directed tree \( G(V,E) \).
- Each vertex \( v \in V \) is characterized by an associated variable \( \phi(v) \), a high subtree \( \eta(v) \) (\( \text{high}(v) \)) and a low subtree \( \lambda(v) \) (\( \text{low}(v) \)).
- Procedure to reduce an OBDD:
  - Merge all identical leaf vertices and appropriately redirect their incoming edges;
  - Proceed from bottom to top, processing all vertices: if two vertices \( u \) and \( v \) are found for which \( \phi(u) = \phi(v) \), \( \eta(u) = \eta(v) \), and \( \lambda(u) = \lambda(v) \), merge \( u \) and \( v \) and redirect incoming edges;
  - For vertices \( v \) for which \( \eta(v) = \lambda(v) \), remove \( v \) and redirect its incoming edges to \( \eta(v) \).

ROBDD Properties

- The ROBDD is a canonical representation, given a fixed ordering of the variables.
- The ROBDD is a compact representation for many Boolean functions used in practice.
- Variable ordering can greatly affect the size of an ROBDD.
- E.g., the parity function of $2k$ bits: $f = \bigoplus_{j=1}^{k} x_{2j-1} \oplus x_{2j}$

A BDD Package

- A BDD package refers to a software program that can manipulate ROBDDs. It has the following properties:
  - Interaction with BDDs takes place through an abstract data type (functionality is independent of the internal representation used).
  - It supports the conversion of some external representation of a Boolean function to the internal ROBDD representation.
  - It can store multiple Boolean functions, sharing all vertices that can be shared.
  - It can create new functions by combining existing ones (e.g., $h = f \cdot g$).
  - It can convert the internal representation back to an external one.

BDD Data Structures

- A triple $(\phi, \eta, \lambda)$ uniquely identifies an ROBDD vertex.

```c
struct vertex {
    char *phi;                    /* variable associated with the vertex */
    struct vertex *eta, *lambda;  /* high and low children               */
    ...
};
```

- A unique table (implemented by a hash table) stores all triples already processed.
```c
struct vertex *old_or_new(char *phi, struct vertex *eta, struct vertex *lambda)
{
    if ("a vertex v = (phi, eta, lambda) already exists in the unique table")
        return v;
    else {
        v = "new vertex pointing at (phi, eta, lambda)";
        return v;
    }
}
```

Building an ROBDD

```c
struct vertex *robdd_build(struct expr f, int i)
{
    struct vertex *eta, *lambda;
    char *phi;

    if (equal(f, '0')) return v0;
    else if (equal(f, '1')) return v1;
    else {
        phi = π(i);                              /* the i-th variable from the top */
        eta = robdd_build(f_{phi}, i + 1);       /* positive cofactor of f wrt phi */
        lambda = robdd_build(f_{\overline{phi}}, i + 1);  /* negative cofactor     */
        if (eta == lambda) return eta;
        else return old_or_new(phi, eta, lambda);
    }
}
```

- The procedure directly builds the compact ROBDD structure.
- A simple symbolic computation system is assumed for the derivation of the cofactors.
- \( π(i) \) gives the \( i \)-th variable from the top.

robdd_build Example

(Figure: step-by-step trace of robdd_build on a small three-variable example, showing the recursive calls on the cofactors and the vertices \( v_0, v_1, \ldots, v_6 \) entered into the unique table.)

Separate algorithms could be designed for each separate operator on ROBDDs, such as AND, NOR, etc. However, the universal if-then-else operator 'ite' is sufficient. \( z = \text{ite}(f, g, h) \) equals \( g \) when \( f \) is true and equals \( h \) otherwise:

\[ z = \text{ite}(f, g, h) = f \cdot g + \overline{f} \cdot h \]

Examples:

\[ z = f \cdot g = \text{ite}(f, g, '0') \]
\[ z = f + g = \text{ite}(f, '1', g) \]

The ite operator is well-suited for a recursive algorithm based on ROBDDs (\( \phi(v) = x \)):

\[ v = \text{ite}(F, G, H) = (x, \text{ite}(F_x, G_x, H_x), \text{ite}(F_{\overline{x}}, G_{\overline{x}}, H_{\overline{x}})) \]

**The ite Algorithm**

```c
struct vertex *apply_ite(struct vertex *F, struct vertex *G, struct vertex *H, int i)
{
    char *x;
    struct vertex *eta, *lambda;

    if (F == v1) return G;
    else if (F == v0) return H;
    else if (G == v1 && H == v0) return F;
    else {
        x = π(i);
        eta = apply_ite(F_x, G_x, H_x, i + 1);
        lambda = apply_ite(F_{\overline{x}}, G_{\overline{x}}, H_{\overline{x}}, i + 1);
        if (eta == lambda) return eta;
        else return old_or_new(x, eta, lambda);
    }
}
```

Comments on the ite Algorithm

- The algorithm processes the variables in the order used in the BDD package.
- $\pi(i)$ gives the $i^{th}$ variable from the top; $\pi^{-1}(x)$ gives the index position of variable $x$ from the top.
- Computation of the restrictions: suppose that $F$ is the root vertex of the function for which $F_x$ should be computed; then

\[ F_x = \eta(F) \text{ if } \pi^{-1}(\phi(F)) = i, \qquad F_x = F \text{ otherwise (the function does not depend on } x\text{)}. \]

- The calculation of $F_{\overline{x}}$ is done in an analogous way (using $\lambda(F)$).
- The time complexity of the algorithm is $O(|F| \cdot |G| \cdot |H|)$.
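As a usage note, the Boolean connectives used in the examples on the following slides can be expressed directly through apply_ite. The thin wrappers below are a sketch in the same pseudo-C idiom as the slides; they assume the terminal vertices v0 and v1 and the apply_ite routine above, with variable indices starting at 1 as in the robdd_build example.

```c
/* Sketch only: the usual connectives written as ite() instances,
   reusing apply_ite() and the terminal vertices v0 ('0') and v1 ('1'). */
struct vertex *bdd_and(struct vertex *f, struct vertex *g) { return apply_ite(f, g,  v0, 1); }  /* f . g      */
struct vertex *bdd_or (struct vertex *f, struct vertex *g) { return apply_ite(f, v1, g,  1); }  /* f + g      */
struct vertex *bdd_not(struct vertex *f)                   { return apply_ite(f, v0, v1, 1); }  /* complement */
struct vertex *bdd_xor(struct vertex *f, struct vertex *g) { return apply_ite(f, bdd_not(g), g, 1); }  /* f xor g */
```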
ROBDD Example: Computing $\overline{G}$ from $G$

\[ \overline{G} = \text{ite}(G, '0', '1') \]

ROBDD Example: Computing $H$ from $F$, $G$, $\overline{G}$

\[ H = F \oplus G = \text{ite}(F, \overline{G}, G) \]

Composition

- The composition problem is:
  - the ROBDDs of two functions $f$ and $g$ are known;
  - the output of $g$ is connected to an input of $f$;
  - compute the ROBDD of the composed function $h$, where $h = f(x_1, \ldots, x_{i-1}, g, x_{i+1}, \ldots, x_n)$.
- Using the Shannon expansion, one finds that

\[ h = g \cdot f_{x_i} + \overline{g} \cdot f_{\overline{x}_i} = \text{ite}(g, f_{x_i}, f_{\overline{x}_i}) \]

- Now, the restrictions have to be calculated by dedicated algorithms.

Positive Cofactor

```c
struct vertex *positive_cofactor(struct vertex *F, int r, int i)
{
    char *x;
    struct vertex *eta, *lambda;

    if (F == v1) return v1;
    else if (F == v0) return v0;
    else if (r == i) return F_x;   /* η(F) if F tests the variable at level r, F itself otherwise */
    else {
        x = π(i);
        eta = positive_cofactor(F_x, r, i + 1);
        lambda = positive_cofactor(F_{\overline{x}}, r, i + 1);
        if (eta == lambda) return eta;
        else return old_or_new(x, eta, lambda);
    }
}
```

Positive Cofactor Example: Computing $F_{x_3}$

(Figure: recursion trace of positive_cofactor on the running example, computing the vertex $F_{x_3} = v_{13}$ from $F = v_6$.)

Variable Ordering

- Reordering adjacent variables has only a local effect on the ROBDD.

Variable Ordering (cont'd)

- Finding the ordering that minimizes the ROBDD size for some function is intractable.
- The optimal ordering may change as ROBDDs are being manipulated.
- So, an ROBDD package will try to reorder the variables at distinct moments.
- It could move one variable to the top and back to the bottom and remember the best position. It could then repeat the procedure for the other variables.
- Another "invisible" feature of an ROBDD package is garbage collection.

The Verification Problem

- The issue is to compare a specification $f$ to an implementation $g$.
- They can both be represented by ROBDDs ($F$ resp. $G$).
- In case of a fully specified function, verification is trivial (pointer comparison) because of the strong canonicity of the ROBDD data structure.
- Strong canonicity: the representations of identical functions are the same.
- If there is a dc-set, use two functions $f$ and $d$. The implementation $g$ is correct when $d + f \cdot g + \overline{f} \cdot \overline{g}$ is a tautology (the expression evaluates to '1').

ROBDDs and Satisfiability

- A Boolean function is satisfiable if an assignment to its variables exists for which the function becomes '1'.
- Any Boolean function whose ROBDD is unequal to '0' is satisfiable.
- Suppose that choosing a Boolean variable $x_i$ to be '1' costs $c_i$. Then, the minimum-cost satisfiability problem asks to minimize:

\[ \sum_{i=1}^{n} c_i \mu(x_i) \]

where $\mu(x_i) = 1$ when $x_i = $ '1' and $\mu(x_i) = 0$ when $x_i = $ '0'.

- Solving minimum-cost satisfiability amounts to computing a shortest path in an ROBDD, which can be done in linear time.
- Weights: $w(v, \eta(v)) = c_i$, $w(v, \lambda(v)) = 0$, where variable $x_i = \phi(v)$.

Applications to Combinatorial Optimization

- **Zero-one integer linear programming** can be formulated as a minimum-cost satisfiability problem.
- Consider the (standard form) constraint: \( x_1 + x_2 + x_3 + x_4 = 3 \).
- It can be written as:

\[ (x_1 + x_2) \cdot (x_1 + x_3) \cdot (x_1 + x_4) \cdot (x_2 + x_3) \cdot (x_2 + x_4) \cdot (x_3 + x_4) \cdot (\overline{x}_1 + \overline{x}_2 + \overline{x}_3 + \overline{x}_4) \]

- The first 6 sums in the product state that at least 3 of the 4 variables are 1.
- The last sum states that at least one of the variables is 0.
- Many combinatorial optimization problems can also be directly formulated in terms of the satisfiability problem.

Set Covering

- Given a set \( S = \{s_1, \ldots, s_m\} \) and a set \( K = \{K_1, \ldots, K_n\} \) where each \( K_j (1 \leq j \leq n) \) is a subset of \( S \), find a subset \( \Gamma \) of \( K \) such that the union of the elements of \( \Gamma \) covers \( S \).
- The cost of a cover is the sum of the costs \( c_j \) of the elements \( K_j \) of \( \Gamma \).
- Multiple cost functions are possible. E.g., \( c_j = 1 \) or \( c_j = |K_j| \).
- The problem is NP-complete for most cost functions.

A covering problem can be formulated as a satisfiability problem by associating variables $x_j$ with the sets $K_j$: $(x_1 + x_2 + x_6) \cdot (x_1 + x_4 + x_5 + x_6) \cdot (x_2 + x_3 + x_5) \cdot (x_2 + x_4 + x_6)$. This type of covering is called unate. A binate covering problem has an expression where complemented variables are allowed.

\( \Gamma = \{K_3, K_6\} \) is the optimal solution when $c_j = |K_j|$. $K_3$ is redundant in $\Gamma = \{K_1, K_2, K_3\}$.

Example Simplification Rules in Covering

- $K_3$ is essential.
- $s_2$ dominates $s_3$.
- $K_5$ dominates $K_4$.

Technology-Independent Logic Optimization

- **Two-level**: minimize the # of product terms.
- \( F = x_1x_2x_3 + x_1x_2\overline{x}_3 + \overline{x}_1x_2x_3 + \overline{x}_1x_2\overline{x}_3 + x_1\overline{x}_2x_3 \Rightarrow F = x_2 + x_1x_3. \)
- **Multi-level**: minimize the #'s of literals, variables.
- E.g., equations are optimized using a smaller number of literals.
- Methods/CAD tools: the Quine-McCluskey method (exponential-time exact algorithm), Espresso (heuristics for two-level logic), MIS (heuristics for multi-level logic), Synopsys, etc.

Two-Level Logic Synthesis

- Any Boolean function can be realized in two levels: AND-OR (sum of products), NAND-NAND, etc.
- Direct implementation of two-level logic using PLAs (programmable logic arrays) is not as popular as in the nMOS days.
- Classic problems, solved e.g. by the Quine-McCluskey algorithm.
- Popular cost function: the number of literals in the sum-of-products expression.
- The goal is to find a minimal irredundant prime cover.

Optimality in Two-Level Logic Synthesis

(Figure: a local and a global minimum in the space of covers.)

The Quine-McCluskey Algorithm

- Calculate all prime implicants (of the union of the on-set and dc-set).
- Find the minimal cover of all minterms in the on-set by prime implicants.
  - Construct the covering matrix.
  - Simplify the covering matrix by detecting essential columns, row and column dominance.
  - What is left is the cyclic core of the covering matrix.
  - The covering problem can then be solved by a branch-and-bound algorithm.
- Other methods do not first enumerate all prime implicants; they use an implicit representation by means of ROBDDs.

The Quine-McCluskey Algorithm

- \( F(a, b, c, d) = \sum_m(2, 3, 7, 9, 11, 13) + \sum_d(1, 10, 15) \)
- **Step 1:** Group minterms to find prime implicants by applying \( xy + xy' = x \).
- **Step 2:** Select a minimum set of prime implicants (minimum # of literals) to implement the original function.
- **Exponential-time exact algorithm, huge amounts of memory!**

**Step 1-1** (minterms, grouped by the number of 1's; every term is combined further in the next step):

- 1 = 0001, 2 = 0010
- 3 = 0011, 9 = 1001, 10 = 1010
- 7 = 0111, 11 = 1011, 13 = 1101
- 15 = 1111

**Step 1-2** (pairs; every pair is combined further):

- (1,3) 00-1, (1,9) -001, (2,3) 001-, (2,10) -010
- (3,7) 0-11, (3,11) -011, (9,11) 10-1, (9,13) 1-01, (10,11) 101-
- (7,15) -111, (11,15) 1-11, (13,15) 11-1

**Step 1-3** (quads; none can be combined further, so these are the prime implicants):

- (1, 3, 9, 11) -0-1 = b'd
- (2, 3, 10, 11) -01- = b'c
- (3, 7, 11, 15) --11 = cd
- (9, 11, 13, 15) 1--1 = ad

**Step 2** (prime-implicant chart over the on-set minterms 2, 3, 7, 9, 11, 13):

<table>
<thead>
<tr>
<th>prime implicant</th>
<th>2</th>
<th>3</th>
<th>7</th>
<th>9</th>
<th>11</th>
<th>13</th>
</tr>
</thead>
<tbody>
<tr>
<td>(1, 3, 9, 11) = b'd</td>
<td></td>
<td>×</td>
<td></td>
<td>×</td>
<td>×</td>
<td></td>
</tr>
<tr>
<td>(2, 3, 10, 11) = b'c *</td>
<td>×</td>
<td>×</td>
<td></td>
<td></td>
<td>×</td>
<td></td>
</tr>
<tr>
<td>(3, 7, 11, 15) = cd *</td>
<td></td>
<td>×</td>
<td>×</td>
<td></td>
<td>×</td>
<td></td>
</tr>
<tr>
<td>(9, 11, 13, 15) = ad *</td>
<td></td>
<td></td>
<td></td>
<td>×</td>
<td>×</td>
<td>×</td>
</tr>
</tbody>
</table>

The essential prime implicants (marked *) are b'c (the only cover of minterm 2), cd (the only cover of 7) and ad (the only cover of 13); together they also cover 3, 9 and 11, so

\( F = b'c + cd + ad \)

### Technology Mapping

- **Library-based technology mapping:** standard cell design.
  - Map a function to a limited set of pre-designed cells.
- **Lookup table-based technology mapping:** Lucent, Xilinx FPGAs, etc.
  - Each lookup table (LUT) can implement a very large number of functions (e.g., all functions with 4 inputs and 1 output).
- **Multiplexer-based technology mapping:** Actel FPGAs, etc.
  - Logic modules are constructed with multiplexers.

Standard Cell Revisited

Pattern Graphs for an Example Library: each cell of the library (inv, nand2, nand3, nand4, and2, xor, ...) is represented by a pattern graph and annotated with its area cost, e.g. inv (1), nand2 (2), and2 (3). (Pattern-graph drawings omitted.)

Technology Mapping

- **Technology Mapping**: The optimization problem of finding a minimum-cost covering of the subject graph by choosing from the collection of pattern graphs for all gates in the library.
- A **cover** is a collection of pattern graphs such that every node of the subject graph is contained in one (or more) of the pattern graphs.
- The cover is further constrained so that each input required by a pattern graph is actually an output of some other pattern graph.

Trivial Covering

- Mapped into 2-input NANDs and 1-input inverters.
- 8 2-input NAND gates and 7 inverters, for an area cost of 23.
- Best covering?

\[
\begin{align*}
t1 &= d + e; \\
t2 &= b + h; \\
t3 &= a \cdot t2 + c; \\
t4 &= t1 \cdot t3 + f \cdot g \cdot h;
\end{align*}
\]

Optimal Tree Covering by Dynamic Programming

- If the subject directed acyclic graph (DAG) is a tree, then a polynomial-time algorithm to find the minimum cover exists.
- Based on dynamic programming: optimal substructure? overlapping subproblems?
- Given: subject trees (networks to be mapped), library cells.
- Consider a node $n$ of the subject tree.
- Recursive assumption: for all children of $n$, a best match which implements the node is known.
- The cost of a leaf is 0.
- Consider each pattern tree which matches at $n$; compute its cost as the cost of implementing each node which the pattern requires as an input, plus the cost of the pattern.
- Choose the lowest-cost matching pattern to implement $n$ (a small code sketch of this recurrence is given at the end of this section).

---

Tree-Covering by Dynamic Programming

- If the subject DAG is not a tree:
  - Partition the subject graph into a forest of trees.
  - Cover each tree optimally using dynamic programming.
  - The overall solution is then only an approximation.
- Optimality
  - An optimal sequence of decisions has the property that whatever the initial state and decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.
  - The minimum-area cover for a tree $T$ can be derived from the minimum-area covers for every node below the root of $T$.

Best Covering

- A best covering with an area of 15.
- Obtained by the dynamic programming approach.

Conceptual FPGA Architecture

Logic modules + Routing resources + I/O cells = FPGAs

Xilinx XC4000 FPGA Logic Module Architecture

- Each contains two 4-input LUTs, one 3-input LUT, and two DFFs.
- Can implement any 2 functions of up to 4 variables, one function of up to 5 variables, or selected functions of up to 9 variables.

Lookup Table-Based Technology Mapping

- A $k$-input LUT ($k$-LUT) can implement any function of up to $k$ inputs.

(Figure: a mapping with four 4-input, 1-output LUTs and delay depth = 3 LUTs, versus an optimal mapping with three 4-input, 1-output LUTs and delay depth = 2.)

Multiplexer-based mapping (Actel-style logic module): to implement $f = ab + \overline{a}c$, set $d_0 = d_1 = s_3 = x$, $d_2 = c$, $d_3 = b$, $s_0 = a$, $s_1 = s_2 = 1$, where the module computes

$$z = \overline{(s_3 + s_2)}\,\overline{s_1 s_0}\,d_0 + \overline{(s_3 + s_2)}\,s_1 s_0\,d_1 + (s_3 + s_2)\,\overline{s_1 s_0}\,d_2 + (s_3 + s_2)\,s_1 s_0\,d_3.$$
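Referring back to the dynamic-programming recurrence above, here is a minimal sketch of the bottom-up cost computation on a NAND/INV subject tree. It is not the course's code: the tiny three-cell library (inv = 1, nand2 = 2, and2 = 3) and the node layout are assumptions chosen only to keep the example short.

```c
#include <stdio.h>
#include <limits.h>

/* Subject-tree node: a primary input, an inverter, or a 2-input NAND. */
enum kind { LEAF, INV, NAND2 };

struct node {
    enum kind kind;
    struct node *left, *right;          /* right is unused for LEAF/INV */
};

/* Best cost of implementing node n: try every library pattern that matches
 * at n and add the best costs of the subject-tree nodes the pattern needs
 * as inputs.  Assumed costs: inv = 1, nand2 = 2, and2 = 3 (inv on top of nand2). */
static int best_cost(const struct node *n)
{
    if (n == NULL || n->kind == LEAF)
        return 0;                                      /* leaves cost nothing */

    int best = INT_MAX;
    if (n->kind == INV) {
        int c = 1 + best_cost(n->left);                /* pattern: inv */
        if (c < best) best = c;
        if (n->left && n->left->kind == NAND2) {       /* pattern: and2 = inv(nand2(x, y)) */
            c = 3 + best_cost(n->left->left) + best_cost(n->left->right);
            if (c < best) best = c;
        }
    } else {                                           /* pattern: nand2 */
        int c = 2 + best_cost(n->left) + best_cost(n->right);
        if (c < best) best = c;
    }
    return best;
}

int main(void)
{
    /* f = a AND b, expressed in the subject graph as INV(NAND2(a, b)). */
    struct node a = { LEAF, NULL, NULL }, b = { LEAF, NULL, NULL };
    struct node nd = { NAND2, &a, &b };
    struct node f  = { INV, &nd, NULL };
    printf("best area cost = %d\n", best_cost(&f));    /* 3, e.g. a single and2 cell */
    return 0;
}
```

A real mapper would memoize the best cost and the chosen pattern at every node so that the cover itself can be reconstructed, and it would enumerate matches against the full pattern-graph library instead of the hard-coded cases above.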
Model-based Testing: Next Generation Functional Software Testing By Dr. Bruno Legeard Model-based testing (MBT) is an increasingly widely-used technique for automating the generation and execution of tests. There are several reasons for the growing interest in using model-based testing: - The complexity of software applications continues to increase, and the user’s aversion to software defects is greater than ever, so our functional testing has to become more and more effective at detecting bugs; - The cost and time of testing is already a major proportion of many projects (sometimes exceeding the costs of development), so there is a strong push to investigate methods like MBT that can decrease the overall cost of test by designing tests automatically as well as executing them automatically. - The MBT approach and the associated commercial and open source tools are now mature enough to be applied in many application areas, and empirical evidence is showing that it can give a good ROI; - Model-based testing renews the whole process of functional software testing: from business requirements to the test repository, with manual or automated test execution. It supports the phases of designing and generating tests, documenting the test repository, producing and maintaining the bi-directional traceability matrix between tests and requirements, and accelerating test automation. This paper addresses these points by giving a realistic overview of model-based testing and its expected benefits. We discuss what model-based testing is, how you have to organize your process and your team to use MBT and detail a complete example from business requirements to automated test repository using a fully supported MBT process. What is MBT? Model-based testing refers to the processes and techniques for the automatic derivation of abstract test cases from abstract formal models, the generation of concrete tests from abstract tests, and the manual or automated execution of the resulting concrete test cases. Therefore, the key points of model-based testing are the modeling principles for test generation, the test generation strategies and techniques, and the concretization of abstract tests into concrete, executable tests. A typical deployment of MBT in industry goes through the four stages shown in Figure 1: 1. **Design a Test Model.** The model, generally called the *test model*, represents the expected behavior of the system under test (SUT). Standard modeling languages such as UML are used to formalize the control points and observation points of the system, the expected dynamic behavior of the system, the business entities associated with the test, and some data for the initial test configuration. Model elements such as transitions or decisions are linked to the requirements, in order to ensure bi-directional traceability between the requirements and the model, and later to the generated test cases. Models must be precise and complete enough to allow automated derivation of tests from these models; 2. **Select some Test Generation Criteria.** There are usually an infinite number of possible tests that could be generated from a model, so the test analyst chooses some Test Generation Criteria to select the highest priority tests, or to ensure good coverage of the system behaviors. 
One common kind of test generation criterion is based on structural model coverage, using well-known test design strategies such as equivalence partitioning, cause-effect testing, pair-wise testing, process cycle coverage, or boundary value analysis (see [1] for more details on these strategies). Another useful kind of test generation criterion ensures that the generated test cases cover all the requirements, perhaps with more tests generated for requirements that have a higher level of risk. In this way, model-based testing can be used to implement a requirement- and risk-based testing approach. For example, for a non-critical application, the test analyst may choose to generate just one test for each of the nominal behaviors in the model and each of the main error cases; but for one of the more critical requirements, she/he could apply more demanding coverage criteria such as all loop-free paths, to ensure that the business processes associated with that part of the test model are more thoroughly tested;

3. **Generate the tests.** This is a fully automated process that generates the required number of (abstract) test cases from the test model. Each generated abstract test case is typically a sequence of high-level SUT actions, with input parameters and expected output values for each action. These generated test sequences are similar to the high-level test sequences that would be designed manually in action-word testing [2]. They are easily understandable by humans and are complete enough to be directly executed on the SUT by a manual tester. The test model allows the expected results and the input parameters to be computed. Data tables may be used to link abstract values from the model with concrete test values. To make the tests executable using a test automation tool, a further concretization phase automatically translates each abstract test case into a concrete (executable) script [3], using a user-defined mapping from abstract data values to concrete SUT values, and a mapping from abstract operations into SUT GUI actions or API calls. For example, if the test execution is via the GUI of the SUT, then the action words are linked to the graphical object map, using a test robot such as HP QuickTest Professional, IBM Rational Functional Tester or the open-source robot Selenium. If the test execution of the SUT is API-based, then the action words need to be implemented on this API. This can be a direct mapping or a more complex automation layer. The expected results part of each abstract test case is translated into oracle code that will check the SUT outputs and decide on a test pass/fail verdict. The tests generated from the test model may be structured into multiple test suites, and published into standard test management tools such as HP Quality Center, IBM Rational Quality Manager or the open-source tool TestLink. Maintenance of the test repository is done by updating the test model, then automatically regenerating and republishing the test suites into the test management tools;

4. **Execute the Tests.** The generated concrete tests are typically executed either manually or within a standard automated test execution environment, such as HP QuickTest Professional or IBM Rational Functional Tester. Either way, the result is that the tests are executed on the SUT, and we find that some tests pass and some tests fail.
The failing tests indicate a discrepancy between the SUT and the expected results designed in the test model, which then needs to be investigated to decide whether the failure is caused by a bug in the SUT, or by an error in the model and/or the requirements. Experience shows that model-based testing is good at finding SUT errors, but it is also highly effective at exposing requirements errors [1], even before executing a single test (thanks to the modeling phase).

**Requirements traceability**

The automation of bidirectional traceability between requirements and test cases is a key aspect of the added value of MBT. **Bidirectional traceability** is the ability to trace links between two parts of the software development process with respect to each other. The starting point of the MBT process is, as usual, the informal functional requirements, use cases, descriptions of business processes and all other factors that provide the functional description of the application being tested. To be effective, requirements traceability implies that the requirements repository should be structured enough so that each individual requirement can be uniquely identified. It is desirable to link these informal requirements to the generated tests, and to link each generated test to the requirements that it tests. A best practice in MBT, supported by most of the tools on the market, consists of linking model elements such as decision points and transitions to the relevant requirements. From these links in the test model, test generation tools ensure the automatic generation and maintenance of the traceability matrix between requirements and test cases.

Test repository and test management tools

The purpose of generating tests from the test model is to produce the test repository. This test repository is typically managed by a test management tool, such as HP Quality Center, IBM Rational Quality Manager or the open-source tool TestLink. The goal of such a tool is to help organize and execute test suites (groups of test cases), both for manual and automated tests.

Figure 2. Relationship between the two repositories (tests and requirements).

In the MBT process, the test repository documentation is fully managed by automated generation (from the test model): documentation of the test design steps, requirements traceability links, test scripts and associated documentation are automatically provided for each test case. Therefore, the maintenance of the test repository needs to be done in the test model.

Roles in the MBT process

The MBT process involves three main kinds of roles (see Figure 3).

Figure 3. Main roles in the MBT process.

1. The **Test Analyst** interacts with the customers and subject matter experts regarding the requirements to be covered, and then develops the test model. He/she then uses the test generation tool to automatically generate tests and produce a repository of test suites that will satisfy the project test objectives.
2. The **Subject Matter Expert** is the reference person for the SUT requirements and business needs, and dialogues with the test analyst to clarify the specifications and testing needs.
3. The **Test Engineer** is responsible for connecting the generated tests to the system under test so that the tests can be executed automatically. The input for the test engineer is the test repository generated automatically by the Test Analyst from the test model.
The test analyst is responsible for the quality of the test repository in terms of coverage of the requirements and fault detection capability, so the quality of his/her interaction with the subject matter expert is crucial. In the other direction, the test analyst interacts with the test engineer to facilitate test automation (implementation of key-words). This interaction process is highly iterative.

**Testing nature and levels**

MBT is mainly used for functional black-box testing. This is a kind of back-to-back testing approach, where the SUT is tested against the test model, and any differences in behavior are reported as test failures. The model formalizes the functional requirements, representing the expected behavior at a given level of abstraction. Models can also be used for encoding non-functional requirements such as performance or ergonomics, but this is currently a subject of research in the MBT area. However, security requirements can typically be tested using standard MBT techniques for functional behavior. Regarding the testing level, the current mainstream focus of MBT practice is system testing and acceptance testing, rather than unit or module testing. Integration testing is considered at the level of integration of subsystems. In the case of a large chain of systems, MBT may address the generation of detailed test suites for each sub-system, and manage end-to-end testing for the whole chain.

**Example: application on actiTIME**

Now, we illustrate a full MBT process on a typical web application. This time-tracking application, named actiTIME, is freely available on the web ([www.actitime.com](http://www.actitime.com)). In this section, we use this application to demonstrate the various steps of deploying the MBT process. We illustrate it with Test Designer from Smartesting, which is a model-based testing solution dedicated to enterprise IT applications, secure electronic transactions and packaged applications such as SAP or Oracle E-Business Suite. Test cases are generated from a behavior model of the SUT, using requirements coverage and custom scenarios as test selection criteria. Test Designer models are written in a subset of standard UML. Test Designer supports both manual and automated test execution, using an offline approach. The generated test cases can be output to test management systems like HP Quality Center, IBM Rational Quality Manager or the open-source tool TestLink, with bidirectional traceability and full change management for evolving requirements.

**actiTIME overview**

actiTIME is a time management program developed by Actimind. Details about its features, and free downloads, can be found on the website [www.actitime.com](http://www.actitime.com). Ordinary users have access to their time track for input, review and corrections. They can also manage projects and tasks, do some reporting and of course they can manage their account (see (1) in Figure 4). In our sample model we focus on the user time-tracking features of actiTIME version 1.5; after logging into the system the user can specify how much time he spent on a specific task. A typical scenario is as follows:

1. access a time-track;
2. display the time-entry form;
3. type in the hours spent on assigned tasks;
4. the system warns the user that modifications are not saved yet;
5. save the modifications;
6. in case of overtime, the system displays an error message.

**actiTIME requirements**

In actiTIME a user may have administrator rights. Only administrators can add and remove projects.
For a specific project, a user can add or remove tasks, enter the number of hours spent on a task, etc. To precisely define the expected functional requirements of the actiTIME feature that we model, a list of requirements is given in Table 1.

Figure 4. actiTIME user interface.

Table 1. Summary of actiTIME requirements.

<table>
<thead>
<tr>
<th>Requirement Id</th>
<th>Requirement description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ADMIN/ADD_PROJECT</td>
<td>An administrator can add a new project into the system. A project is linked to a customer and includes several tasks.</td>
</tr>
<tr>
<td>ADMIN/DELETE_PROJECT</td>
<td>An administrator can delete a project from the system.</td>
</tr>
<tr>
<td>LOGIN</td>
<td>When the user tries to log in with an incorrect username or password, an error message is displayed.</td>
</tr>
<tr>
<td>USER/VIEW_TIME-TRACK</td>
<td>A user can display his or her time-track for the current week or any week, in order to report his or her activity.</td>
</tr>
<tr>
<td>USER/ENTER_TIME</td>
<td>A user can enter the number of hours spent on his or her assigned tasks for one or several days.</td>
</tr>
<tr>
<td>USER/REMOVE_TIME</td>
<td>A user can correct the number of hours spent on a task by removing some time.</td>
</tr>
<tr>
<td>USER/SAVE_TIME</td>
<td>After modifying his or her time-track, the user can save the changes.</td>
</tr>
<tr>
<td>USER/SHOW_TIME_TRACKING</td>
<td>A user can display his or her time-track consolidation for any month.</td>
</tr>
<tr>
<td>USER/WORKING_TASK</td>
<td>A user can add or remove tasks from the task list.</td>
</tr>
</tbody>
</table>

**actiTIME test model**

The test model represents the expected behavior of the application, covering the requirements of Table 1. It is based on three UML diagrams (see Figures 5, 6 and 7):

- the class diagram represents the business entities and the user actions to be tested;
- the layered state machine represents the dynamic expected behavior;
- the instance diagram gives some test data and the initial configuration of the application.

Figure 5. Class diagram for the actiTIME test model.

Figure 6. High-level state machine for actiTIME (partial).

Figure 7. Object diagram for actiTIME.

Figure 8 gives an OCL specification describing the Login operation with an invalid user name. Notice how the requirements are linked to the specification using annotations (@@Req: LOGIN). The annotation @@AIM gives more detail about which part of that requirement is being modeled here.

Figure 8. OCL specification for Login (the invalid login case).

**Test generation with Test Designer**

Figure 9 shows the GUI of Test Designer for the project actiTIME. A list of the generated test cases (structured by test suites) is displayed on the left, and the details of one test case are displayed on the right. The details of the requirements and test aims that are covered by a particular test step are shown in the right-hand bottom corner.

Figure 9. Smartesting Test Designer user interface. Project actiTIME.

Figure 10 shows the generated tests published into a test repository (in this case: HP Quality Center). These tests are ready for manual test execution. Each test is fully documented in the Design Steps panel.

Figure 10. Publication of generated tests into the test manager environment (HP Quality Center).

For test automation, complete script code is generated and maintained for each test case (see Figure 11).
The remaining (optional) task for the test automation engineer is to implement each key-word used in the UML test model so that it is defined as a sequence of lower-level SUT actions (a schematic sketch of such a key-word binding is given at the end of this section). If this is done, the generated test scripts can be executed automatically on the SUT. An alternative approach is to leave the key-words undefined, in which case a human tester must execute the scripts manually.

Figure 11. Publication of generated scripts into the test manager environment (HP Quality Center).

To sum up, we deployed a typical MBT solution for IT applications on the actiTIME application, using a subset of UML as input language (class diagrams, state diagrams, instance diagrams, and the OCL specification language), and providing automated test generation and publication features both for manual and automated testing.

**Key factors for success when deploying MBT**

Here we describe the keys to success when deploying an MBT approach and tools. The key factors for effective use of MBT are the choice of the MBT methods that are used, the organization of the team, the qualification of the people involved, and a mature tool chain. With these in place, you may obtain the significant benefits that we discuss at the end of this section.

1. **MBT Methods, requirements and risks**: MBT is built on top of current best practices in functional software testing. It is important that the SUT requirements be clearly defined, so that the test model can be designed from those requirements, and that the product risks be well understood, so that they can be used to drive the MBT test generation.
2. **Organization of the test team**: MBT is a vector for the industrialization of testing, improving effectiveness and productivity. This means that the roles (for example between the test analyst who designs the test model, and the test engineer who implements the adaptation layer) are reinforced.
3. **Team Qualification - test team professionalism**: The qualification of the test team is an important prerequisite. The test analysts, test engineers and testers should be professional, and should have been given appropriate training in MBT techniques, processes and tools.
4. **The MBT tool chain**: This professional, efficient testing team should use an integrated tool chain, including an MBT test generator integrated with the test management environment and the test automation tool.

**Expected benefits**

Model-based testing is an innovative and high-value approach compared to more conventional functional testing approaches. The main expected benefits of MBT may be summarized as follows:

- **Contribution to the quality of functional requirements**:
  - Modeling for test generation is a powerful means for the detection of "holes" in the specification (undefined or ambiguous behavior);
  - The test phase may start earlier and find more flaws in the requirements repository than a "manual" test design approach.
- **Contribution to test generation and testing coverage**:
  - Automated generation of test cases;
  - Systematic coverage of functional behavior;
  - Automated generation and maintenance of the requirement coverage matrix;
  - Continuity of methodology (from requirements analysis to test generation).
- **Contribution to test automation**:
  - Definition of action words (UML model operations) used in different scripts;
  - Test script generation;
  - Generation of the patterns for the automation function library;
  - Independence from the test execution robot.
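To make the notion of an adaptation layer concrete, here is a deliberately minimal C sketch of how a key-word (action word) can be bound to a concrete handler. It is purely illustrative: the key-word names, arguments and handlers are invented for this example and are not code produced by Test Designer, HP Quality Center or any other tool mentioned above.

```c
#include <stdio.h>
#include <string.h>

/* One entry of the adaptation layer: an abstract action word from the test
 * model bound to a concrete handler that drives the system under test. */
struct keyword {
    const char *name;
    int (*run)(const char *arg);        /* returns 0 on pass, non-zero on fail */
};

/* Hypothetical handlers; a real adapter would call the SUT's API or a GUI robot. */
static int do_login(const char *arg)      { printf("login as %s\n", arg); return 0; }
static int do_enter_time(const char *arg) { printf("enter %s hours\n", arg); return 0; }

static const struct keyword adapter[] = {
    { "login",      do_login },
    { "enter_time", do_enter_time },
};

/* Execute one generated test step: look the action word up and run it. */
static int run_step(const char *name, const char *arg)
{
    for (size_t i = 0; i < sizeof adapter / sizeof adapter[0]; i++)
        if (strcmp(adapter[i].name, name) == 0)
            return adapter[i].run(arg);
    fprintf(stderr, "unknown action word: %s\n", name);
    return 1;
}

int main(void)
{
    /* A two-step abstract test case, as the generator might publish it. */
    return run_step("login", "user1") || run_step("enter_time", "8");
}
```

Each key-word is implemented once and reused by every generated script, which is what gives the approach its independence from the test execution robot.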
**Conclusion**

The idea of model-based testing is to use an explicit abstract model of a SUT and its environment to automatically derive tests for the SUT: the behavior of the model of the SUT is interpreted as the intended behavior of the SUT. The technology of automated model-based test case generation has matured to the point where large-scale deployments of this technology are becoming commonplace. The prerequisites for success, such as the qualification of the test team, the availability of an integrated tool chain and suitable methods, are now identified, and a wide range of commercial and open-source tools is available. Although MBT will not solve all testing problems, it is an important and useful technique, which brings significant progress over the state of the practice for functional software testing effectiveness, increasing productivity and improving functional coverage.

**References**

**About the author**

Dr. Bruno Legeard is Chief Technology Officer of Smartesting, a company dedicated to model-based testing technologies, and Professor of Software Engineering at the University of Franche-Comté (France). He started working on model-based testing in the mid-1990s and has extensive experience in applying model-based testing to large information systems, e-transaction applications and embedded software.
Aspects of BPM/SOA: Processes, Use Cases and Concerns

Manuel Imaz, PhD
BlendMind
Madrid, España
imaz@mac.com

Abstract. In this paper we show how BPM/SOA avoids the increasing complexities added by the aspect-oriented programming (AOP) approach, mainly in relation to functional concerns. From the beginnings of object orientation, some difficulties derived from the use cases model have been detected, as they are the root of scattering and tangling. This is the question that AOP addresses, even if it uses its own jargon: concerns in place of use cases. The present analysis of the problem is based on concepts of Cognitive Semantics (CS) that help explain some odd questions, such as the way of presenting the classical UML architecture as '4 + 1' (instead of 5) views. Some CS concepts, such as perspective, focusing and profiling, help to clarify some phenomena that have been analyzed from a very general notion of view, a notion that evidently needs to be refined in order to build more useful ideas about software engineering.

Key words: Concerns, aspects, use cases, AOP, cognitive semantics.

1 Introduction

In an earlier paper [6] we presented the central role of categorization in Software Engineering as an important cognitive process, similar to abstraction. In fact, in Software Engineering we are constantly categorizing different aspects of reality, from the initial stages –requirements elicitation– up to the final ones. During requirements elicitation and development we try to determine the needs –the problem domain– and the functions or features –the solution domain– of the system we are going to implement. There are two useful metaphors that may be quite adequate to conceptualize what happens during requirements development: this is a discovery and invention process, where the needs have to be discovered while the functions or features need to be invented. There is a difference between discovery and invention: "The distinction is clear even in prescientific times: Fire was a discovery; the fireplace was an invention. That fire hardened clay was a discovery; pottery was an invention." [2]

After the requirements stage, we continue with the specification of the system or application. The way we specify software has evolved through the years. As Langacker puts it:

In viewing a scene, what we actually see depends on how closely we examine it, what we choose to look at, which elements we pay most attention to, and where we view it from. The corresponding labels I will use, for broad classes of construal phenomena, are **specificity**, **focusing**, **prominence**, and **perspective**. They apply to conceptions in any domain. [12] p. 66 (bold in the original)

When specifying software it is evident that styles have been changing mainly as a function of the focus, that is, what we choose to look at, and also as a function of the perspective. At first, the focus was put on the procedures, the processes performed on data. This style is known as data flow representation, where the processes are specified as circles or bubbles and data are a means of connecting those bubbles. The next style put the focus on data objects, or simply objects, that travel through the data flows, but included in these objects the specific pieces of processes—called methods—applied to them. This evolution has finally led us to focus on a broader scene, the business processes with activities and data objects to which these activities are applied.
The difference between business processes and data flow diagrams is the way we conceptualize them: in the former we see a spatially distributed network of fine- or medium-grained processes, while in the latter we consider coarse-grained processes or conceptual process packages. In object orientation the packages are the data objects with the pieces of processes that are applied to them. So, in data flow diagrams we consider conceptual packages of processes independently of the spatial situation of such processes in a workflow and of the data objects to which they are applied, while in object orientation the different processes applied to a data object are compressed into a conceptual package. In order to show the difference between business processes and the data flow and object oriented approaches, it is also necessary to use the *perspective* dimension of language, that is, the viewpoint from which we are observing the scene. Both data flow diagrams and object orientation are observed from the inside of the system to be developed, while business processes are observed from the inside of the enterprise or organization. These distinctions are more precisely defined using the concepts of focusing and prominence, described in the next section.

2 Cognitive Semantics

There are two approaches to semantics. The classical one –or realistic– considers that the meaning of an expression is something out there in the world. The semantics of table, for example, is a matching between the word *table* and a real-world object. Cognitive semantics, on the other hand, identifies the meanings of expressions with mental entities. [1] Leonard Talmy states that Cognitive Semantics is the study of the way conceptual content is organized in language. In Talmy's view, a sentence (or other portion of discourse) does not objectively represent its referent scene—it is not something out there in the world—but it evokes in the listener a cognitive representation, defined as an emergent, compounded by various cognitive processes out of the referential meanings of the sentence elements, understanding of the present situation, general knowledge, and so on [16] p. 93, note 2.

Historically, science has tried to be consistent with the need for objectivity by eliminating the subject from the scientific discourse. The same effort has been made by the software engineering community when using a disembodied discourse, but the failure of this intention is unmasked when analyzing in detail some conceptual structures in which the subject surreptitiously reappears: for example, the concept of perspective implies an object and a subject, and the concept of focusing implies that the subject is using his visual capacity (as Langacker defines it: "what we choose to look at" [12]). Another aspect of cognitive semantics is that the conceptual structure is embodied, that is, the nature of the human mind is largely determined by the form of the human body. But the form of the human body must be understood in a broad sense, meaning the human being in an environment, in a given situation –cultural, social, and so on– as some concepts of CS imply. For example, in the previous section we have mentioned the concepts of focusing, perspective and so on. It is evident that a perspective implies a subject observing a scene from a given point of view, that is, a subject in a given situation.
The concept of perspective allows us to make a difference between observing a software system from an internal or an external point of view, and between conceptualizing the internals of the software system or a general viewpoint that encompasses the business processes running in the enterprise. The concept of perspective is represented in Fig. 1.

When focusing on the computer system we need additional concepts in order to use different categories applied to the same system. Besides what we choose to look at –focusing– we need to take into consideration which elements we pay most attention to, or prominence, and in particular one sort of prominence: profiling. Langacker states that:

As the basis for its meaning, an expression selects a certain body of conceptual content. Let us call this its conceptual base. Construed broadly, an expression's conceptual base is identified as its maximal scope in all domains of its matrix (or all domains accessed on a given occasion). Construed more narrowly, its base is identified as the immediate scope in active domains—that is, the portion put "onstage" and foregrounded as the general locus of viewing attention. Within this onstage region, attention is directed to a particular substructure, called the profile. [12] p. 66 (bold in the original)

In our example, one conceptual base is the computer system and the profile may be a process or a data flow –a particular substructure– or, in another profiling, an object. That is, the same conceptual base may be considered in terms of different profiles: data flows and processes, or objects. Both ways of categorizing the computer system are different types of conceptual integrations or blends (which will be considered in the next section). On the other hand, a conceptual base such as a business process may be profiled in terms of tasks, decision points, etc., or may also be profiled as use cases, that is, subsets of the business process in which some actors –users– interact with software components in order to achieve a goal.

3 Metaphors and Blends

Metaphor is a cross-domain mapping –conceptualizing one domain in terms of another– and is central to our thinking process. The first domain –the well known– is called the source domain, while the new one –less known– is the target domain. The usual idea we have of a metaphor is that of a literary figure whereby we say something using a figurative expression. In fact, the figurative expression is the external manifestation of an underlying cognitive process: that is precisely the conceptual metaphor. An important and well-known metaphor –in relation to ontologies– is the conduit metaphor, first analyzed by Reddy [14]. This metaphor reflects quite singularly the objectivist philosophy: the mind contains thoughts, language transmits ideas, human communication achieves the physical transfer of thoughts and feelings, etc., and it is embodied in many expressions which are manifestations of the metaphor: "You have to put each concept into words very carefully", "Try to pack more thoughts into fewer words". Reddy's assertions regarding the underlying cognitive processes are similar to those currently made by cognitive semantics, proposing that texts are instructions to create mental spaces (patterns of thought, in Reddy's terms) which, as any active complex process, will re-create, re-enact meaning. There is another way of conceptualizing both terms of a metaphor (source and target domains), using the concept of mental space.
The concept of mental space refers to partial cognitive structures that emerge when we think and talk, 'allowing a fine-grained partitioning of our discourse and knowledge structures'. [3] Finally, a conceptual integration or blend [4] is an operation that can be applied to a pair of input spaces and gives as a result a blended space or blend. The blend receives a partial structure from both input spaces but has an emergent structure of its own. One important example of a blend is that of imaginary numbers, which first showed up in sixteenth-century formulas. The authors Cardan and Bombelli considered imaginary numbers only as notational expedients, with no conceptual basis (they were called sophistic, imaginary, impossible). This is an interesting example not only because it illustrates how blends are also created in science, sometimes taking many years, but also because its initial status was not ontological at all –instead, it was its practical usefulness that allowed the concept to survive– and it ended up, after an epistemological elaboration, as a very concrete and useful theory in mathematics. Rolando García asserts that in cases like this one, as well as in many others, there is no ontology without an epistemology. [5] The important point is that the intertwined relations between both philosophical disciplines –Ontology and Epistemology– resulted in a new approach called by García Constructivist Epistemology, meaning that we need to think of scientific explanation as ascribing to the empirical relationships –to external reality– the necessary connections which are verified in the logico-mathematical structures of scientific theories. This constructivist approach to epistemology, when applied to IT domains, results in taking as existent what has been built –results or elaborations– in previous stages of the disciplines. For example, the blend built to frame a class –as in UML, with three containers for a name, the attributes, and the operations– is one of the two input mental spaces used to build a new, concrete class, such as the invoice class.

Data-flow diagrams (DFD) are based on a metaphor. Even if one process is also categorized as a container and its structure is determined by another data-flow diagram at a lower level, the main metaphor on which the model is based is THE SYSTEM IS AN INDUSTRIAL PLANT. In such a plant, there is a collection of processes interconnected by pipes or assembly lines. The raw material for one process originates from other processes, external sources, or stores containing by-products of yet other processes. [7] p. 89

The paradigm of object orientation has its own constitutive metaphor: THE SYSTEM IS A SOCIETY OF PEOPLE. Object orientation is full of expressions based on this metaphor. Objects have responsibilities, they collaborate with each other, they have acquaintance with other objects, they communicate, they have a defined behavior, and so on. [7] p. 90

In an invoice object, for example, we have a mental space that corresponds to a frame of three containers: one for a name, another for attributes and a third for operations. Another mental space corresponds to a real-world entity –a piece of paper with data– but there are other mental spaces with activities performed on the invoice. So we incorporate into the blend the actions that some agents will perform on the entity we are modeling. In order to get a complete set of mental spaces, we need to analyze the different stages of the invoice in its whole business story or life cycle.
So, in general, there may be several other mental spaces that provide a source of behavior in terms of operations. As a consequence of creating the blend, there will be –in a class– an emergent structure compared to the input mental spaces. We may see that an invoice class –in contrast to a real-world invoice– will generate objects capable of producing events or sending messages to other objects. This behavior is something that real, inanimate invoices cannot do. [7] p. 95

4 Perspectives and Conceptualization

Restoring the subject to the discourse means making visible some aspects that normally remain hidden. When talking about a software system we may adopt different perspectives that usually are implicit in the language, but making them explicit may suggest interesting questions to us. One point is that each perspective may have different views, as may be seen in the internal perspective. The most frequent perspectives adopted to describe a software system are shown in the following figure (Fig. 1).

**Fig. 1.** Different perspectives: the internal, the external and the scenario perspectives.

When using the concepts defined by Langacker –*specificity*, *focusing*, *prominence* and *perspective*– we must remember that the sense of vision is not a merely passive, photographic one, but a very complex construction, as shown by Francisco Varela [17], p. 332:

*A first group of animals [cats] was allowed to move around normally while harnessed to a yoke; their gross movements were transferred mechanically to a second group of animals conveyed in gondolas. The two groups shared the same visual experience, but the second group was entirely passive. When the animals were released after a few weeks of this treatment, the first group of kittens behaved normally, but those who had been carried around behaved as if they were blind: they bumped into objects and fell over edges. This marvelous study supports the enactive view that objects are not seen by the visual extraction of features, but rather by the visual guidance of action. Similar results have been obtained under various other circumstances and studied even at the single-cell level.* (bold in the original)

So, what is implied in the last resort is an embodied concept, jointly determined by physical perception and bodily actions together with additional cognitive constructions. The sense of vision is frequently used as a metaphorical concept, as when we say 'I see what you mean by that'. It is in this sense that we will use the concepts defined by CS. Each perspective implies its own views, as in the internal perspective, which has been represented in Fig. 1 using two views (process and logical) of the set of views defined in UML. While in the three concepts of perspective, focusing and specificity the visual metaphor is quite direct, the concept of profiling deserves some additional comments. Langacker ([12] pp. 66-67) points out that:

The profile can also be characterized as what the expression is conceived as designating or referring to within its base (its conceptual referent)... In fact, it is quite common that two or more expressions evoke the same conceptual content yet differ in meaning by virtue of profiling different substructures within this common base. For instance, Monday, Tuesday, Wednesday, etc. all evoke as their base the conception of a seven-day cycle constituting a week, within which they profile different segments.
(bold in the original)

As Langacker's example shows, both the structure and the substructures are conceptual constructions, based on framing a conceptual base –the week– in terms of other conceptual units –the days. So, we can choose different conceptual frames to profile the elements that make up a software product. Thus, in the internal perspective we can choose a profiling based on different metaphors, in particular the THE SYSTEM IS AN INDUSTRIAL PLANT metaphor, which works as a frame to see processes and connections among them as well as data stores and even, in some cases, external interactors. These are different views when changing the focusing (what we choose to look at). We may also go from the analysis to the design by varying the specificity (how closely we examine it) and go, for example, into the processes (viewed in the analysis) to find modules (viewed in the design), which are organized in a hierarchical structure.

An alternative profiling of the internal perspective uses the THE SYSTEM IS A SOCIETY OF PEOPLE metaphor, on which some blends are built, in particular classes. The blends are categorized into groups that correspond to different views when changing the focusing: considering the static, structural aspects we get the logical view, while considering the dynamic aspects we get a process view. In this perspective and profiling there are also, when changing the focusing, other views, such as the physical view and the development view. An interesting point in relation to views is that all of them are orthogonal or complementary, that is, none of the views may be translated into another view. The whole view is the addition of all the previous ones: logical, process, and so on. As each view is the result of catching a different partial sight, the whole view is necessarily the addition of all of them.

The external perspective implies observing the system as a whole, and the way of conceptualizing it is by using a name or syntagm. When the software system matches a previously existing activity in a domain, the name used is derived from the domain, as in the examples of invoicing, general ledger, order management or payroll. This way of conceptualizing brings nothing new to the system to be implemented, except the experience provided by the software engineer, and explains the symptoms pointed out by Yourdon ([18], p. 360) in relation to the top-down problem: 'analysis paralysis', the 'six analyst' phenomenon, or the 'arbitrary physical partitioning'. The top-down method results from gradually changing the specificity of a perspective. There is also a very frequent use of figurative or metaphorical names to conceptualize a software system. Names such as broker, bus, framework or virus are usual examples of this way of conceptualizing. The advantage of metaphorical naming is that the source domain contributes a rich set of features that may be translated into the target domain, that is, the system to be implemented.

The scenario perspective includes the possible interactions of users –human and non-human– with the system. This perspective needs its own representation, that is –as usually occurs with scenarios– a dialog, script or description of interactions. As in the scenario we find human users, with intentions, goals or concerns about the system to be implemented, usually these goals or concerns are included in the conceptualization of the scenario.
The sentence *withdraw money* is, at the same time, the description of a scenario and also the goal of the user involved in such a scenario.

5 Some Problems with Use Cases Considered as Object-Oriented Constructs

The way of presenting the '4 + 1' views in UML was already a symptom. According to Kruchten, these views are the description of an architecture, which can be organized around the four views and then illustrated by a few selected use cases, or scenarios, which become a fifth view [10]. This comment shows that there is something heterogeneous between use cases and the other views, even if the author calls all of them views. The question would be: why is the use case view so different from the other views that Kruchten uses a different symbol –an ellipse– in place of a rectangle? As we have seen in the Perspectives and Conceptualization section, use cases belong to the scenario perspective and the four object-oriented views belong to another perspective –the internal perspective– of the software product; that is, the OO views and the use case view –according to Kruchten– correspond to two different perspectives of our definition (with their focus and profiles). And each perspective needs a specific representation, as the focus leads us to perceive different facets of the same product. That explains the heterogeneity between the first four views and the last one, and why the consideration of use cases as object-oriented constructs gives rise to some problems that have required specific solutions.

Jacobson points out in his paper [8] that achieving use case modularity needed two mechanisms: a separation mechanism and a composition one. He focused on the separation mechanism, which allowed most use cases to be kept separate, while leaving aside the composition mechanism. For example, it has been recognized that there are basic use cases, each one being independent of the others. However, some use cases—extension use cases—depend on other, more basic, use cases to work. In terms of object orientation, the solution may be to create subtypes—using the inheritance mechanism—from a base use case, resulting in an extended use case. But this solution does not allow us to modify the base use case; for this we need a new mechanism—an extension—in order to add new functionalities. Using the extend mechanism we get extension use cases, and by iterating the same extension mechanism the use case continues to grow while still keeping most use cases separate all the way down to code and even to executables. But this is not a clear and simple mechanism: even recently it has been shown that the Achilles' heel of use cases is the unclear UML semantics, in particular the definition of the extend relationship. [11]

When dealing with object orientation, we can verify that use cases are realized in multiple classes and, conversely, each class includes portions of multiple use cases. In the jargon of object orientation these characteristics are called scattering and tangling respectively. [8] These characteristics are usually represented as in the example of Fig. 2.

Fig. 2. Scattering and Tangling.

This characteristic is known as crosscutting, meaning that a given concern usually spans layers and tiers of an application. The point is that, at the same time that a crosscutting concern affects the entire application—implying the scattering—it should be centralized (included in a separate module) in one location where possible, in order to help create quality, maintainable software.
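To make scattering and tangling tangible, here is a deliberately tiny sketch in plain C (the function names and the auditing concern are invented for illustration; an aspect-oriented language such as AspectJ would instead weave the concern in automatically):

```c
#include <stdio.h>

/* The crosscutting concern: auditing every business operation. */
static void audit(const char *action) { printf("AUDIT: %s\n", action); }

/* Two use-case realizations; each one has to remember to call audit(),
 * so the concern is scattered across them and tangled with their logic. */
static void withdraw_money(int amount)
{
    audit("withdraw money");
    printf("withdrawing %d\n", amount);
}

static void transfer_money(int amount)
{
    audit("transfer money");
    printf("transferring %d\n", amount);
}

int main(void)
{
    withdraw_money(100);
    transfer_money(50);
    return 0;
}
```

Even though the auditing helper is defined in one place, the calls to it are scattered across every business operation and tangled with their logic; an aspect would state once that auditing applies before each of these operations, so none of them would have to mention it.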
The question is still more complex when the paradigm of object orientation is applied to use cases, as they are not derived from a software product perspective but from an enterprise perspective. In a use case model it is possible to use object-oriented concepts, for example, generalization. At such an abstract level, nothing prevents us from generalizing or specializing use cases the same way we generalize or specialize classes. But the problem arises when we try a use case realization that has to reuse a more abstract use case realization. As Jacobson explains [8]: However, the extension mechanisms provided between use cases didn't make it to collaborations; I simply couldn't make a case for this since we had no mainstream programming language supporting the implementation of extensions as we now will have with AOP. Consequently, it is not possible to separate extension use cases from base use cases in design and implementation. The realization of the extension use case has to be dissolved into the realization of the base use case, and the base use case cannot be oblivious of the extension use case. So we do not have a fully seamless transition from use case modeling to design – realization of extension use cases has to be intermingled with the realizations of base use cases.

The difficulty comes from mixing heterogeneous conceptualizations: use cases and objects. The ideal solution would have been to get separate modules from extension use cases the same way we produce classes and subclasses as separate components. Here we need two kinds of modules: use case modules and component modules. Jacobson states that Aspect Oriented Programming (AOP) has come to the aid of these problems, allowing the creation of a new kind of module: use case modules.

6 Requirements and Concerns

Some definitions of a requirement state that it is a software capability needed by the user to solve a problem or to achieve an objective. An alternate definition refers to capabilities that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed documentation [13]. In other words, the requirements for a system are the descriptions of what the system should do, even if the term is not a well-defined one. As Sommerville [15] points out: The term requirement is not used consistently in the software industry. In some cases, a requirement is simply a high-level, abstract statement of a service that a system should provide or a constraint on a system. At the other extreme, it is a detailed, formal definition of a system function.

There are other concepts related to requirements, such as needs and features, aimed at refining the sometimes high-level, abstract statements. The users are in a given environment and have business or technical problems that they need the software engineer's help to solve; these are the user's needs. In addition, we may consider a feature as a service provided by the system that fulfills one or more stakeholder needs [13]. On the other hand, the AOP community uses the new concept of concern. Among the definitions of concern found in dictionaries we can point out these two: "To engage the attention of" and "Regard for or interest in someone or something". Both definitions are related to the CS definition of focusing, which we have been using in addition to perspective and profiling. We talk about user or stakeholder concerns referring to what can be stated as requirements, functional and non-functional.
The concept of concern is closely related to human intentions, goals or objectives. But the AOP community has qualified the term as crosscutting concern: programming languages decompose concerns into separate, independent entities by providing abstractions (e.g., classes, modules or procedures) that can be used for implementing these concerns, but some concerns defy these forms of implementation and are called crosscutting concerns because they "cut across" multiple abstractions in a program or component.

In 1986 Ivar Jacobson first formulated the concept of use cases—originally called usage scenarios and usage case—as a textual, structural and visual modeling technique. Interestingly, the term usage scenarios points to the idea of perspective—the scenario perspective—and the associated cognitive framework. It was a great idea to include a business perspective—which includes users and their intentions and objectives, their concerns—in addition to the software system perspective exposed in the object-oriented constructs. What the previous paradigm—structured analysis and design—lacked was precisely the business perspective—with systems and users—to specify requirements, and this old approach implied directly specifying an analysis view from a set of more or less well-constructed statements. The success of use cases may be attributed to the chosen perspective, as the visualization of a business process is something that may be perceived more directly (recognizing that perception is not a passive sense but one that implies continuous reorganizations and incremental capacities) when observing the enterprise. The advantage of use cases compared to requirements expressed as high-level abstract statements is a more concrete profiling, as we visualize the use of a particular function or service of the application as interactions between some users and the software itself.

7 Aspects and Use Cases

In AOP the goal is to reorganize the source code in order to recompose the concerns as modules. This is fundamentally a programming task using metalanguages that allow the encapsulation of fragments of code that are distributed among various components. The composition resulting from this reorganization is called an aspect. The reason for building aspects is a better understanding and maintenance of the software application, as each concern may be matched to a module. The problem is explained by Jacobson [8]: We tried to specify and design them as separate units, however, when implementing the use cases, they were integrated to a mass from which it was impossible to identify which use case was being implemented by which piece of code. Or, in other words, the use cases were dissolved into the code, and distilling them from the code was far from easy. (bolds in the original)

It is curious to observe that in AOP the original concerns—the requirements—are known as early concerns. This conceptualization is the result, evidently, of a given perspective: in this case, the construction phase. AOP focuses its interest on the code reorganization task, where concerns are recomposed into modules—the aspects—and so the elicitation phase is an early one. Jacobson, in a book that extends his ideas about AOP [9], explains that: *It is well known that aspect orientation helps modularize crosscutting concerns during implementation, but there is a need to modularize crosscutting concerns much earlier, even during requirements. Use-cases are an excellent technique for this purpose.
Use-cases are crosscutting concerns, since the realization of use cases touches several classes. In fact, you can model most crosscutting concerns with use-cases, and we demonstrate use-case modeling in the book.*

The emergence of BPM/SOA leads us to compare business processes and use cases. When defining a business process as a set of related, structured activities or tasks that produce a specific service—satisfy a particular goal—for a particular customer, we verify that the definition may be equally applied to use cases. A use case is usually defined as a list of steps, typically defining interactions between a role and a system, to achieve a goal. This similarity allows us to state that a use case may be an activity, a subprocess or the business process itself. The cross-cutting phenomenon is the result of having to translate the use cases into constructs of a different perspective, that is, into object-oriented constructs such as classes and components. The ideal solution would be to have the possibility of directly executing the set of interactions that make up the use cases. When representing business processes with an appropriate language, it is possible to directly run this representation. This way, we maintain the early concern—the use case—as a module without the need for a composition mechanism such as the one addressed by AOP.

Something equivalent to the phenomenon of cross-cutting also appears in a linear narrative, whether technical or not. In a narrative about a given subject there is a linear discourse that has many references to other subjects. The traditional solution in printed articles or books has been the use of a set of mechanisms such as footnote calls, references, and cross-references to other sections of text. In a biography (the main concern), the linear narration of the life, for example, of an important computer science personality is cross-cut by multiple areas of interest: childhood and youth, university and work on computability, cryptanalysis and so on. The idea of hypertext has enabled the possibility of showing the main text –the concern– with other areas of interest traversing it. In the case of a biography the text is usually embedded –in a mobile device, for example– with a number of icons to insert the text corresponding to the multiple specific areas (childhood and youth, cryptanalysis, etc.). Each insertion corresponds to a new level of specificity. We may have the global picture –the whole concern– and gradually insert different cross-cut areas of interest. This form of presenting the biography allows good maintainability of the whole concern and of the specific areas of interest that cross-cut it. This hypertext mechanism would also allow similarly easy maintainability of software, provided that import/export features were included in the programming languages, so that the specific components (for example, attributes and methods of classes, or components of composite components) that realize the main concern (the use case) could be imported and included in the main text.

8 BPM/SOA and Concerns

In Fig. 2 we have a representation of a group of use cases, which are realized as collaborations and, finally, those collaborations are realized as a set of components. The usefulness of use cases is due to their way of representing concerns (requirements). In terms of AOP, we can represent early concerns as use cases and finally –at implementation time– represent the same concerns as aspects.
But the advantages of use cases as concern representations disappear when they are scattered into groups of components. It is hard work to represent the concerns, then lose them in translation, and finally recompose them at implementation time. There is no ideal solution to the problem, but representing concerns as business process diagrams is a great advantage over the classical representation of use cases. Business process diagrams, when created with a Business Process Management (BPM) tool and an adequate notation such as BPMN 2.0, do not vanish in translation as use cases do, but remain intact and are executable as such until a new version is created. At this point it is important to distinguish between BPM solutions that, like the classical code generation tools, generate all the code necessary to execute the process, and the BPM/SOA architecture, where the process activities are associated with services. The services are implemented with pieces of software derived from legacy systems or with software built specifically for this purpose. On the other hand, there are also SOC solutions, that is, Service-Oriented Computing. The point is that this approach aims at implementing distributed applications based on the interactions of services, as an assembly of services that enhances the reuse of components. But SOC is not based on business processes, and the concerns must be treated as AOP proposes in order to obtain aspects.

A use case may be a task, a subprocess or a whole business process. The aim was to indicate the usefulness of use cases as a result of adopting a different perspective. But business processes are better understood by users, and they persist in the same representation, the BPM language (and are portable to other platforms, for example), through all the stages of development. The question of scattering remains, however. The difference is that each task in the business process may be implemented as a service, that is, a component, and then it is not necessary to recompose the concern in order to ensure good understandability and maintenance. The service may be implemented as a component, and the component may be composed, in turn, as a collaboration of other components, for example classes. In this case, the service may be allocated to different components, each one containing code fragments of other services. There is a difference in granularity between concerns –which may cover a whole business process– and services, and so the complexity of the underlying components is also decreased. Some solutions have been suggested, such as partial classes, in order to spread classes over separate files, matching each file to a different service, for example. The management of services greatly simplifies the maintenance of the whole business process, which is no longer implemented as a single block of software.

9 Conclusions

We have seen that, in relation to the scattering and tangling phenomena, the heterogeneity of the four object-oriented views and the use case view –as belonging to different perspectives– and their representations is the cause of the cross-cutting phenomenon and of the solutions proposed by AOP. This is the main reason why use cases crosscut the other representations (classes, components and so on): use cases must be translated into other representations, belonging to a different perspective.
Different representations in the same perspective do not crosscut, as the practice of UML confirms, because they are complementary: we depict the structure of objects in the logical view and afterwards their behavior in the process view. Or, after depicting the components and their behavior, we can show where they will be executed in the physical view. The representations of the same perspective are additive: the whole view –what has been called the architecture– is the integration of all representations.

As a way of avoiding the increasing complexity of developing software systems using AOP, the BPM/SOA approach greatly simplifies the development, as it eliminates the need for creating aspects from the functional concerns. The concerns are directly represented as business process diagrams that remain –in contrast to use cases– throughout the entire process of development and eventually are executed. The business process representation has languages and tools that simplify the maintenance and the visibility of processes, with the ability to see the components of activities and even the execution of processes and activities. The question of non-functional concerns (such as security) remains, but in a service-oriented approach these concerns may be encapsulated in specific services with which the business services will interact. The separation of concerns is realized as a set of independent, loosely coupled services, greatly decreasing –or even eliminating– the need to use AOP and increasing reusability, because they are reusable business services that comprise people, processes, and systems, and not merely technical ones.

Acknowledgements

The author is grateful to Mauricio Milchberg for his revision of the manuscript and valuable comments.

References
Installing and configuring Apache Kafka Date of Publish: 2018-08-13 Contents Installing Kafka Prerequisites Installing Kafka Using Ambari Configuring Kafka for a Production Environment Preparing the Environment Operating System Settings File System Selection Disk Drive Considerations Java Version Ethernet Bandwidth Customizing Kafka Settings on an Ambari-Managed Cluster Kafka Broker Settings Connection Settings Topic Settings Log Settings Compaction Settings General Broker Settings Kafka Producer Settings Important Producer Settings Kafka Consumer Settings Configuring ZooKeeper for Use with Kafka Enabling Audit to HDFS for a Secure Cluster Installing Kafka Although you can install Kafka on a cluster not managed by Ambari, this chapter describes how to install Kafka on an Ambari-managed cluster. Prerequisites Before installing Kafka, ZooKeeper must be installed and running on your cluster. Note that the following underlying file systems are supported for use with Kafka: - EXT4: supported and recommended - EXT3: supported Caution: Encrypted file systems such as SafenetFS are not supported for Kafka. Index file corruption can occur. Installing Kafka Using Ambari After Kafka is deployed and running, validate the installation. You can use the command-line interface to create a Kafka topic, send test messages, and consume the messages. Procedure 1. Click the Ambari "Services" tab. 2. In the Ambari "Actions" menu, select "Add Service." This starts the Add Service wizard, displaying the Choose Services page. Some of the services are enabled by default. 3. Scroll through the alphabetic list of components on the Choose Services page, and select "Kafka". 4. Click **Next** to continue. 5. On the Assign Masters page, review the node assignments for Kafka nodes. The following screen shows node assignment for a single-node Kafka cluster: 6. If you want Kafka to run with high availability, you must assign more than one node for Kafka brokers, resulting in Kafka brokers running on multiple nodes. Click the "+" symbol to add more broker nodes to the cluster: The following screen shows node assignment for a multi-node Kafka cluster: 7. Click **Next** to continue. 8. On the **Assign Slaves and Clients** page, choose the nodes that you want to run ZooKeeper clients: ![Assign Slaves and Clients](image) 9. Click **Next** to continue. 10. Ambari displays the **Customize Services** page, which lists a series of services: ![Customize Services](image) For your initial configuration you should use the default values set by Ambari. If Ambari prompts you with the message "Some configurations need your attention before you can proceed," review the list of properties and provide the required information. For information about optional settings that are useful in production environments, see Configuring Apache Kafka for a Production Environment. 11. Click **Next** to continue. 12. When the wizard displays the **Review** page, ensure that all HDP components correspond to HDP 2.5 or later: 13. Click **Deploy** to begin installation. 14. Ambari displays the Install, Start and Test page. Monitor the status bar and messages for progress updates: 15. When the wizard presents a summary of results, click "Complete" to finish installing Kafka: What to do next After Kafka is deployed and running, validate the installation. You can use the command-line interface to create a Kafka topic, send test messages, and consume the messages. For more information, see Validate Kafka in the Non-Ambari Cluster Installation Guide. 
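As a sketch of the validation step mentioned above, the commands below create a test topic, send a few messages, and read them back from the command line. The installation path, the host names, the ZooKeeper port 2181, and the broker port 6667 (the HDP default) are assumptions for illustration; adjust them for your cluster and refer to the Validate Kafka procedure for the authoritative steps.

```
# Assumed paths, hosts, and ports -- adjust for your environment.
cd /usr/hdp/current/kafka-broker/bin

# Create a test topic (ZooKeeper-based topic creation, as used by Kafka 0.10.x)
./kafka-topics.sh --create --zookeeper zk-host:2181 \
  --replication-factor 1 --partitions 1 --topic kafka-smoke-test

# Send a couple of test messages (type them on stdin, then press Ctrl-C to exit)
./kafka-console-producer.sh --broker-list broker-host:6667 --topic kafka-smoke-test

# Read the messages back from the beginning of the topic
./kafka-console-consumer.sh --zookeeper zk-host:2181 \
  --topic kafka-smoke-test --from-beginning
```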
Configuring Kafka for a Production Environment

This chapter covers topics related to Kafka configuration, including:
- Preparing the environment
- Customizing settings for brokers, producers, and consumers
- Configuring ZooKeeper for use with Kafka
- Enabling audit to HDFS when running Kafka on a secure cluster

Preparing the Environment

The following factors can affect Kafka performance:
- Operating system settings
- File system selection
- Disk drive configuration
- Java version
- Ethernet bandwidth

Operating System Settings

Consider the following when configuring Kafka:
- Kafka uses page cache memory as a buffer for active writers and readers, so after you specify JVM size (using -Xmx and -Xms Java options), leave the remaining RAM available to the operating system for page caching.
- Kafka needs open file descriptors for files and network connections. You should set the file descriptor limit to at least 128000.
- You can increase the maximum socket buffer size to enable high-performance data transfer.

File System Selection

Kafka uses regular Linux disk files for storage. We recommend using the EXT4 or XFS file system. Improvements to the XFS file system have shown improved performance characteristics for Kafka workloads without compromising stability.

Caution:
- Do not use mounted shared drives or any network file systems with Kafka, due to the risk of index failures and (in the case of network file systems) issues related to the use of MemoryMapped files to store the offset index.
- Encrypted file systems such as SafenetFS are not supported for Kafka. Index file corruption can occur.

Disk Drive Considerations

For throughput, we recommend dedicating multiple drives to Kafka data. More drives typically perform better with Kafka than fewer. Do not share these Kafka drives with any other application or use them for Kafka application logs. You can configure multiple drives by specifying a comma-separated list of directories for the log.dirs property in the server.properties file. Kafka uses a round-robin approach to assign partitions to the directories specified in log.dirs; the default value is /tmp/kafka-logs. The num.io.threads property should be set to a value equal to or greater than the number of disks dedicated to Kafka. Recommendation: start by setting this property equal to the number of disks. Depending on how you configure flush behavior (see "Log Flush Management"), a faster disk drive is beneficial if the log.flush.interval.messages property is set to flush the log file after every 100,000 messages (approximately). Kafka performs best when data access loads are balanced among partitions, leading to balanced loads across disk drives. In addition, data distribution across disks is important. If one disk becomes full while other disks have available space, this can cause performance issues. To avoid slowdowns or interruptions to Kafka services, you should create usage alerts that notify you when available disk space is low. RAID can potentially improve load balancing among the disks, but RAID can cause a performance bottleneck due to slower writes. In addition, it reduces available disk space. Although RAID can tolerate disk failures, rebuilding a RAID array is I/O-intensive and effectively disables the server. Therefore, RAID does not provide substantial improvements in availability.

Java Version

With Apache Kafka on HDP 2.5, you should use the latest update for Java version 1.8 and make sure that G1 garbage collection support is enabled.
(G1 support is enabled by default in recent versions of Java.) If you prefer to use Java 1.7, make sure that you use update u51 or later. Here are several recommended settings for the JVM:

```
-Xmx6g -Xms6g -XX:MetaspaceSize=96m -XX:+UseG1GC -XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35 -XX:G1HeapRegionSize=16M
-XX:MinMetaspaceFreeRatio=50 -XX:MaxMetaspaceFreeRatio=80
```

To set the JVM heap size for the Kafka broker, export KAFKA_HEAP_OPTS; for example:

```
export KAFKA_HEAP_OPTS="-Xmx2g -Xms2g"
./kafka-server-start.sh
```

Ethernet Bandwidth

Ethernet bandwidth can have an impact on Kafka performance; make sure it is sufficient for your throughput requirements.

Customizing Kafka Settings on an Ambari-Managed Cluster

To customize configuration settings during the Ambari installation process, click the "Kafka" tab on the Customize Services page. If you want to access configuration settings after installing Kafka using Ambari:
1. Click Kafka on the Ambari dashboard.
2. Choose Configs.

To view and modify settings, either scroll through categories and expand a category (such as "Kafka Broker", as shown in the graphic), or use the "Filter" box to search for a property. Settings in the Advanced kafka-env category are configured by Ambari; you should not modify these settings:

```
#!/bin/bash

# Set KAFKA specific environment variables here.

# The java implementation to use.
export JAVA_HOME={{java64_home}}
export PATH=$PATH:$JAVA_HOME/bin
export PID_DIR={{kafka_pid_dir}}
export LOG_DIR={{kafka_log_dir}}
export KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}

# Add kafka sink to classpath and related dependencies
if [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then
  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar
fi

if [ -e "/etc/kafka/conf/kafka-ranger-env.sh" ]; then
  . /etc/kafka/conf/kafka-ranger-env.sh
fi
```

To add configuration properties that are not listed by default in Ambari, navigate to the Custom kafka-broker category.

Kafka Broker Settings

The following subsections describe configuration settings that influence the performance of Kafka brokers.

Connection Settings

Review the following connection settings in the Advanced kafka-broker category, and modify as needed:

**zookeeper.session.timeout.ms** Specifies the ZooKeeper session timeout, in milliseconds. The default value is 30000 ms. If the server fails to signal a heartbeat to ZooKeeper within this period of time, the server is considered to be dead. If you set this value too low, the server might be falsely considered dead; if you set it too high, it may take too long to recognize a truly dead server. If you see frequent disconnections from the ZooKeeper server, review this setting. If long garbage collection pauses cause Kafka to lose its ZooKeeper session, you might need to configure longer timeout values.

**advertised.listeners** If you have manually set listeners to advertised.listeners=PLAINTEXT://$HOSTNAME:$PORT, then after enabling Kerberos, change the listener configuration to advertised.listeners=SASL_PLAINTEXT://$HOSTNAME:$PORT.

**Important:** Do not change the following connection settings:

**zookeeper.connect** A comma-separated list of ZooKeeper hostname:port pairs. Ambari sets this value. Do not change this setting.

**Topic Settings**

For each topic, Kafka maintains a structured commit log with one or more partitions. These topic partitions form the basic unit of parallelism in Kafka.
In general, the more partitions there are in a Kafka cluster, the more parallel consumers can be added, resulting in higher throughput. You can calculate the number of partitions based on your throughput requirements. If throughput from a producer to a single partition is $P$ and throughput from a single partition to a consumer is $C$, and if your target throughput is $T$, the minimum number of required partitions is

$$\max\left(\frac{T}{P}, \frac{T}{C}\right).$$

Note also that more partitions can increase latency:
- End-to-end latency in Kafka is defined as the difference in time from when a message is published by the producer to when the message is read by the consumer.
- Kafka only exposes a message to a consumer after it has been committed, that is, after the message is replicated to all in-sync replicas.
- Replication of one thousand partitions from one broker to another can take up to 20 ms. This is too long for some real-time applications.
- In the new Kafka producer, messages are accumulated on the producer side; producers buffer the messages per partition. This approach allows users to set an upper bound on the amount of memory used for buffering incoming messages. After enough data is accumulated or enough time has passed, accumulated messages are removed and sent to the broker. If you define more partitions, messages are accumulated for more partitions on the producer side.
- Similarly, the consumer fetches batches of messages per partition. Consumer memory requirements are proportional to the number of partitions that the consumer subscribes to.

**Important Topic Properties**

Review the following settings in the Advanced kafka-broker category, and modify as needed:

**auto.create.topics.enable** Enables automatic creation of topics on the server. If this property is set to true, then attempts to produce, consume, or fetch metadata for a nonexistent topic automatically create the topic with the default replication factor and number of partitions. The default is enabled.

**default.replication.factor** Specifies the default replication factor for automatically created topics. For high-availability production systems, you should set this value to at least 3.

**num.partitions** Specifies the default number of log partitions per topic, for automatically created topics. The default value is 1. Change this setting based on the requirements related to your topic and partition design.

**delete.topic.enable** Allows users to delete a topic from Kafka using the admin tool, for Kafka versions 0.9 and later. Deleting a topic through the admin tool will have no effect if this setting is turned off. By default this feature is turned off (set to false).

Log Settings

Review the following settings in the Kafka Broker category, and modify as needed:

**log.roll.hours** The maximum time, in hours, before a new log segment is rolled out. The default value is 168 hours (seven days). This setting controls the period of time after which Kafka will force the log to roll, even if the segment file is not full. This ensures that the retention process is able to delete or compact old data.

**log.retention.hours** The number of hours to keep a log file before deleting it. The default value is 168 hours (seven days). When setting this value, take into account your disk space and how long you would like messages to be available. An active consumer can read quickly and deliver messages to their destination. The higher the retention setting, the longer the data will be preserved.
Higher settings generate larger log files, so increasing this setting might reduce your overall storage capacity. **log.dirs** A comma-separated list of directories in which log data is kept. If you have multiple disks, list all directories under each disk. Review the following setting in the Advanced kafka-broker category, and modify as needed: **log.retention.bytes** The amount of data to retain in the log for each topic partition. By default, log size is unlimited. Note that this is the limit for each partition, so multiply this value by the number of partitions to calculate the total data retained for the topic. If log.retention.hours and log.retention.bytes are both set, Kafka deletes a segment when either limit is exceeded. **log.segment.bytes** The log for a topic partition is stored as a directory of segment files. This setting controls the maximum size of a segment file before a new segment is rolled over in the log. The default is 1 GB. Log Flush Management Kafka writes topic messages to a log file immediately upon receipt, but the data is initially buffered in page cache. A log flush forces Kafka to flush topic messages from page cache, writing the messages to disk. We recommend using the default flush settings, which rely on background flushes done by Linux and Kafka. Default settings provide high throughput and low latency, and they guarantee recovery through the use of replication. If you decide to specify your own flush settings, you can force a flush after a period of time, or after a specified number of messages, or both (whichever limit is reached first). You can set property values globally and override them on a per-topic basis. There are several important considerations related to log file flushing: - Durability: unflushed data is at greater risk of loss in the event of a crash. A failed broker can recover topic partitions from its replicas, but if a follower does not issue a fetch request or consume from the leader’s log-end offset within the time specified by `replica.lag.time.max.ms` (which defaults to 10 seconds), the leader removes the follower from the in-sync replica ("ISR"). When this happens there is a slight chance of message loss if you do not explicitly set `log.flush.interval.messages`. If the leader broker fails and the follower is not caught up with the leader, the follower can still be under ISR for those 10 seconds and messages during leader transition to follower can be lost. - Increased latency: data is not available to consumers until it is flushed (the `fsync` implementation in most Linux filesystems blocks writes to the file system). - Throughput: a flush operation is typically an expensive operation. - Disk usage patterns are less efficient. - Page-level locking in background flushing is much more granular. `log.flush.interval.messages` specifies the number of messages to accumulate on a log partition before Kafka forces a flush of data to disk. `log.flush.scheduler.interval.ms` specifies the amount of time (in milliseconds) after which Kafka checks to see if a log needs to be flushed to disk. `log.segment.bytes` specifies the size of the log file. Kafka flushes the log file to disk whenever a log file reaches its maximum size. `log.roll.hours` specifies the maximum length of time before a new log segment is rolled out (in hours); this value is secondary to `log.roll.ms`. Kafka flushes the log file to disk whenever a log file reaches this time limit. 
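As an illustration of the disk, retention, segment, and flush settings discussed above, the following server.properties fragment is a hedged sketch: the directory paths and the numeric values are assumptions chosen for a hypothetical broker with six dedicated data disks, not recommendations from this guide.

```
# Hypothetical example values -- tune them for your own hardware and workload.

# One log directory per dedicated data disk (six disks in this example)
log.dirs=/data01/kafka-logs,/data02/kafka-logs,/data03/kafka-logs,/data04/kafka-logs,/data05/kafka-logs,/data06/kafka-logs

# At least as many I/O threads as data disks
num.io.threads=6

# Retention: a segment is deleted when either limit is exceeded
log.retention.hours=168
log.retention.bytes=107374182400

# Roll a new 1 GB segment at most every seven days
log.segment.bytes=1073741824
log.roll.hours=168

# Optional explicit flush thresholds; the defaults rely on replication
# and on background flushes performed by the operating system
#log.flush.interval.messages=100000
#log.flush.scheduler.interval.ms=2000
```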
### Compaction Settings Review the following settings in the Advanced kafka-broker category, and modify as needed: - **log.cleaner.dedupe.buffer.size**: Specifies total memory used for log deduplication across all cleaner threads. By default, 128 MB of buffer is allocated. You may want to review this and other log.cleaner configuration values, and adjust settings based on your use of compacted topics (`__consumer_offsets` and other compacted topics). - **log.cleaner.io.buffer.size**: Specifies the total memory used for log cleaner I/O buffers across all cleaner threads. By default, 512 KB of buffer is allocated. You may want to review this and other log.cleaner configuration values, and adjust settings based on your usage of compacted topics (`__consumer_offsets` and other compacted topics). ### General Broker Settings Review the following settings in the Advanced kafka-broker category, and modify as needed: - **auto.leader.rebalance.enable**: Enables automatic leader balancing. A background thread checks and triggers leader balancing (if needed) at regular intervals. The default is enabled. - **unclean.leader.election.enable**: This property allows you to specify a preference of availability or durability. This is an important setting: If availability is more important than avoiding data loss, ensure that this property is set to true. If preventing data loss is more important than availability, set this property to false. This setting operates as follows: • If `unclean.leader.election.enable` is set to `true` (enabled), an out-of-sync replica will be elected as leader when there is no live in-sync replica (ISR). This preserves the availability of the partition, but there is a chance of data loss. • If `unclean.leader.election.enable` is set to `false` and there are no live in-sync replicas, Kafka returns an error and the partition will be unavailable. This property is set to `true` by default, which favors availability. If durability is preferable to availability, set `unclean.leader.election` to `false`. **controlled.shutdown.enable** Enables controlled shutdown of the server. The default is enabled. **min.insync.replicas** When a producer sets `acks` to "all", `min.insync.replicas` specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception. When used together, `min.insync.replicas` and `producer.acks` allow you to enforce stronger durability guarantees. You should set `min.insync.replicas` to `2` for replication factor equal to `3`. **message.max.bytes** Specifies the maximum size of message that the server can receive. It is important that this property be set with consideration for the maximum fetch size used by your consumers, or a producer could publish messages too large for consumers to consume. Note that there are currently two versions of consumer and producer APIs. The value of `message.max.bytes` must be smaller than the `max.partition.fetch.bytes` setting in the new consumer, or smaller than the `fetch.message.max.bytes` setting in the old consumer. In addition, the value must be smaller than `replica.fetch.max.bytes`. **replica.fetch.max.bytes** Specifies the number of bytes of messages to attempt to fetch. This value must be larger than `message.max.bytes`. **broker.rack** The rack awareness feature distributes replicas of a partition across different racks. 
You can specify that a broker belongs to a particular rack through the "Custom kafka-broker" menu option. For more information about the rack awareness feature, see [http://kafka.apache.org/documentation.html#basic_ops_racks](http://kafka.apache.org/documentation.html#basic_ops_racks). Kafka Producer Settings If performance is important and you have not yet upgraded to the new Kafka producer (client version 0.9.0.1 or later), consider doing so. The new producer is generally faster and more fully featured than the previous client. To use the new producer client, add the associated maven dependency on the client jar; for example: ```xml <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>0.9.0.0</version> </dependency> ``` For more information, see the KafkaProducer javadoc. The following subsections describe several types of configuration settings that influence the performance of Kafka producers. Important Producer Settings The lifecycle of a request from producer to broker involves several configuration settings: 1. The producer polls for a batch of messages from the batch queue, one batch per partition. A batch is ready when one of the following is true: - batch.size is reached. Note: Larger batches typically have better compression ratios and higher throughput, but they have higher latency. - linger.ms (time-based batching threshold) is reached. Note: There is no simple guideline for setting linger.ms values; you should test settings on specific use cases. For small events (100 bytes or less), this setting does not appear to have much impact. - Another batch to the same broker is ready. - The producer calls flush() or close(). 2. The producer groups the batch based on the leader broker. 3. The producer sends the grouped batch to the broker. The following paragraphs list additional settings related to the request lifecycle: **max.in.flight.requests.per.connection (pipelining)** The maximum number of unacknowledged requests the client will send on a single connection before blocking. If this setting is greater than 1, pipelining is used when the producer sends the grouped batch to the broker. This improves throughput, but if there are failed sends there is a risk of out-of-order delivery due to retries (if retries are enabled). Note also that excessive pipelining reduces throughput. **compression.type** Compression is an important part of a producer’s work, and the speed of different compression types differs a lot. To specify compression type, use the compression.type property. It accepts standard compression codecs ('gzip', 'snappy', 'lz4'), as well as 'uncompressed' (the default, equivalent to no compression), and 'producer' (uses the compression codec set by the producer). Compression is handled by the user thread. If compression is slow it can help to add more threads. In addition, batching efficiency impacts the compression ratio: more batching leads to more efficient compression. The acks setting specifies acknowledgments that the producer requires the leader to receive before considering a request complete. This setting defines the durability level for the producer. <table> <thead> <tr> <th>Acks</th> <th>Throughput</th> <th>Latency</th> <th>Durability</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>High</td> <td>Low</td> <td>No Guarantee. 
The producer does not wait for acknowledgment from the server.</td> </tr> <tr> <td>1</td> <td>Medium</td> <td>Medium</td> <td>Leader writes the record to its local log, and responds without awaiting full acknowledgment from all followers.</td> </tr> <tr> <td>-1</td> <td>Low</td> <td>High</td> <td>Leader waits for the full set of in-sync replicas (ISRs) to acknowledge the record. This guarantees that the record is not lost as long as at least one ISR is active.</td> </tr> </tbody> </table>

The new Producer API supports an optional flush() call, which makes all buffered records immediately available to send (even if linger.ms is greater than 0). When using flush(), the number of bytes between two flush() calls is an important factor for performance.
- In microbenchmarking tests, a setting of approximately 4 MB performed well for events 1 KB in size.
- A general guideline is to set batch.size equal to the total bytes between flush() calls divided by the number of partitions: \[(\text{total bytes between flush() calls}) / (\text{partition count})\]

Additional Considerations

A producer thread going to the same partition is faster than a producer thread that sends messages to multiple partitions. If a producer reaches maximum throughput but there is spare CPU and network capacity on the server, additional producer processes can increase overall throughput. Performance is sensitive to event size: larger events are more likely to have better throughput. In microbenchmarking tests, 1 KB events streamed faster than 100-byte events.

Kafka Consumer Settings

You can usually obtain good performance from consumers without tuning configuration settings. In microbenchmarking tests, consumer performance was not as sensitive to event size or batch size as producer performance was. Both 1 KB and 100-byte events showed similar throughput. One basic guideline for consumer performance is to keep the number of consumer threads equal to the partition count.

Configuring ZooKeeper for Use with Kafka

Here are several recommendations for ZooKeeper configuration with Kafka:
- Do not run ZooKeeper on a server where Kafka is running.
- When using ZooKeeper with Kafka you should dedicate ZooKeeper to Kafka, and not use ZooKeeper for any other components.
- Make sure you allocate sufficient JVM memory. A good starting point is 4 GB.
- To monitor the ZooKeeper instance, use JMX metrics.

Configuring ZooKeeper for Multiple Applications

If you plan to use the same ZooKeeper cluster for different applications (such as Kafka cluster1, Kafka cluster2, and HBase), you should add a chroot path so that all Kafka data for a cluster appears under a specific path. The following example shows a sample chroot path: c6401.ambari.apache.org:2181:/kafka-root, c6402.ambari.apache.org:2181:/kafka-root You must create this chroot path yourself before starting the broker, and consumers must use the same connection string.

Enabling Audit to HDFS for a Secure Cluster

To enable audit to HDFS when running Kafka on a secure cluster, perform the steps listed at the bottom of Manually Updating Ambari HDFS Audit Settings in the HDP Security Guide.
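Tying together the producer settings described earlier in this chapter, the following minimal Java sketch uses the new producer API from the kafka-clients library. The broker host and port, the topic name, and the specific batch.size, linger.ms, and compression.type values are assumptions chosen for illustration, not recommendations from this guide.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed broker host and port (6667 is the HDP default broker port)
        props.put("bootstrap.servers", "broker-host:6667");
        // Durability: wait for all in-sync replicas; pairs with the broker-side
        // recommendation of min.insync.replicas=2 for a replication factor of 3
        props.put("acks", "all");
        // Batching and compression settings discussed above (example values only)
        props.put("batch.size", "65536");
        props.put("linger.ms", "5");
        props.put("compression.type", "snappy");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        try {
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<>("kafka-smoke-test", Integer.toString(i), "message-" + i));
            }
            // flush() makes all buffered records immediately available to send
            producer.flush();
        } finally {
            producer.close();
        }
    }
}
```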
Application of Agile Methodologies for Member and Team Role Transformation in Projects

By Subhashish Sengupta, PMP, ITC Infotech; Dr. Debasish Sengupta, Alliance University; and Prof. Ray Titus, Alliance University, Bangalore, India

Abstract

This paper explores how the Agile methodology impacts all the process groups of Project Management, such that the role of the Project Manager and the role of the team are transformed compared with a non-Agile environment. Fundamentally, the role of the project manager shifts from a controlling and directing approach to a facilitation approach in an Agile environment. The role of the team changes more in terms of mindset, from individual accountability to a more mutual-accountability perspective. In short, the Agile methodology focuses more on the team and not on the individuals, as a non-Agile set-up does.

Keywords: Project, Project Management, Agile Methodology, Waterfall Methodology

Introduction

A project is a 'temporary endeavour' undertaken to create a unique product or service. 'Temporary' denotes that all projects are time-bound and hence have a start and finish date; 'Unique' denotes that the product or service developed as a result of the project is distinguishable from other products or services. A project is different from an operation. Although projects and operations share some similarities (both consist of activities, both are limited by resources, and both need to be planned, executed and controlled), operations are continuing and repetitive, whereas projects are temporary and unique (Choudhuri). Projects also have a third characteristic besides being 'temporary' and 'unique': progressive elaboration. 'Project management is a group of interrelated processes, implemented in a progressively elaborative manner, in which to produce the deliverable' (USBR). "A project is a one-shot, time-limited, goal-directed, major undertaking, requiring the commitment of varied skills and resources" (BEE). A project has the following attributes (Baume & P.Martin, 2002):
• has a clear purpose that can be achieved in a limited time;
• has a clear end when the outcome has been achieved;
• is resourced to achieve specific outcomes;
• has someone acting as sponsor who expects the outcomes to be delivered on time; and
• is a one-off activity that would not normally be repeated.

The literature on project strategy has viewed projects from three different tracks (Artto, Kujala, Dietrich, & Martinsuo, 2008):
1. In the first track, projects are seen more as subordinates of the parent organization, and the project strategy is derived from the larger business strategies of the firm.
2. In the second track, projects are viewed as independent organizations in themselves that are loosely connected to the parent organization. In this case, projects have their own strategies that may not be dependent on the organizational context.
3. In the third track, projects are viewed as organizations that adapt to ongoing changes as strategic entities of their own.

The first track is the most dominant one, where projects are viewed as subordinates of the parent organization.

**Project Management**

Project Management is the process of achieving project objectives (schedule, budget and performance) through a set of activities that start and end at certain points in time and produce quantifiable and qualifiable deliverables (Kay, 2013). Project management has been practiced for thousands of years, dating back to the Egyptian epoch.
Although management of projects has been going on for thousands of years, the practice has been widely recognized as a discipline in its own right for only about ten years. It was in the mid-1950s that organizations began to apply formal project management tools (Lewis, 2002). Project Management as a discipline developed from different fields of application including construction, engineering, telecommunications, and defence. The 1950s marked the beginning of the modern project management era. According to Azzopardi (2009), four periods are identifiable in the evolution of Project Management (Modesto & Tichapondwa, 2009):

**Prior to 1958** – The evolution of technology, such as automobiles (which allowed effective resource allocation and mobility) and telecommunications (which increased the speed of communication), shortened project schedules.

**1958 – 1979: Application of Management Science** – Significant advances in technology, such as computer technology and space technology (the moon mission), saw an increased use of Project Management.

**1980 – 1994: Production Centre Human Resources** – This period saw rapid strides in software technology and advanced space technological applications. This in turn gave project management a huge fillip.

**1995 – Present: Creating a New Environment** – The Internet and more interactive technologies evolved during this phase. Most project management software packages today have an internet-connectivity feature.

The success of project management lies in the ability to bring together the tasks, resources and people that are central to the achievement of business goals and objectives, within a given time constraint and monetary allowance. Projects and Programs are linked directly to the strategic goals and initiatives of the organization supported (Mulcahy, June 12, 2013). Project Management has six phases: the Initiation phase, Definition phase, Design phase, Development phase, Implementation phase, and Follow-up phase. Dividing a project into phases helps in leading it better (Baars, 2006).

The objective of the Project Initiation Phase is to specify what the project should accomplish. This phase is significant from the perspective of specifying the client's needs adequately, for if there is an error in articulating them, then poorly formulated goals and objectives will stand out as a significant source of concern. This phase requires a comprehensive discussion of the deliverables as well as of the major barriers, potential problems, and the roles and responsibilities involved in project initiation (SoM, 2004). The objective of the Project Definition Phase is to define the project's purpose and to develop alternative means to satisfy it. The project definition process consists of three stages: determining project purposes, translating those purposes into criteria for assessing alternative designs or solutions, and generating alternative design concepts (Whelton, 2004). The Design Phase generally begins with informal conceptualization and vetting of a project idea among colleagues within the organization. Once this has been done, a project concept paper is prepared to articulate the idea and also to enable those appraising the project to judge the feasibility of the idea. The design phase concludes with a project appraisal, that is, an internal examination of the merits of the project and of whether it fits the strategic goals and objectives of the organization (Gawler, 2005).
The objective of the Development Phase is to chart out the project organizational structure, detailed project planning and design, contract establishment and detailed design. This phase commences following the approval of the business case and the allocation of organizational resources (TMR-QLD, 2010).

During the Implementation Phase the project is mobilized and executed. During this phase stock is also taken of the actual progress of the project against the plan, and if certain modifications are necessitated to bring the project on track, these are also executed (ITAD, 1999).

The Follow-up Phase does everything required to bring the project to a successful completion. Issues like the duration of the follow-up phase, ownership of the bugs, resolution of the errors, training of the users, feedback etc. are of high importance (Streveler, 2009).

Each phase in this project management cycle is important and has a central theme: Initiation Phase (Idea); Definition Phase (What?); Design Phase (How?); Development Phase (How to implement?); Implementation Phase (Implementation); and Follow-up Phase (Maintenance) (Baars, 2006).

Waterfall Methodology of Project Management

Waterfall methodology is a sequential design process. Once each of the eight stages (conception, initiation, analysis, design, construction, testing, implementation, and maintenance) is completed, the developers move on to the next step (Base36, 2012). There is no room for errors, as the waterfall methodology does not allow developers to go back to a step that has been completed, and hence it requires careful planning. Since this methodology insists on extensive and meticulous record keeping, it allows new developers to join in between with ease, in the eventuality of attrition. The waterfall methodology allows the client to have a complete idea about the size, cost and timeline of the project. But this also becomes a roadblock in a way, as this methodology relies too much on initial requirement specifications. Hence, in case of an error in requirement specification, the project may suffer in a major way. Additionally, the waterfall methodology is not conducive to the evolving needs of the client. The waterfall methodology is also rigid in two ways: first, it does not allow developers to go back to a step after it has been completed, and second, testing and debugging only happen at the end. This offers very little flexibility.

Agile Methodology of Project Management

Agile Methodology follows an incremental approach, compared to the sequential approach of the waterfall methodology, and hence is seen as an answer to the disadvantages of the latter. Agile Methodology gives top priority to client and customer satisfaction throughout the delivery. There are 12 principles of Agile software development (Cleland & Ireland, 2008):
1. The utmost importance is to satisfy the customer through early and continuous delivery of valuable software.
2. To welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter time scale.
4. Business people and developers must work together daily throughout the project.
5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
6. The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
7. Working software is the primary measure of progress.
8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
9. Continuous attention to technical excellence and good design enhances agility.
10. Simplicity - the art of minimizing the amount of work done and avoiding unnecessary work.
11. The best architectures, requirements, and designs emerge from self-organizing teams.
12. At regular intervals, the team reflects on how to become more effective, and then tunes and adjusts its behavior accordingly.

In Agile Methodology, the developers start off with a simplistic project design, and then begin to work on small modules. The work on these modules is done in weekly or monthly sprints, and at the end of each sprint, project priorities are evaluated and tests are run. These sprints allow for bugs to be discovered and customer feedback to be incorporated into the design before the next sprint is run.

**Agile Versus Waterfall Methodology**

Is Agile simply a better way of doing Project Management? The discussion that follows attempts to answer this question with the help of the research literature.

Agile Methodology, unlike the Waterfall approach, does not rely too heavily on initial planning. It expects and allows changes. At the end of each sprint, project priorities are evaluated. This allows clients to add their feedback so that they ultimately get the product they desire. The difference between the Waterfall and Agile Methodologies is primarily in terms of flexibility and client focus. Agile is much more flexible and focused entirely on satisfying the client's needs. While the waterfall methodology does not allow a developer to step back to a stage that has been completed, the agile methodology is much more adaptive to change, and changes can be incorporated without necessarily rewriting the entire program. Flexibility in the Agile Methodology also comes from the fact that bugs are detected and removed throughout the development cycle rather than only at the end, as in the waterfall method. Hence, in the case of the latter, the entire program may have to be rewritten, which may have considerable cost and time implications, in turn affecting client satisfaction. And although both waterfall and agile methodologies allow for departmentalization, it is definitely better in Agile (Wordpress, 2008).

Waterfall is structured, one big project, a sequential process, suited for situations where change is uncommon, internal, and a process that requires clearly defined requirements upfront; whereas Agile is flexible, many small projects, highly collaborative, best for those who want continuous improvements, involves customers, and a process in which requirements are expected to evolve and change (Chan, 2013). According to the 2011 CHAOS report from the Standish Group, Agile projects are three times more successful than non-agile projects. The report goes so far as to say, "The agile process is the universal remedy for software development project failure. Software applications developed through the agile process have three times the success rate of the traditional waterfall method and a much lower percentage of time and cost overruns" (Cohn, 2012).

**Research Objectives**

Agile process methodologies, by their style, are: Extreme Programming, which is socially centric; Scrum, which is engineering centric; and RUP, which is tool and management centric.
Agile methodology calls for different kinds of efforts, composition of the team, upfront planning, sequencing and feedback (Stevens, 2013). Research also shows that most IT executives find it difficult to understand the Project Management process (CHAOS, 2012). Researchers have emphasized an organizational policy for project management, since it is essential in establishing the roles and responsibilities of every member of the management team according to the abilities of the employees, but especially according to the hierarchical position they hold within the organization. A project manager's role has been equated to the role held by the Minister-Secretary of State, who holds the highest function hierarchically within the organization, but who is also politically appointed; therefore, once the person is replaced by a new person who takes on the tasks and prerogatives of the project manager, the ability of project implementation, but also the project management organizational ability, are modified (Florescu, 2012).

Traditionally there has been a lot of emphasis on the role of the project manager in project management and on meeting time, budget, and project performance (or scope) goals. But that seems no longer sufficient to guarantee the achievement of organizational objectives (Shenhar & Dvir, 2007). When project managers and project teams are engaged in day-to-day project execution, their focus and attention is operational, and their mind-set is on "getting the job done." They typically are not focused on the business aspects. While this mind-set does contribute to project teams doing their work efficiently, left alone it may lead to disappointing business results and even failure, when the job was not done effectively (Patanakul & Shenhar, 2012). Ironically, however, the traditional approach is still widely ingrained, and is still accepted as the common way of running a project.

Inventing contemporary project management approaches in IT projects has become very significant. 'Information technology plays a continuously increasing role in the economy and successful IT projects are very important for companies. Mismanaged software (development and/or implementation) projects are very common and result in failure' (Standish Group International, 2009). Agile is a project management technique that is more collaborative in nature (Fernandez & Fernandez, 2008) and is designed to be customized to fit the development project at hand. It answers the challenge posed to IT to deliver more in less time and with proven business value (Hernandez, 2011).

Agile is a methodology of Project Management that originated from the game of rugby, where the entire team is focused on one goal. It has three roles (the product owner role, scrum master role, and scrum team role), three artifacts (the product backlog, sprint backlog, and release backlog) and three ceremonies (sprint planning, the daily scrum, and the demo & retrospective). Project Management comprises five process groups (initiation, planning, execution, monitoring & control, and closure) and nine Knowledge Areas (project integration management, scope management, time management, cost management, risk management, human resource management, communication management, quality management and procurement management) (ProjectManagementInstitute, 2008).

This paper explores how the Agile methodology impacts all the process groups of Project Management, in a way that the role of the Project Manager and the role of the team transform as compared to a non-Agile environment.
Fundamentally, the role of the project manager alters from a controlling and directing approach to a facilitation approach in an Agile environment. The role of the team alters more from a mindset point of view, from an individual accountability to more of a mutual accountability perspective. In short, the Agile methodology focuses more on the team and not on the individuals, as in a non-Agile set-up. Hence this paper aims to understand:
1. How has Agile impacted the five process groups?
2. Because of this impact, how have the roles altered: a) the role of the project manager, and b) the role of the team?

**Traditional Waterfall Model**

The role of a project manager in the traditional waterfall methodology is clearly defined and has well-defined boundaries. In a traditional waterfall model the project manager typically focuses on the following aspects:
- Keeping track of the progress of the project with the help of regular status meetings with the team, periodic status reports etc.
- Keeping track of project risks and coming up with mitigation plans for the risks which are yet to materialize and contingency plans for the risks which have already occurred.
- Making sure that there is no scope creep and that there are no unwanted scope changes, in order to ensure that the team is able to make an on-time delivery with the desired quality and within the agreed cost.
- Sending communications to the rest of the organization about the status of the project.

Apart from the role of the project manager, all other roles within the team, for instance developer, technical architect, business analyst etc., are also well defined with defined boundaries. The flip side of the waterfall model is precisely these well-defined roles and their defined boundaries. This leads to an individualistic mindset within the team: each individual makes an effort to perform his or her own role and does not make an effort to extend beyond it. Besides, the role of the project manager in this model is more about directing the team and using a command-and-control method to manage the project team.

**Agile Model**

In an Agile-based model all roles within the team, i.e. Project Manager, Developers, Technical Architect, Business Analysts etc., work jointly as one integrated team instead of working as individuals within the team. The focus shifts from the completion of individual tasks to making the delivery from the team as a whole a success. Thus, in an Agile set-up the responsibility of managing the project shifts from the project manager alone to the entire team as such. The role of an agile project manager is more that of a facilitator for the team rather than a person who gives instructions to the team. A project manager in an Agile set-up allows the team to be self-enabled and fosters creative thinking and decision-making capabilities within the team. In other words, the project manager takes the co-pilot seat and allows the team to fly the aircraft on its own, in the process making its own decisions. As and when the team needs guidance, the project manager provides the required direction and motivation to the team and ensures that it is moving in the right direction. Besides, the agile project manager protects the team and removes obstacles in the way of the team to ensure success.
Apart from this, the project manager finds ways of improving the overall productivity of the team by improving the processes and practices followed for project development.

How has Agile impacted the 5 PM process groups and the activities associated with the 5 process groups and knowledge areas?

Planning Process Group

While in a traditional model the project manager and the project team focus on long-term deliveries in scope for the entire project, in an Agile context the project manager and the project team focus on short-term deliveries, and the planning process is short-term and iterative in nature. The delivery cycle in an Agile-based project ranges over a period of one to four weeks, commonly called a sprint in Agile terminology. Hence in an Agile context the planning exercise is more focused and more precise in nature. Due to this change, the scope of the following activities within the planning process group is restricted to the duration of a sprint, i.e. 1-4 weeks:
- Collect Requirements - Performed by the PO or the PO team
- Define Scope - Performed by the Agile Team
- Create WBS - Performed by the Agile Team
- Define Activities - Performed by the Agile Team
- Sequence Activities - Performed by the Agile Team
- Estimate Activity Duration - Performed by the Agile Team
- Develop Schedule - Performed by the Agile Team
- Estimate Costs - Performed by the Agile Team
- Plan Quality - Performed by the Agile Team
- Plan Communications - Performed by the Agile Team

All the above activities are performed by the Agile team during the Grooming and Sprint Planning meetings. The process of planning is iterative, which means that all the above activities will be repeated for a new sprint once an existing sprint has been completed. The advantage of this model is that at the end of each sprint the team conducts a demo and retrospective meeting where they typically demonstrate the running software or modules developed by the team and discuss the following aspects:
- What we did well
- What we can do better
- Challenges/disturbances

Hence the team takes the learnings from a finished sprint into the new sprint, and so becomes better and better as it progresses. Besides, since the planning is short-term, any unforeseen changes can be dealt with within the current sprint or upcoming sprints.

Executing Process Group

The 'direct and manage project execution' process is handled in a different way in the Agile context. While in the traditional waterfall model the project manager focuses on directing the project team, assigns tasks to team members and directs the team members to perform their tasks, in an Agile context the team is responsible for the day-to-day management of the tasks and project activities. The 'direct and manage project execution' step in the Agile context is achieved in the form of daily stand-up meetings. The project manager in an Agile context focuses on removing the impediments which the team faces on a day-to-day basis by involving the entire Agile team and developing plans along with the team to overcome the impediments. Besides, the Agile project manager strives towards making the development processes more efficient.

Monitor and Control Process Group

The monitor and control part of project management is handled in a different way in Agile projects. As compared to projects following the waterfall methodology, monitor and control in Agile-based projects requires continuous effort and attention.
In the case of projects following the waterfall methodology, a lot of effort is needed upfront in terms of defining the specifications and acceptance criteria. As compared to this, in an Agile set-up the effort needed upfront in the creation of specifications and acceptance criteria is much less. Rather, these specifications and controls are developed and enforced on a day-to-day basis. Besides, Agile deliveries happen in the form of short sprints. Another difference between a waterfall-model-based project and an Agile-based project is that in the former case it is the project manager who is responsible for performing the monitoring and control function, whereas in an Agile context it is the project manager along with the team who perform the monitoring and control function, on a daily basis in the form of the daily scrum meeting and at the end of each sprint in the form of the demo and retrospective session. The team constantly keeps track of a burn-down chart, which shows graphically how the team is progressing in terms of planned effort, actual effort and the overall progress made by the team. Therefore, very often in Agile projects the chances of scope creep or overruns are minimal, whereas in the case of waterfall-model-based projects the chances of scope creep and overruns are quite high.

Closure Process Group

The close project process in Agile-based projects is much simpler as compared to a waterfall-model-based project. In a waterfall-model-based project, the final release comprises releasing the entire product in one go. Hence the final release is a huge and complex affair. Besides, getting acceptance from the customer is complicated as well, since the customer gets to see the end product only once and, in between the initiation and closure phases, has very little influence on the final outcome of the project. In the case of Agile projects, as mentioned earlier, the entire delivery of the project is broken down into several sprints, with each sprint lasting around 4 weeks. At the end of each sprint the team completes working software which is a part of the overall delivery needed from the project. Besides, the team presents the working software to the customer in a demo at the end of each sprint, and hence if the customer needs any changes they can be incorporated in the next sprint. Hence, in the case of Agile-based projects, the final release is actually the same as any other release during the lifecycle of the project. Besides, since there is close interaction with the customer during the entire lifecycle of the project and the customer has a say in influencing the final outcome of the project, this leads to a smooth closure of the project, better customer acceptance and smooth sign-offs. All this leads to better customer satisfaction levels.

Conclusion

Thus it can be comprehensively concluded that the Agile methodology impacts all the process groups of Project Management, in a way that the role of the Project Manager and the role of the team transform as compared to a non-Agile environment. Fundamentally, the role of the project manager alters from a controlling and directing approach to a facilitation approach in an Agile environment. The role of the team alters more from a mindset point of view, from an individual accountability to more of a mutual accountability perspective. In short, the Agile methodology focuses more on the team and not on the individuals, as in a non-Agile set-up.

Bibliography

About the Authors

**Shubhashish Sengupta, PMP**
ITC Infotech Ltd.
Bangalore, India

Shubhashish Sengupta is a certified Project Management Professional (PMP®). He has over 15 years of work experience in the IT industry and his key strengths are in Project Development & Management, Delivery Management, and Project Analysis & Design. He currently works as a Senior Project Manager at ITC Infotech Ltd. and is based in Bangalore (India). He has played a leading role in coming up with recommendations on implementing Agile in the Indian IT scenario. He has received several awards in his career, and recently he was awarded the 'Star Performer of the Year' award by his current employer. Besides his present role, Subhashish also has interests in practice-oriented research, especially in the area of Project Management.

Email: subhashishsen@gmail.com
LinkedIn Profile URL: [http://in.linkedin.com/in/subhashishpmp/](http://in.linkedin.com/in/subhashishpmp/)

**Dr. Debashish Sengupta**
Alliance University
Bangalore, India

Dr. Debashish Sengupta currently works with the Alliance School of Business, Alliance University, Bangalore (India) as a senior faculty member. He is the author of a Crossword bestseller book – *Employee Engagement* (2011). He has also authored three other books. He has been a book reviewer for the prestigious *Emerald Group Publishing, London (U.K.)*. He is an avid researcher and has more than 70 research publications to his credit to date. He occasionally writes columns, articles and case studies for reputed business dailies and for leading business magazines. He writes a professional blog on employee engagement - [http://www.peopleengagement.blogspot.in](http://www.peopleengagement.blogspot.in)

Dr. Sengupta is among the 26 selected authors from all over the world invited by the Institute for Employee Wellbeing, Bellevue University, Nebraska, U.S., to write invitational posts on *Employee Happiness*. Dr. Sengupta is a much sought-after speaker at various business forums and a resource person in several MDPs and corporate training programs. He has also been involved in some not-for-profit business consulting in the area of strategic HR and employee engagement.

Email: debashishsenguptaresearch@gmail.com

Ray Titus is Professor of Marketing & Strategy at the Alliance University School of Business, located in Bangalore, India. He also serves as the Area Chairperson of the Department of Marketing. Ray's entry into academia followed a decade-long stint in industry, where he served in Operations, Marketing, and Project roles. As an industry professional he has overseen strategic growth initiatives that included product and category expansions and the launch of an independent strategic business unit. As an academic, in the classroom Prof. Ray teaches courses on Marketing Strategy, Consumer Behaviour, and Social Media Marketing. Ray is also a visiting Professor at the SP Jain Center of Management, Dubai and Singapore, and the Asian Institute of Technology, Thailand. Prof. Titus' research interests lie in the area of consumption behaviour, marketing value propositions, and the new & social media landscape. As a Marketing Trainer and Consultant, Ray has worked closely with leading Indian and multinational firms. He also actively engages with industry through Management Development Programs. Prof. Ray publishes his professional blog 'Buyer Behaviour', which is listed among the 'Top 100 academic Blogs every professional investor must read' by Currency Trading and the '15 Must Read Indian Blogs about Investing & Business' by INForum India.
Ray is also a business columnist whose expert opinion features in leading business newspapers and magazines.

Email: raytitus@gmail.com
Blog: http://www.buyerbehaviour.org
Twitter handle: https://twitter.com/buyerbehaviour
{"Source-Url": "http://pmworldlibrary.net/wp-content/uploads/2014/01/pmwj18-jan2014-senguptas-titus-agile-methodologies-FeaturedPaper.pdf", "len_cl100k_base": 6188, "olmocr-version": "0.1.53", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 39870, "total-output-tokens": 8432, "length": "2e12", "weborganizer": {"__label__adult": 0.0009832382202148438, "__label__art_design": 0.0023403167724609375, "__label__crime_law": 0.0010690689086914062, "__label__education_jobs": 0.2034912109375, "__label__entertainment": 0.00031757354736328125, "__label__fashion_beauty": 0.0005164146423339844, "__label__finance_business": 0.10870361328125, "__label__food_dining": 0.0012798309326171875, "__label__games": 0.00211334228515625, "__label__hardware": 0.000919818878173828, "__label__health": 0.0019588470458984375, "__label__history": 0.0014009475708007812, "__label__home_hobbies": 0.0009298324584960938, "__label__industrial": 0.0030517578125, "__label__literature": 0.0023784637451171875, "__label__politics": 0.0007443428039550781, "__label__religion": 0.0012922286987304688, "__label__science_tech": 0.036834716796875, "__label__social_life": 0.0008597373962402344, "__label__software": 0.02874755859375, "__label__software_dev": 0.595703125, "__label__sports_fitness": 0.0011110305786132812, "__label__transportation": 0.0017547607421875, "__label__travel": 0.00124359130859375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36573, 0.01492]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36573, 0.12259]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36573, 0.92969]], "google_gemma-3-12b-it_contains_pii": [[0, 2402, false], [2402, 5012, null], [5012, 8182, null], [8182, 11059, null], [11059, 14326, null], [14326, 17852, null], [17852, 20028, null], [20028, 21299, null], [21299, 22189, null], [22189, 24165, null], [24165, 27235, null], [27235, 29558, null], [29558, 31889, null], [31889, 32750, null], [32750, 34995, null], [34995, 36573, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2402, true], [2402, 5012, null], [5012, 8182, null], [8182, 11059, null], [11059, 14326, null], [14326, 17852, null], [17852, 20028, null], [20028, 21299, null], [21299, 22189, null], [22189, 24165, null], [24165, 27235, null], [27235, 29558, null], [29558, 31889, null], [31889, 32750, null], [32750, 34995, null], [34995, 36573, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36573, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36573, null]], "pdf_page_numbers": [[0, 2402, 1], [2402, 5012, 2], [5012, 8182, 3], [8182, 11059, 4], [11059, 14326, 5], [14326, 17852, 6], [17852, 20028, 7], [20028, 21299, 
8], [21299, 22189, 9], [22189, 24165, 10], [24165, 27235, 11], [27235, 29558, 12], [29558, 31889, 13], [31889, 32750, 14], [32750, 34995, 15], [34995, 36573, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36573, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
751d5794725bbbcb4e866516eff706c5e5cf1e37
Drawing Diagrams with R by Paul Murrell R provides a number of well-known high-level facilities for producing sophisticated statistical plots, including the “traditional” plots in the graphics package (R Development Core Team, 2008), the Trellis-style plots provided by lattice (Sarkar, 2008), and the grammar-of-graphics-inspired approach of ggplot2 (Wickham, 2009). However, R also provides a powerful set of low-level graphics facilities for drawing basic shapes and, more importantly, for arranging those shapes relative to each other, which can be used to draw a wide variety of graphical images. This article highlights some of R’s low-level graphics facilities by demonstrating their use in the production of diagrams. In particular, the focus will be on some of the useful things that can be done with the low-level facilities provided by the grid graphics package (Murrell, 2002, 2005b,a). Starting at the end An example of the type of diagram that we are going to work towards is shown below. We have several “boxes” that describe table schema for a database, with lines and arrows between the boxes to show the relationships between tables. To forestall some possible misunderstandings, the sort of diagram that we are talking about is one that is designed by hand. This is not a diagram that has been automatically laid out. The sort of diagram being addressed is one where the author of the diagram has a clear idea of what the end result will roughly look like—the sort of diagram that can be sketched with pen and paper. The task is to produce a pre-planned design, using a computer to get a nice crisp result. That being said, a reasonable question is “why not draw it by hand?”, for example, using a free-hand drawing program such as Dia (Larsson, 2008). The advantage of using R code to produce this sort of image is that code is easier to reproduce, reuse, maintain, and fine-tune with accuracy. The thought of creating this sort of diagram by pushing objects around the screen with a mouse fills me with dread. Maybe I’m just not a very GUI guy. Before we look at drawing diagrams with the core R graphics facilities, it is important to acknowledge that several contributed R packages already provide facilities for drawing diagrams. The Rgraphviz (Gentry et al., 2008) and igraph (Csardi and Nepusz, 2006) packages provide automated layout of node-and-edge graphs, and the shape and diagram packages (Soetaert, 2008b,a) provide functions for drawing nodes of various shapes with lines and arrows between them, with manual control over layout. In this article, we will only be concerned with drawing diagrams with a small number of elements, so we do not need the automated layout facilities of Rgraphviz or igraph. Furthermore, while the shape and diagram packages provide flexible tools for building node-and-edge diagrams, the point of this article is to demonstrate low-level grid functions. We will use a node-and-edge diagram as the motivation, but the underlying ideas can be applied to a much wider range of applications. In each of the following sections, we will meet a basic low-level graphical tool and demonstrate how it can be used in the generation of the pieces of an overall diagram, or how the tool can be used to combine pieces together in convenient ways. Graphical primitives One of the core low-level facilities of R graphics is the ability to draw basic shapes. The typical graphical primitives such as text, circles, lines, and rectangles are all available. 
In this case, the shape of each box in our diagram is not quite as simple as a rectangle because it has rounded corners. However, a rounded rectangle is also one of the graphical primitives that the grid package provides (from R version 2.9.0; prior to that, a simpler rounded rectangle was available via the grid.roundRect() function in the RGraphics package). The code below draws a rounded rectangle with a text label in the middle.

```r
> library(grid)
> grid.roundrect(width=.25)
> grid.text("ISBN")
```

Viewports

A feature of the boxes in the diagram at the beginning of this article is that the text is carefully positioned relative to the rounded rectangle; the text is left-aligned within the rectangle. This careful positioning requires knowing where the left edge of the rectangle is on the page. Calculating those positions is annoyingly tricky and only becomes more annoying if at some later point the position of the box is adjusted and the positions of the text labels have to be calculated all over again.

Using a grid viewport makes this sort of positioning very simple. The basic idea is that we can create a viewport where the box is going to be drawn and then do all of our drawing within that viewport. Positioning text at the left edge of a viewport is very straightforward, and if we need to shift the box, we simply shift the viewport and the text automatically tags along for the ride. All of this applies equally to positioning the text vertically within the box.

In the code below, we create a viewport for the overall box, we draw a rounded rectangle occupying the entire viewport, then we draw text 2 mm from the left hand edge of the viewport and 1.5 lines of text up from the bottom of the viewport. A second line of text is also added, 0.5 lines of text from the bottom.

```r
> pushViewport(viewport(width=.25))
> grid.roundrect()
> grid.text("ISBN",
            x=unit(2, "mm"), y=unit(1.5, "lines"),
            just="left")
> grid.text("title",
            x=unit(2, "mm"), y=unit(0.5, "lines"),
            just="left")
> popViewport()
```

Coordinate systems

The positioning of the labels within the viewport in the previous example demonstrates another useful feature of the grid graphics system: the fact that locations can be specified in a variety of coordinate systems or units. In that example, the text was positioned horizontally in terms of millimetres and vertically in terms of lines of text (which is based on the font size in use).

As another example of the use of these different units, we can size the overall viewport so that it is just the right size to fit the text labels. In the following code, the height of the viewport is based on the number of labels and the width of the viewport is based on the width of the largest label, plus a 2 mm gap either side. This code also simplifies the labelling by drawing both labels in a single grid.text() call.

```r
> labels <- c("ISBN", "title")
> vp <- viewport(width=max(stringWidth(labels)) +
                 unit(4, "mm"),
                 height=unit(length(labels), "lines"))
> pushViewport(vp)
> grid.roundrect()
> grid.text(labels,
            x=unit(2, "mm"),
            y=unit(2:1 - 0.5, "lines"),
            just="left")
> popViewport()
```

Clipping

Another feature of the boxes that we want to produce is that they have shaded backgrounds. Looking closely, there are some relatively complex shapes involved in this shading. For example, the grey background for the "heading" of each box has a curvy top, but a flat bottom. These are not simple rounded rectangles, but some unholy alliance of a rounded rectangle and a normal rectangle.
It is possible, in theory, to achieve any sort of shape with R because there is a general polygon graphical primitive. However, as with the positioning of the text labels, determining the exact boundary of this polygon is not trivial and there are easier ways to work. In this case, we can achieve the result we want using clipping, so that any drawing that we do is only visible on a restricted portion of the page. R does not provide clipping to arbitrary regions, but it is possible to set the clipping region to any rectangular region. The basic idea is that we will draw the complete rounded rectangle, then set the clipping region for the box viewport so that no drawing can occur in the last line of text in the box and then draw the rounded rectangle again, this time with a different background. If we continue doing this, we end up with bands of different shading. The following code creates an overall viewport for a box and draws a rounded rectangle with a grey fill. The code then sets the clipping region to start one line of text above the bottom of the viewport and draws another rounded rectangle with a white fill. The effect is to leave just the last line of the original grey rounded rectangle showing beneath the white rounded rectangle that has had its last line clipped. ```r > pushViewport(viewport(width=.25)) > grid.roundrect(gp=gpar(fill="grey")) > grid.clip(y=unit(1, "lines"), just="bottom") > grid.roundrect(gp=gpar(fill="white")) > popViewport() ``` ### Drawing curves Another basic shape that is used in the overall diagram is a nice curve from one box to another. In addition to the basic functions to draw straight lines in R, there are functions that draw curves. In particular, R provides a graphical primitive called an *X-spline* (Blanc and Schlick, 1995). The idea of an X-spline is that we define a set of control points and a curve is drawn either through or near to the control points. Each control point has a parameter that specifies whether to create a sharp corner at the control point, or draw a smooth curve through the control point, or draw a smooth curve that passes nearby. The following code sets up sets of three control points and draws an X-spline relative to each set of control points. The first curve makes a sharp corner at the middle control point, the second curve makes a smooth corner through the middle control point, and the third curve makes a smooth corner near the middle control point. The control points are drawn as grey dots for reference (code not shown). ```r > x1 <- c(0.1, 0.2, 0.2) > y1 <- c(0.2, 0.2, 0.8) > grid.xspline(x1, y1) > x2 <- c(0.4, 0.5, 0.5) > y2 <- c(0.2, 0.2, 0.8) > grid.xspline(x2, y2, shape=-1) > x3 <- c(0.7, 0.8, 0.8) > y3 <- c(0.2, 0.2, 0.8) > grid.xspline(x3, y3, shape=1) ``` Determining where to place the control points for a curve between two boxes is another one of those annoying calculations, so a more convenient option is provided by a curve graphical primitive in **grid**. The idea of this primitive is that we simply specify the start and end points of the curve and R figures out a set of reasonable control points to produce an appropriate X-spline. It is also straightforward to add an arrow to either end of any straight or curvy line that R draws. The following code draws three curves between pairs of end points. 
The first curve draws the default "city-block" line between end points, with a smooth corner at the turning point, the second curve is similar, but with an extra corner added, and the third curve draws a single wide, smooth corner that is distorted towards the end point. The third curve also has an arrow at the end.

```r
> x1a <- 0.1; x1b <- 0.2
> y1a <- 0.2; y1b <- 0.8
> grid.curve(x1a, y1a, x1b, y1b)
> x2a <- 0.4; x2b <- 0.5
> y2a <- 0.2; y2b <- 0.8
> grid.curve(x2a, y2a, x2b, y2b, inflect=TRUE)
> x3a <- 0.7; x3b <- 0.8
> y3a <- 0.2; y3b <- 0.8
> grid.curve(x3a, y3a, x3b, y3b,
             ncp=8, angle=135, square=FALSE,
             curvature=2, arrow=arrow(angle=15))
```

### Graphical functions

The use of graphical primitives, viewports, coordinate systems, and clipping, as described so far, can be used to produce a box of the style shown in the diagram at the start of the article. For example, the following code produces a box containing three labels, with background shading to assist in differentiating among the labels.

```r
> labels <- c("ISBN", "title", "pub")
> vp <- viewport(width=max(stringWidth(labels)) +
                 unit(4, "mm"),
                 height=unit(length(labels), "lines"))
> pushViewport(vp)
> grid.roundrect()
> grid.clip(y=unit(1, "lines"), just="bottom")
> grid.roundrect(gp=gpar(fill="grey"))
> grid.clip(y=unit(2, "lines"), just="bottom")
> grid.roundrect(gp=gpar(fill="white"))
> grid.clip()
> grid.text(labels,
            x=unit(rep(2, 3), "mm"),
            y=unit(3:1 - .5, "lines"),
            just="left")
> popViewport()
```

However, in the sort of diagram that we want to produce, there will be several such boxes. Rather than write separate code for each box, it makes sense to write a general function that will work for any set of labels. Such a function is shown in Figure 1 and the code below uses this function to draw two boxes side by side.

```r
> tableBox(c("ISBN", "title", "pub"), x=0.3)
> tableBox(c("ID", "name", "country"), x=0.7)
```

This function represents the simplest way to efficiently reuse graphics code and to provide graphics code for others to use. However, there are benefits to be gained from going beyond this procedural programming style to a slightly more complicated object-oriented approach.

**Graphical objects**

In order to achieve the complete diagram introduced at the start of this article, we need one more step: we need to draw lines and arrows from one box to another. We already know how to draw lines and curves between two points; the main difficulty that remains is calculating the exact position of the start and end points, because these locations depend on the locations and dimensions of the boxes. The calculations could be done by hand for each individual curve, but as we have seen before, there are easier ways to work.

The crucial idea for this step is that we want to create not just a graphical function that encapsulates how to draw a box, but define a graphical object that encapsulates information about a box. The code in Figure 2 defines such a graphical object, plus a few other things that we will get to shortly.

The first thing to concentrate on is the boxGrob() function. This function creates a "box" graphical object. In order to do this, all it has to do is call the grob() function and supply all of the information that we want to record about "box" objects. In this case, we just record the labels to be drawn within the box and the location where we want to draw the box. This function does not draw anything. For example, the following code creates two "box" objects, but produces no graphical output whatsoever.
```r
> box1 <- boxGrob(c("ISBN", "title", "pub"), x=0.3)
> box2 <- boxGrob(c("ID", "name", "country"), x=0.7)
```

The grid.draw() function can be used to draw any graphical object, but we need to supply the details of how "box" objects get drawn. This is the purpose of the second function in Figure 2. This function is a method for the drawDetails() function; it says how to draw "box" objects. In this case, the function is very simple because it can call the tableBox() function that we defined in Figure 1. The important detail is that the boxGrob() function specified a special class, cl="box", for "box" objects, which meant that we could define a drawDetails() method specifically for this sort of object and control what gets drawn.

With this drawDetails() method defined, we can draw the boxes that we created earlier by calling the grid.draw() function. This function will draw any grid graphical object by calling the appropriate method for the drawDetails() generic function (among other things). The following code calls grid.draw() to draw the two boxes.

```r
> grid.draw(box1)
> grid.draw(box2)
```

At this point, we appear to have achieved only a more complicated equivalent of the previous graphics function. However, there are a number of other functions that can do useful things with grid graphical objects. For example, the grobX() and grobY() functions can be used to calculate locations on the boundary of a graphical object. As with grid.draw(), which has to call drawDetails() to find out how to draw a particular class of graphical object, these functions call generic functions to find out how to calculate locations on the boundary for a particular class of object. The generic functions are called xDetails() and yDetails() and methods for our special "box" class are defined in the last two functions in Figure 2.

These methods work by passing the buck. They both create a rounded rectangle at the correct location and the right size for the box, then call grobX() (or grobY()) to determine a location on the boundary of the rounded rectangle. In other words, they rely on code within the grid package that already exists to calculate the boundary of rounded rectangles.

With these methods defined, we are now in a position to draw a curved line between our boxes. The key idea is that we can use grobX() and grobY() to specify a start and end point for the curve. For example, we can start the curve at the right hand edge of box1 by specifying grobX(box1, "east"). The vertical position is slightly trickier because we do not want the line starting at the top or bottom of the box, but we can simply add or subtract the appropriate number of lines of text to get the right spot. The following code uses these ideas to draw a curve from the pub label of box1 to the ID label of box2. The curve has two corners (inflect=TRUE) and it has a small arrow at the end.

```r
> grid.curve(grobX(box1, "east"),
             grobY(box1, "south") + unit(0.5, "lines"),
             grobX(box2, "west"),
             grobY(box2, "north") - unit(0.5, "lines"),
             inflect=TRUE,
             arrow=arrow(type="closed", angle=15,
                         length=unit(2, "mm")),
             gp=gpar(fill="black"))
```

This call to grid.curve() is relatively verbose, but in a diagram containing many similar curves, this burden can be significantly reduced by writing a simple function that hides away the common features, such as the specification of the arrow head. The major gain from this object-oriented approach is that the start and end points of this curve are described by simple expressions that will automatically update if the locations of the boxes are modified.

```r
tableBox <- function(labels, x=.5, y=.5) {
    nlabel <- length(labels)
    tablevp <- viewport(x=x, y=y,
                        width=max(stringWidth(labels)) +
                              unit(4, "mm"),
                        height=unit(nlabel, "lines"))
    pushViewport(tablevp)
    grid.roundrect()
    if (nlabel > 1) {
        for (i in 1:(nlabel - 1)) {
            fill <- c("white", "grey")[i %% 2 + 1]
            grid.clip(y=unit(i, "lines"), just="bottom")
            grid.roundrect(gp=gpar(fill=fill))
        }
    }
    grid.clip()
    grid.text(labels,
              x=unit(2, "mm"),
              y=unit(nlabel:1 - .5, "lines"),
              just="left")
    popViewport()
}
```

Figure 1: A function to draw a diagram box, for a given set of labels, centred at the specified (x, y) location.

```r
boxGrob <- function(labels, x=.5, y=.5) {
    grob(labels=labels, x=x, y=y, cl="box")
}

drawDetails.box <- function(x, ...) {
    tableBox(x$labels, x$x, x$y)
}

xDetails.box <- function(x, theta) {
    nlines <- length(x$labels)
    height <- unit(nlines, "lines")
    width <- unit(4, "mm") + max(stringWidth(x$labels))
    grobX(roundrectGrob(x=x$x, y=x$y,
                        width=width, height=height),
          theta)
}

yDetails.box <- function(x, theta) {
    nlines <- length(x$labels)
    height <- unit(nlines, "lines")
    width <- unit(4, "mm") + max(stringWidth(x$labels))
    grobY(roundrectGrob(x=x$x, y=x$y,
                        width=width, height=height),
          theta)
}
```

Figure 2: Some functions that define a graphical object representing a diagram box. The boxGrob() function constructs a "box" object, the drawDetails() method describes how to draw a "box" object, and the xDetails() and yDetails() functions calculate locations on the boundary of a "box" object.

Conclusion

This article has demonstrated a number of useful low-level graphical facilities in R with an example of how they can be combined to produce a diagram consisting of non-trivial nodes with smooth curves between them.

The code examples provided in this article have ignored some details in order to keep things simple. For example, there are no checks that the arguments have sensible values in the functions tableBox() and boxGrob(). However, for creating one-off diagrams, this level of detail is not necessary anyway. One detail that would be encountered quite quickly in practice, in this particular sort of diagram, is that a curve from one box to another that needs to go across-and-down rather than across-and-up would require the addition of curvature=-1 to the grid.curve() call.

Another thing that is missing is complete code to produce the example diagram from the beginning of this article, where there are five interconnected boxes and the boxes have some additional features, such as a distinct "header" line at the top. This complete code was excluded to save on space, but a simple R package is provided at http://www.stat.auckland.ac.nz/~paul with code to draw that complete diagram and the package also contains a more complete implementation of code to create and draw "box" graphical objects.

One final point is that using R graphics to draw diagrams like this is not fast. In keeping with the S tradition, the emphasis is on developing code quickly and on having code that is not a complete nightmare to maintain.
In this case particularly, the speed of developing a diagram comes at the expense of the time taken to draw the diagram. For small, one-off diagrams this is not likely to be an issue, but the approach described in this article would not be appropriate, for example, for drawing a node-and-edge graph of the Internet. Bibliography Paul Murrell Department of Statistics The University of Auckland New Zealand paul@stat.auckland.ac.nz
{"Source-Url": "https://journal.r-project.org/archive/2009/RJ-2009-006/RJ-2009-006.pdf", "len_cl100k_base": 5062, "olmocr-version": "0.1.50", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 21375, "total-output-tokens": 6221, "length": "2e12", "weborganizer": {"__label__adult": 0.0003676414489746094, "__label__art_design": 0.002651214599609375, "__label__crime_law": 0.0004181861877441406, "__label__education_jobs": 0.0016117095947265625, "__label__entertainment": 0.0001804828643798828, "__label__fashion_beauty": 0.0001957416534423828, "__label__finance_business": 0.0004973411560058594, "__label__food_dining": 0.0004029273986816406, "__label__games": 0.0006303787231445312, "__label__hardware": 0.0014944076538085938, "__label__health": 0.0005340576171875, "__label__history": 0.0006041526794433594, "__label__home_hobbies": 0.00019156932830810547, "__label__industrial": 0.0008211135864257812, "__label__literature": 0.00035190582275390625, "__label__politics": 0.00027942657470703125, "__label__religion": 0.0005102157592773438, "__label__science_tech": 0.2239990234375, "__label__social_life": 0.00017726421356201172, "__label__software": 0.1070556640625, "__label__software_dev": 0.65576171875, "__label__sports_fitness": 0.0002923011779785156, "__label__transportation": 0.0005178451538085938, "__label__travel": 0.00026607513427734375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 22981, 0.0323]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 22981, 0.73146]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 22981, 0.87798]], "google_gemma-3-12b-it_contains_pii": [[0, 4055, false], [4055, 8279, null], [8279, 11916, null], [11916, 16441, null], [16441, 18135, null], [18135, 22439, null], [22439, 22981, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4055, true], [4055, 8279, null], [8279, 11916, null], [11916, 16441, null], [16441, 18135, null], [18135, 22439, null], [22439, 22981, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 22981, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 22981, null]], "pdf_page_numbers": [[0, 4055, 1], [4055, 8279, 2], [8279, 11916, 3], [11916, 16441, 4], [16441, 18135, 5], [18135, 22439, 6], [22439, 22981, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 22981, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
f9f6346266ed052952a5ff0ea5ec204f55ad880d
Threads, Concurrency, Mutual Exclusion, and Too Much Milk!
CS439: Principles of Computer Systems
February 3, 2016

Last Time: CPU Scheduling
- discussed the possible policies the scheduler may use to choose the next process (or thread!) to run
- criteria we use to evaluate policies
  - throughput, turnaround time, response time, CPU utilization, waiting time
- FIFO, Round Robin, SJF, Multilevel Feedback Queues

Today's Agenda
• Threads
  – Differences from processes
  – User vs. kernel
  – Creating, dispatching
  – Independent vs. Cooperating
• Too Much Milk
  – Race conditions, critical sections, and mutual exclusion

Threads

Processes: What We Think We Know
• A process is the abstraction used by the OS to manage resources and provide protection
• A process defines an address space
  – Identifies all addresses that may be touched by the program
• A process has a single *thread of control* that executes instructions sequentially
  – Creates a single sequential execution stream

What if we decouple the thread of control information from the process?

Threads
• A *thread* represents an abstract entity that executes a sequence of instructions
  – Short for "Thread of Control"
  – Defines a single sequential execution stream within a process
• A thread is bound to a single process
• Each process may have multiple threads of control
  – *Must* have one
  – Virtualizes the processor

Why Threads?
• Programmers create *multi-threaded* programs to:
  – Better represent the structure of the tasks
    • We think linearly
    • But the world is concurrent!
  – Improve performance
    • One thread can perform computation while another waits for I/O
    • Threads may be scheduled across different processors in a multi-processor architecture
• First, concrete examples of usefulness, then implementation, then interaction (or not) with the OS (which is based on thread type)

The Case for Threads: Web Servers
Consider a web server that performs these actions:

    while()
        get network message (URL) from client
        get URL data from disk
        compose response
        send response

How well does this web server perform?

The Case for Threads: Web Servers
Consider a web server that performs these actions: create a number of threads, and for each thread do:

    get network message (URL) from client
    get URL data from disk
    compose response
    send response

How well does this web server perform?

Overlapping Requests (Concurrency)

Thread One, Request One
- get network message (URL) from client
- get URL data from disk
  ---disk access latency---
- send data over network

Thread Two, Request Two
- get network message (URL) from client
- get URL data from disk
  ---disk access latency---
- send data over network

=> Total time is less than request 1 + request 2

The Case for Threads: Arrays
Consider the following code fragment:

```c
for(k = 0; k < n; k++)
    a[k] = b[k] * c[k] + d[k] * e[k];
```

Is there a missed opportunity?

Creating a thread:

```c
void thread_function(int arg0, int arg1, ...) {...}

main() {
    ...
    tid = thread_create(thread_function, arg0, arg1, ...);
    ...
}
```

At the point thread_create() is called:
– execution continues with the original thread in the main function, and
– execution starts at thread_function() in the new thread, in parallel (concurrently).
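The thread_create() interface above is the abstract one used in these slides. As a rough illustrative sketch (an assumption, not part of the original slides), the same pattern written with POSIX threads might look like this; names such as thread_function and arg0 are just for the example:

```c
#include <pthread.h>
#include <stdio.h>

/* POSIX thread functions take a single void* argument and return void* */
void *thread_function(void *arg) {
    int id = *(int *)arg;                 /* unpack the argument */
    printf("hello from thread %d\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int arg0 = 1;

    /* after this call, main and thread_function run concurrently */
    pthread_create(&tid, NULL, thread_function, &arg0);
    printf("main keeps running\n");

    pthread_join(tid, NULL);              /* wait for the new thread to finish */
    return 0;
}
```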
Programmer's View: Array Example
How can this code take advantage of 2 threads?

```c
for(k = 0; k < n; k++)
    a[k] = b[k] * c[k] + d[k] * e[k];
```

Rewrite this code fragment as:

```c
do_mult(p, m) {   /* thread function */
    for(k = p; k < m; k++)
        a[k] = b[k] * c[k] + d[k] * e[k];
}

main() {
    /* args are the thread function name and then its args */
    thread_create(do_mult, 0, n/2);
    thread_create(do_mult, n/2, n);
}
```

Threads: A Closer Look
Threads (just like processes) go through a sequence of new, ready, running, blocking, and terminated states.

Diagram:
- New
- Ready
- Running
- Blocking
- Terminated
Arrows indicate the transitions between states.

Threads: A Closer Look
- Processes define an address space; threads share the address space
- Each thread has:
  - Its own stack
  - Exclusive use of the CPU registers while it is executing
- Each thread does NOT have:
  - Its own address space
    - Shared amongst all threads in the process
    - What does that imply for process data?
- So, threads are lightweight:
  - Creating a thread is cheaper than creating a process
  - Communication between threads is easier than between processes
    - Processes must set up a shared resource or pass messages or signals
  - Context switching between threads is cheaper (same address space)

Threads and the Address Space
Threads within a single process *share* the address space
– All process data can be accessed by any thread
  • Particularly global data
  • Heap is also shared (*What about pointers into the heap?*)
– Threads have their own stacks, yes, BUT there is no protection
  • So any thread can modify another thread's stack
  • This is usually a bug

Threads and Registers
• A thread has exclusive use of the registers while it is executing
• When a thread is pre-empted, its register values are saved as part of its state
  – the new thread gets to use the registers!
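A small illustrative sketch (not from the slides) that makes the sharing concrete, assuming POSIX threads: the global variable is one copy visible to every thread, while each thread's local variable lives on that thread's own private stack. Which thread's write to the global survives depends on how the threads are scheduled.

```c
#include <pthread.h>
#include <stdio.h>

int shared = 0;                        /* one copy, visible to every thread */

void *worker(void *arg) {
    int local = *(int *)arg;           /* lives on THIS thread's stack */
    shared = local;                    /* both threads write the same variable */
    printf("local=%d shared=%d\n", local, shared);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final shared=%d\n", shared);   /* 1 or 2, depending on the schedule */
    return 0;
}
```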
There can be more than one thread in a process---the original thread calls main and has the process's stack.</td> <td>There must be at least one thread in a process.</td> </tr> <tr> <td>If a thread dies, its stack is reclaimed</td> <td>If a process dies, its resources are reclaimed &amp; all threads die</td> </tr> <tr> <td>Each (kernel) thread can run on a different physical processor</td> <td>Each process can run on a different physical processor</td> </tr> <tr> <td>Inexpensive creation and context switch</td> <td>Expensive creation and context switch</td> </tr> </tbody> </table> Thread Types Kernel-Level Threads - A *kernel-level thread* is a thread that the OS knows about - Every process has at least one kernel-level thread - Kernel manages and schedules threads (as well as processes) - System calls used to create, destroy, and synchronize threads - Switching between kernel-level threads of the same process requires a small "context switch" - Values of registers, program counter, and stack pointer must be switched - Memory management information remains since threads share an address space - Also known as kernel threads Kernel-Level Threads: Context Switches between threads of the same process Similar to processes: - Thread is running - Thread blocks, is interrupted, *or voluntarily yields* - Mode switch to kernel mode - OS code saves thread state (to TCB) - OS code chooses new thread to run - OS code loads its state (from TCB) - Mode switch to user mode - Thread is running *Except* TCB is smaller than PCB So Why Use Kernel Threads? • I/O: the OS can choose another thread in the same process when a thread does I/O • Non-blocking calls are good in theory, but difficult to program in practice • Kernel-level threads can exploit parallelism • Different processors of a symmetric multiprocessor • Different cores on a multicore CPU • Used by systems: Linux, Solaris, Windows, pthreads (usually) • Also used by recent implementations of Java User-Level Threads - A user-level thread is a thread the OS does not know about - OS only schedules the process, not the threads within a process - Programmer uses a thread library to manage threads (create, delete, synchronize, and schedule) - User-level code can define scheduling policy - Threads yield to other threads or voluntarily give up the processor - Switching threads does not involve either a context switch or a "context switch" User-Level Threads: Context Switches (sort of) Similar to processes and kernel-level threads: - Thread is running - Thread blocks, is interrupted by a signal or voluntarily yields - No mode switch to the kernel is needed - Library code saves thread state (to TCB) - Library code chooses new thread to run - Library code loads its state (from TCB) - Thread is running What happens if the thread blocks? So Who Uses User-Level Threads? 
- Some Ruby implementations - May become extinct - But we’ve been saying this for awhile now ### Kernel-Level Threads <table> <thead> <tr> <th>Advantages</th> <th>Disadvantages</th> </tr> </thead> <tbody> <tr> <td>System calls do not block the process</td> <td>Can be difficult to make efficient</td> </tr> <tr> <td>Switching between threads within the same process is inexpensive</td> <td></td> </tr> <tr> <td>(registers, PC, and SP are changed, memory management info does not)</td> <td></td> </tr> <tr> <td>Only one scheduler</td> <td></td> </tr> </tbody> </table> ### User-Level Threads <table> <thead> <tr> <th>Advantages</th> <th>Disadvantages</th> </tr> </thead> <tbody> <tr> <td>Even faster to create and switch (no system calls or context switches necessary); may be an order of magnitude faster</td> <td>All user-level threads in a process block on system calls (can use non-blocking versions, if they exist)</td> </tr> <tr> <td>Customizable scheduler</td> <td>User-level scheduler can fight with kernel-level scheduler (OS may run a process with only idle threads!)</td> </tr> </tbody> </table> ## Comparison of Thread Types <table> <thead> <tr> <th>Model</th> <th>Kernel-Level</th> <th>User-Level</th> </tr> </thead> <tbody> <tr> <td>Managed by (creation, deletion, synchronization)</td> <td>Operating system (through system calls)</td> <td>User (through library calls)</td> </tr> <tr> <td>Scheduled by</td> <td>Operating system</td> <td>User</td> </tr> <tr> <td>Requires “context switch”</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>Blocks on system calls</td> <td>Single thread; others may be scheduled</td> <td>All threads in the process</td> </tr> </tbody> </table> iClicker Question Which of these entities is the most expensive to switch between? A. Processes B. User-level threads C. Kernel-level threads One Abstraction, Many Flavors • Single-threaded processes – What we have been assuming – Each has exactly one kernel-level thread – Add protection • Multi-threaded processes with user-level threads – Threads are created in user-space – Have exactly one kernel-level thread – Thread management through procedure calls – Scheduled by user-space scheduler – TCBs in user-space ready list • Multi-threaded processes with kernel-level threads – Threads are created by the OS and thus the OS knows they exist - process requests the threads – Thread management through system calls – TCBs & PCBs on in-kernel ready list – Scheduled by OS • In-kernel threads (New!) – Threads that are part of the OS (init, idle) *Note that kernel-level and user-level threads are UNRELATED to kernel-mode and user-mode execution.* One More Flavor: Independent vs. Cooperating Threads • Independent threads have no shared state with other threads – Simple to implement – Deterministic – Reproducible – Scheduling order doesn’t matter • Cooperating threads share state – Non-deterministic – Non-reproducible – Give us concurrency! Threads and the Scheduler (or, Why Multi-threaded Programming is Hard) Given two threads, A and B, how might their executions be scheduled? Concurrency Quiz If two threads execute this program concurrently, how many different final values of the global variable \( X \) are there? Initially, \( X == 0 \). A. 0 B. 1 C. 2 D. 
More than 2
```c
void increment() {
  int tmp = X;
  tmp = tmp + 1;
  X = tmp;
}
```
Schedules/Interleavings - Model of concurrent execution - Interleave statements from each thread into a single thread - If **any** interleaving yields incorrect results, some synchronization is needed Example interleaving:
```
Thread 1              Thread 2
tmp1 = X;
                      tmp2 = X;
                      tmp2 = tmp2 + 1;
tmp1 = tmp1 + 1;
X = tmp1;
                      X = tmp2;
```
If X==0 initially, X == 1 at the end. WRONG result! Too Much Milk Too Much Milk! You • Arrive home • Look in the fridge; out of milk • Go to store • Buy milk • Arrive home; put milk away Your Roommate • Arrive home • Look in fridge; out of milk • Go to store • Buy milk • Arrive home; put milk away • Oh, no! Too Much Milk! • What do we want to happen? – Only one person buys milk at a time AND – Someone buys milk if you need it *These are the correctness properties for this problem.* • What happened? – Lack of communication! Race Conditions • What would the result have been if: – your roommate had arrived home for the first time after you had come back from the store? – you arrived home after your roommate came back from the store? – you were at the store when your roommate came back, but your roommate waited to look in the fridge until after you were back from the store? • Instances where the result changes based on scheduling are *race conditions* • What guarantees do we have about how our people/threads will be scheduled? • How can we solve this problem? Too Much Milk: Solution #1 You (Thread A) if(noMilk && noNote) { leave note; buy milk; remove note; } Your Roommate (Thread B) if(noMilk && noNote) { leave note; buy milk; remove note; } Does this work? A. Yes B. No Too Much Milk: Solution #2 You (Thread A) leave note A if(noNote B) if(noMilk) buy milk; remove note A Your Roommate (Thread B) leave note B if(noNote A) if(noMilk) buy milk; remove note B Does this work? A. Yes B. No Too Much Milk: Solution #3 **You (Thread A)** leave note A while(note B) do nothing; if(noMilk) buy milk; remove note A **Your Roommate (Thread B)** leave note B if(noNote A) if(noMilk) buy milk; remove note B Does this work? A. Yes B. No Why is it correct? Your Roommate (Thread B) leave note B if(noNote A) if(noMilk) buy milk; remove note B At this if, either there is a note A or not. If not, it is safe for B to check and buy milk, if needed. (Thread A has not started yet.) If yes, then thread A is checking and buying milk as needed or is waiting for B to quit, so B quits by removing note B. Why is it correct? You (Thread A) leave note A while(note B) do nothing; if(noMilk) buy milk; remove note A At this while, either there is a note B or not. If not, it is safe for A to buy since B has either not started yet or quit. If yes, A waits until there is no longer a note B, and either finds milk that B bought or buys it if needed. Why is it correct? So Thread B buys milk (which Thread A finds) or not, but either way it removes note B. Since Thread A loops, it waits for B to buy milk or not, and then if B did not buy it, it buys the milk. So it's correct, but... is it good? 1. It is too complicated. It was hard to convince ourselves this solution worked. 2. It is asymmetrical---thread A and thread B are different. *What would we need to do to add new threads?* 3. A is *busy waiting*, or consuming CPU resources despite the fact it is not doing any useful work. 
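The note-leaving protocol of Solution #3 can be written down in C with POSIX threads. This is an illustrative sketch only, not part of the slides: it assumes sequentially consistent memory, so on real hardware the flags would need atomics or fences, and the names noteA, noteB, and milk are made up for the example.

```c
/* Sketch of "Too Much Milk" Solution #3 with POSIX threads (illustration
 * only; assumes sequentially consistent memory -- real code needs atomics
 * or a lock). noteA/noteB/milk are hypothetical names for this example. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

volatile bool noteA = false, noteB = false;
volatile int  milk  = 0;            /* cartons of milk in the fridge */

void *thread_A(void *arg) {         /* You */
    (void)arg;
    noteA = true;                   /* leave note A          */
    while (noteB)                   /* while (note B)        */
        ;                           /*   do nothing          */
    if (milk == 0)                  /* if (noMilk)           */
        milk++;                     /*   buy milk            */
    noteA = false;                  /* remove note A         */
    return NULL;
}

void *thread_B(void *arg) {         /* Your roommate */
    (void)arg;
    noteB = true;                   /* leave note B          */
    if (!noteA) {                   /* if (noNote A)         */
        if (milk == 0)              /*   if (noMilk)         */
            milk++;                 /*     buy milk          */
    }
    noteB = false;                  /* remove note B         */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, thread_A, NULL);
    pthread_create(&b, NULL, thread_B, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("cartons bought: %d\n", milk);   /* correctness: always 1 */
    return 0;
}
```

Note how thread A's busy-wait loop appears directly in the code: it burns CPU cycles while note B is up, which is exactly the drawback listed above.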
Terminology - **Atomic Operation**: an operation that is uninterruptible - More next time - **Synchronization**: Using atomic operations to ensure cooperation between threads - More next time - **Mutual Exclusion**: Exactly one thread (or process) is doing a particular activity at a time. Usually related to critical sections. - More next - **Critical Section**: A piece of code that only one thread can execute at a time - More now More Terminology (How to think about synchronization code) ... entry section //code to attempt entry into //the critical section critical section //code that requires isolation //(e.g., with mutual exclusion) exit section //cleanup code after //execution of the critical section non-critical section //everything else ... Critical Sections and Correctness Four properties are required for correctness: 1. *Safety*: only one thread in the critical section 2. *Liveness*: if no threads are executing a critical section, and a thread wishes to enter a critical section, that thread must be guaranteed to eventually enter the critical section 3. *Bounded waiting*: if a thread wishes to enter a critical section, then there exists a bound on the number of other threads that may enter the critical section before that thread does 4. *Failure atomicity*: it’s okay for a thread to die in the critical section Safety and Liveness for Critical Sections • Only one thread is concurrently in the critical section A. Safety B. Liveness C. Both • A thread that wants to enter the critical section will eventually succeed A. Safety B. Liveness C. Both • Bounded waiting: If a thread $i$ is in entry section, then there is a bound on the number of times that other threads are allowed to enter the critical section (only 1 thread is allowed in at a time) before thread $i$’s request is granted. A. Safety B. Liveness C. Both Aside: Safety and Liveness, More Generally Properties defined over the execution of a program • Safety: “nothing bad happens” – Holds in every finite execution prefix • Windows never crashes • No patient is ever given the wrong medication • A program never terminates with the wrong answer • Liveness: “something good eventually happens” – No partial execution is irremediable • Windows always reboots • Medications are eventually distributed to patients • A program eventually terminates Mutual Exclusion • Exactly one thread (or process) is doing a particular activity at a time. Usually related to critical sections. 
– Active thread excludes its peers • Some computer resources cannot be accessed by multiple threads at the same time – E.g., a printer can’t print two documents at once • For shared memory architectures, data structures are often mutually exclusive – Two threads adding to a linked list can corrupt the list Formalizing “Too Much Milk” • Shared variables – “Look in the fridge for milk” – check a variable – “Put milk away” – update a variable • Safety property – At most one person buys milk • Liveness – Someone buys milk when needed Formalizing “Too Much Milk” You (Thread A) leave note A while(note B) do nothing; if(noMilk) buy milk; remove note A Entry Section Your Roommate (Thread B) leave note B if(noNote A) if(noMilk) buy milk; remove note B Critical Section Exit Section Too Much Milk: Lock Solution You (Thread A) Lock->Acquire(); if(noMilk) buy milk; Lock->Release(); Your Roommate (Thread B) Lock->Acquire(); if(noMilk) buy milk; Lock->Release(); Summary • Threads share the same address space – Processes have *separate* address spaces – Child processes start with a *copy* of their parent’s address space • Each thread has its own thread of control – Program counter, register values, execution stack • It is easy for threads to inadvertently disrupt each other since they share the entire address space (!) • Communication among threads is typically done through shared variables • Operating system can switch from any thread at any time (assuming kernel threads) • Critical sections identify pieces of code that cannot be executed in parallel by multiple threads – Typically code that accesses or modifies shared variables Announcements • Homework 2 due Friday in section • Project 0 due Friday, 2/12 – Follow style guidelines – Keep pair programming log – Fill in README – Group registration due 2/5 • Project 1 will be posted on Monday
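Returning to the lock-based "Too Much Milk" solution shown earlier in these slides: below is a minimal sketch of the same idea with a POSIX mutex. The slides' Lock->Acquire()/Release() is an abstract interface; pthread_mutex_lock/unlock is just one possible realization, and buy_milk_if_needed is a made-up helper name for this example.

```c
/* Sketch of the "Too Much Milk" lock solution using a POSIX mutex
 * (one possible realization of the abstract Lock->Acquire()/Release()). */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t milk_lock = PTHREAD_MUTEX_INITIALIZER;
static int milk = 0;                        /* shared state */

static void *buy_milk_if_needed(void *arg)  /* hypothetical helper name */
{
    (void)arg;
    pthread_mutex_lock(&milk_lock);         /* Lock->Acquire()  */
    if (milk == 0)                          /* if (noMilk)      */
        milk++;                             /*     buy milk     */
    pthread_mutex_unlock(&milk_lock);       /* Lock->Release()  */
    return NULL;
}

int main(void)
{
    pthread_t you, roommate;
    pthread_create(&you, NULL, buy_milk_if_needed, NULL);
    pthread_create(&roommate, NULL, buy_milk_if_needed, NULL);
    pthread_join(you, NULL);
    pthread_join(roommate, NULL);
    printf("cartons bought: %d\n", milk);   /* always exactly 1 */
    return 0;
}
```

Unlike Solution #3, this version is symmetric, easy to reason about, and does not busy-wait.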
{"Source-Url": "http://www.cs.utexas.edu/~ans/classes/cs439/lectures/05_threads_and_too_much_milk_20160203.pdf", "len_cl100k_base": 4996, "olmocr-version": "0.1.50", "pdf-total-pages": 58, "total-fallback-pages": 0, "total-input-tokens": 83768, "total-output-tokens": 6969, "length": "2e12", "weborganizer": {"__label__adult": 0.00035309791564941406, "__label__art_design": 0.0003218650817871094, "__label__crime_law": 0.00039505958557128906, "__label__education_jobs": 0.00492095947265625, "__label__entertainment": 6.93202018737793e-05, "__label__fashion_beauty": 0.00016260147094726562, "__label__finance_business": 0.0001766681671142578, "__label__food_dining": 0.0004086494445800781, "__label__games": 0.0008406639099121094, "__label__hardware": 0.0021495819091796875, "__label__health": 0.0004680156707763672, "__label__history": 0.0002942085266113281, "__label__home_hobbies": 0.0002256631851196289, "__label__industrial": 0.0007200241088867188, "__label__literature": 0.00026869773864746094, "__label__politics": 0.0002911090850830078, "__label__religion": 0.0006513595581054688, "__label__science_tech": 0.0247650146484375, "__label__social_life": 0.0001913309097290039, "__label__software": 0.005229949951171875, "__label__software_dev": 0.95556640625, "__label__sports_fitness": 0.000583648681640625, "__label__transportation": 0.0009403228759765624, "__label__travel": 0.0002440214157104492}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 20932, 0.00633]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 20932, 0.51041]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 20932, 0.90729]], "google_gemma-3-12b-it_contains_pii": [[0, 115, false], [115, 414, null], [414, 627, null], [627, 635, null], [635, 1068, null], [1068, 1405, null], [1405, 1897, null], [1897, 2140, null], [2140, 2419, null], [2419, 2790, null], [2790, 2961, null], [2961, 3296, null], [3296, 3743, null], [3743, 3766, null], [3766, 4005, null], [4005, 4641, null], [4641, 5016, null], [5016, 5235, null], [5235, 5571, null], [5571, 5652, null], [5652, 6849, null], [6849, 6862, null], [6862, 7411, null], [7411, 7807, null], [7807, 8248, null], [8248, 8697, null], [8697, 9079, null], [9079, 9207, null], [9207, 10690, null], [10690, 11264, null], [11264, 11412, null], [11412, 12254, null], [12254, 12568, null], [12568, 12709, null], [12709, 12994, null], [12994, 13386, null], [13386, 13400, null], [13400, 13645, null], [13645, 13875, null], [13875, 14428, null], [14428, 14677, null], [14677, 14912, null], [14912, 15166, null], [15166, 15537, null], [15537, 15883, null], [15883, 16095, null], [16095, 16425, null], [16425, 16868, null], [16868, 17259, null], [17259, 17842, null], [17842, 18369, null], [18369, 18889, null], [18889, 19335, null], [19335, 19574, null], [19574, 19826, null], [19826, 20015, null], [20015, 20709, null], [20709, 20932, null]], "google_gemma-3-12b-it_is_public_document": [[0, 115, true], [115, 414, null], [414, 627, null], [627, 635, null], [635, 1068, null], [1068, 1405, null], [1405, 1897, null], [1897, 2140, null], [2140, 2419, null], [2419, 2790, null], [2790, 2961, null], [2961, 3296, null], [3296, 3743, null], [3743, 3766, null], [3766, 4005, null], [4005, 4641, null], [4641, 5016, null], [5016, 5235, null], [5235, 5571, null], [5571, 5652, null], [5652, 6849, null], [6849, 6862, null], [6862, 7411, null], [7411, 7807, null], [7807, 8248, null], [8248, 8697, null], [8697, 9079, 
null], [9079, 9207, null], [9207, 10690, null], [10690, 11264, null], [11264, 11412, null], [11412, 12254, null], [12254, 12568, null], [12568, 12709, null], [12709, 12994, null], [12994, 13386, null], [13386, 13400, null], [13400, 13645, null], [13645, 13875, null], [13875, 14428, null], [14428, 14677, null], [14677, 14912, null], [14912, 15166, null], [15166, 15537, null], [15537, 15883, null], [15883, 16095, null], [16095, 16425, null], [16425, 16868, null], [16868, 17259, null], [17259, 17842, null], [17842, 18369, null], [18369, 18889, null], [18889, 19335, null], [19335, 19574, null], [19574, 19826, null], [19826, 20015, null], [20015, 20709, null], [20709, 20932, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 20932, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 20932, null]], "pdf_page_numbers": [[0, 115, 1], [115, 414, 2], [414, 627, 3], [627, 635, 4], [635, 1068, 5], [1068, 1405, 6], [1405, 1897, 7], [1897, 2140, 8], [2140, 2419, 9], [2419, 2790, 10], [2790, 2961, 11], [2961, 3296, 12], [3296, 3743, 13], [3743, 3766, 14], [3766, 4005, 15], [4005, 4641, 16], [4641, 5016, 17], [5016, 5235, 18], [5235, 5571, 19], [5571, 5652, 20], [5652, 6849, 21], [6849, 6862, 22], [6862, 7411, 23], [7411, 7807, 24], [7807, 8248, 25], [8248, 8697, 26], [8697, 9079, 27], [9079, 9207, 28], [9207, 10690, 29], [10690, 11264, 30], [11264, 11412, 31], [11412, 12254, 32], [12254, 12568, 33], [12568, 12709, 34], [12709, 12994, 35], [12994, 13386, 36], [13386, 13400, 37], [13400, 13645, 38], [13645, 13875, 39], [13875, 14428, 40], [14428, 14677, 41], [14677, 14912, 42], [14912, 15166, 43], [15166, 15537, 44], [15537, 15883, 45], [15883, 16095, 46], [16095, 16425, 47], [16425, 16868, 48], [16868, 17259, 49], [17259, 17842, 50], [17842, 18369, 51], [18369, 18889, 52], [18889, 19335, 53], [19335, 19574, 54], [19574, 19826, 55], [19826, 20015, 56], [20015, 20709, 57], [20709, 20932, 58]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 20932, 0.04483]]}
olmocr_science_pdfs
2024-11-28
2024-11-28
acfc03b608b47ff2c962bb872c0a78b3184a5e7b
1 Introduction In this lecture we will implement operations on heaps. The theme of this lecture is reasoning with invariants that are partially violated, and making sure they are restored before the completion of an operation. We will only briefly review the algorithms for inserting and deleting the minimal node of the heap; you should read the notes for Lecture 15 on priority queues and keep them close at hand. Temporarily violating and restoring invariants is a common theme in algorithms. It is a technique you need to master. 2 The Heap Structure We use the following header struct to represent heaps.
```c
struct heap_header {
  int limit;    /* limit = capacity+1 */
  int next;     /* 1 <= next && next <= limit */
  elem[] data;  /* \length(data) == limit */
};
typedef struct heap_header* heap;
```
Since the significant array elements start at 1, as explained in the previous lecture, the `limit` must be one greater than the desired capacity. The `next` index must be between 1 and `limit`, and the element array must have exactly `limit` elements. 3 Minimal Heap Invariants Before we implement the operations, we define a function that checks the heap invariants. The shape invariant is automatically satisfied due to the representation of heaps as arrays, but we need to carefully check the ordering invariants. It is crucial that no instance of the data structure that is not a true heap will leak across the interface to the client, because the client may then incorrectly call operations that require heaps on data structures that are not heaps. First, we check that the heap is not null and that the length of the array matches the given limit. The latter must be checked in an annotation, because, in C and C0, the length of an array is not available to us at runtime except in contracts. Second, we check that next is in range, between 1 and limit.
```c
bool is_safe_heap(heap H) {
  return H != NULL
      && (1 <= H->next && H->next <= H->limit)
      && is_array_expected_length(H->data, H->limit);
}
```
This is not sufficient to know that we have a valid heap! The specification function `is_safe_heap` is the minimal specification function we need to be able to access the data structure; we want to make sure anything we pass to the user additionally satisfies the ordering invariant. This invariant acts as the precondition of some of our helper functions. We first use the client's function `higher_priority(x,y)` to express a more useful concept for our implementation: that the element in index i can be correctly placed as the parent of the element in index j in the heap.
```c
bool ok_above(heap H, int i, int j)
//@requires is_safe_heap(H);
//@requires 1 <= i && i < H->next;
//@requires 1 <= j && j < H->next;
{
  return !higher_priority(H->data[j], H->data[i]);
}
```
A second helper function that uses `is_safe_heap` swaps an element with its parent:
```c
void swap_up(heap H, int i)
//@requires is_safe_heap(H);
//@requires 2 <= i && i < H->next;
{
  elem tmp = H->data[i];
  H->data[i] = H->data[i/2];
  H->data[i/2] = tmp;
}
```
4 The Heap Ordering Invariant It turns out to be simpler to specify the ordering invariant in the second form, which stipulates that each node except the root needs to be greater or equal to its parent. To check this we iterate through the array and compare the priority of each node \(data[i]\) with its parent, except for the root \((i = 1)\) which has no parent. 
```c
bool is_heap(heap H)
//@requires is_safe_heap(H);
{
  for (int i = 2; i < H->next; i++)
  //@loop_invariant 2 <= i;
    if (!ok_above(H, i/2, i)) return false;
  return true;
}

bool is_pq(struct heap_header* H) {
  return is_safe_heap(H) && is_heap(H);
}
```
5 Creating Heaps We start with the simple code to test if a heap is empty or full, and to allocate a new (empty) heap. A heap is empty if the next element to be inserted would be at index 1. A heap is full if the next element to be inserted would be at index \(limit\) (the size of the array).
```c
bool pq_empty(heap H)
//@requires is_heap(H);
{
  return H->next == 1;
}

bool pq_full(heap H)
//@requires is_heap(H);
{
  return H->next == H->limit;
}
```
To create a new heap, we allocate a struct and an array and set all the right initial values.
```c
heap pq_new(int capacity)
//@requires capacity > 0;
//@ensures is_heap(\result) && pq_empty(\result);
{
  heap H = alloc(struct heap_header);
  H->limit = capacity+1;
  H->next = 1;
  H->data = alloc_array(elem, capacity+1);
  return H;
}
```
6 Insert and Sifting Up The shape invariant tells us exactly where to insert the new element: at the index `H->next` in the data array. Then we increment the `next` index.
```c
void pq_add(heap H, elem e)
//@requires is_heap(H) && !pq_full(H);
//@ensures is_heap(H);
{
  H->data[H->next] = e;
  (H->next)++;
  ...
}
```
By inserting $e$ in its specified place, we have, of course, violated the ordering invariant. We need to sift up the new element until we have restored the invariant. The invariant is restored when the new element is bigger than or equal to its parent or when we have reached the root. We still need to sift up when the new element is less than its parent. This suggests the following code:
```c
int i = H->next - 1;
while (i > 1 && !ok_above(H,i/2,i)) {
  swap_up(H, i);
  i = i/2;
}
```
Here, `swap_up` is the helper function defined above, which swaps an element with its parent. Setting `i = i/2` is moving up in the array, to the place we just swapped the new element to. At this point, as always, we should ask why accesses to the elements of the priority queue are safe. By short-circuiting of conjunction, we know that `i > 1` when we ask whether `H->data[i/2]` is okay above `H->data[i]`. But we need a loop invariant to make sure that it respects the upper bound. The index `i` starts at `H->next - 1`, so it should always be strictly less than `H->next`.
```c
while (i > 1 && !ok_above(H,i/2,i))
//@loop_invariant 1 <= i && i < H->next;
{
  swap_up(H, i);
  i = i/2;
}
```
One small point regarding the loop invariant: we just incremented `H->next`, so it must be strictly greater than 1 and therefore the invariant `1 <= i` must be satisfied. But how do we know that swapping the element up the tree restores the ordering invariant? We need an additional loop invariant which states that `H` is a valid heap except at index `i`. Index `i` may be smaller than its parent, but it still needs to be less than or equal to its children. We therefore postulate a function `is_heap_except_up` and use it as a loop invariant.
```c
while (i > 1 && !ok_above(H,i/2,i))
//@loop_invariant 1 <= i && i < H->next;
//@loop_invariant is_heap_except_up(H, i);
```
The next step is to write this function. We copy the `is_heap` function, but check a node against its parent only when it is different from the distinguished element where the exception is allowed. 
```c
bool is_heap_except_up(heap H, int n)
//@requires is_safe_heap(H);
//@requires 1 <= n && n < H->next;
{
  for (int i = 2; i < H->next; i++)
  //@loop_invariant 2 <= i;
    if (!(i == n || ok_above(H, i/2, i))) return false;
  return true;
}
```
We observe that is_heap_except_up(H, 1) is equivalent to is_heap(H). That's because the loop over i starts at 2, so the exceptional case i == n never applies. Now we try to prove that this is indeed a loop invariant, and therefore our function is correct. Rather than using a lot of text we verify these properties on general diagrams. Other versions of this diagram are entirely symmetric. On the left is the relevant part of the heap before the swap and on the right is the relevant part of the heap after the swap. The relevant nodes in the tree are labeled with their priority. Nodes that may be above a or below c, c₁, c₂ and to the right of a are not shown. These do not enter into the invariant discussion, since their relations between each other and the shown nodes remain fixed. Also, if x is in the last row the constraints regarding c₁ and c₂ are vacuous. We know the following properties before the swap (left diagram), from which the properties needed after the swap (right diagram) follow as indicated:

Before the swap:
1. \( a \leq b \) (order)
2. \( b \leq c \) (order)
3. \( x \leq c_1 \) (order)
4. \( x \leq c_2 \) (order)
5. \( x < b \) (since we swap)

After the swap:
- \( a \) vs. \( x \): allowed exception
- \( x \leq c \): from (5) and (2)
- \( x \leq b \): from (5)
- \( b \leq c_1 \): ??
- \( b \leq c_2 \): ??

(For this and similar examples, we'll assume that we're using a min-heap.) So we see that simply stipulating the (temporary) invariant that every node is greater or equal to its parent except for the one labeled \( x \) is not strong enough. It is not necessarily preserved by a swap. But we can strengthen it a bit. You might want to think about how before you move on to the next page. The strengthened invariant also requires that the children of the potentially violating node \( x \) are greater or equal to their grandparent! Let's reconsider the diagrams. We have more assumptions on the left now ((6) and (7)), but we have also two additional proof obligations on the right (\( a \leq c \) and \( a \leq b \)).

Before the swap:
1. \( a \leq b \) (order)
2. \( b \leq c \) (order)
3. \( x \leq c_1 \) (order)
4. \( x \leq c_2 \) (order)
5. \( x < b \) (since we swap)
6. \( b \leq c_1 \) (grandparent)
7. \( b \leq c_2 \) (grandparent)

After the swap:
- \( a \) vs. \( x \): allowed exception
- \( a \leq c \): from (1) and (2)
- \( a \leq b \): (1)
- \( x \leq c \): from (5) and (2)
- \( x \leq b \): from (5)
- \( b \leq c_1 \): (6)
- \( b \leq c_2 \): (7)

Success! We just need to add an additional function that checks this loop invariant:
```c
bool grandparent_check(heap H, int n)
//@requires is_safe_heap(H);
//@requires 1 <= n && n < H->next;
{
  if (n == 1) return true;
  if (n*2 >= H->next) return true;     // No children
  if (n*2 + 1 == H->next)              // Left child only
    return ok_above(H, n/2, n*2);
  return ok_above(H, n/2, n*2) && ok_above(H, n/2, n*2 + 1);
}
```
Using this additional invariant, we have a loop that provably restores the is_heap invariant.
```c
while (i > 1 && !ok_above(H,i/2,i))
//@loop_invariant 1 <= i && i < H->next;
//@loop_invariant is_heap_except_up(H, i);
//@loop_invariant grandparent_check(H, i);
{
  swap_up(H, i);
  i = i/2;
}
```
Note that the strengthened loop invariants (or, rather, the strengthened definition of what it means to be a heap except in one place) are not necessary to show that the postcondition of pq_add (i.e. is_heap(H)) is implied. 
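For reference, the insertion fragments developed above assemble into the following sketch of the complete pq_add; the body and the loop invariants are exactly the ones derived in this section.

```c
void pq_add(heap H, elem e)
//@requires is_heap(H) && !pq_full(H);
//@ensures is_heap(H);
{
  H->data[H->next] = e;   /* place the new element in the next free slot */
  (H->next)++;
  int i = H->next - 1;    /* index of the newly inserted element */
  while (i > 1 && !ok_above(H, i/2, i))
  //@loop_invariant 1 <= i && i < H->next;
  //@loop_invariant is_heap_except_up(H, i);
  //@loop_invariant grandparent_check(H, i);
  {
    swap_up(H, i);        /* swap the element with its parent */
    i = i/2;              /* continue from the parent's position */
  }
}
```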
Postcondition: If the loop exits, we know the loop invariants and the negated loop guard: \[ 1 \leq i < next \quad \text{(LI 1)} \] \[ is\_heap\_except\_up(H, i) \quad \text{(LI 2)} \] Either \( i \leq 1 \) or \( \text{ok\_above}(H, i/2, i) \) (negated loop guard). We distinguish the two cases. Case: \( i \leq 1 \). Then \( i = 1 \) from (LI 1), and \( is\_heap\_except\_up(H, 1) \). As observed before, that is equivalent to \( is\_heap(H) \). Case: \( \text{ok\_above}(H, i/2, i) \). Then the only index \( i \) where \( is\_heap\_except\_up(H, i) \) allows an exception in fact satisfies \( \text{ok\_above}(H, i/2, i) \), so no exception is actually needed, and we have \( is\_heap(H) \). 7 Deleting the Minimum and Sifting Down Recall that deleting the minimum swaps the root with the last element in the current heap and then applies the sifting down operation to restore the invariant. As with insert, the operation itself is rather straightforward, although there are a few subtleties. First, we have to check that $H$ is a heap, and that it is not empty. Then we save the minimal element, swap it with the last element (at $next - 1$), and delete the last element (now the element that was previously at the root) from the heap by decrementing $next$.
```c
elem pq_rem(heap H)
//@requires is_pq(H) && !pq_empty(H);
//@ensures is_pq(H);
{
  elem min = H->data[1];
  (H->next)--;
  if (H->next > 1) {
    H->data[1] = H->data[H->next];  /* H is no longer a heap! */
    sift_down(H);
  }
  return min;
}
```
Next we need to restore the heap invariant by sifting down from the root, with `sift_down(H)`. We only do this if there is at least one element left in the heap. But what is the precondition for the sifting down operation? Again, we cannot express this using the functions we have already written. Instead, we need a function `is_heap_except_down(H, n)` which verifies that the heap invariant is satisfied in $H$, except possibly at $n$. This time, though, it is between $n$ and its children where things may go wrong, rather than between $n$ and its parent as in `is_heap_except_up(H, n)`. In the pictures below this would be at $n = 1$ on the left and $n = 2$ on the right. We change the test accordingly.
```c
/* Valid heap except at n, looking down the tree */
bool is_heap_except_down(heap H, int n)
//@requires is_safe_heap(H);
//@requires 1 <= n && n < H->next;
{
  for (int i = 2; i < H->next; i++)
  //@loop_invariant 2 <= i;
    if (!(i/2 == n || ok_above(H, i/2, i))) return false;
  return true;
}
```
With this we have the right invariant to write our `sift_down` function. The tricky part of this function is the nature of the loop. Our loop index $i$ starts at the root, $i = 1$. We have reached a leaf if $2 \times i \geq \text{next}$ because if there is no left child, there cannot be a right one, either. So the outline of our function shapes up as follows:
```c
void sift_down(heap H)
//@requires is_safe_heap(H) && H->next > 1 && is_heap_except_down(H, 1);
//@ensures is_heap(H);
{
  int i = 1;
  while (2*i < H->next)
  //@loop_invariant 1 <= i && i < H->next;
  //@loop_invariant is_heap_except_down(H, i);
  //@loop_invariant grandparent_check(H, i);
```
We also have written down three loop invariants: the bounds for $i$, the heap invariant (everywhere, except possibly at $i$, looking down), and the grandparent check, which we anticipate from our previous problems. We want to return from the function if we have restored the invariant, that is if the element in index $i$ is okay above all of its children. 
However, there may be either 1 or 2 children (the loop guard checks that there will be at least one). So we have to guard this access by a bounds check. Clearly, when there is no right child, checking the left one is sufficient.
```c
while (2*i < H->next)
//@loop_invariant 1 <= i && i < H->next;
//@loop_invariant is_heap_except_down(H, i);
//@loop_invariant grandparent_check(H, i);
{
  int left = 2*i;
  int right = left+1;
  if (ok_above(H, i, left)
      && (right >= H->next || ok_above(H, i, right)))
    return;
  ...
```
If this test fails, we have to determine the smaller of the two children. If there is no right child, we pick the left one, of course. Once we have found the smaller one we swap the current one with the smaller one, and then make the child the new current node $i$.
```c
void sift_down(heap H)
//@requires is_safe_heap(H) && H->next > 1 && is_heap_except_down(H, 1);
//@ensures is_heap(H);
{
  int i = 1;
  while (2*i < H->next)
  //@loop_invariant 1 <= i && i < H->next;
  //@loop_invariant is_heap_except_down(H, i);
  //@loop_invariant grandparent_check(H, i);
  {
    int left = 2*i;
    int right = left+1;
    if (ok_above(H, i, left)
        && (right >= H->next || ok_above(H, i, right)))
      return;
    if (right >= H->next || ok_above(H, left, right)) {
      swap_up(H, left);
      i = left;
    } else {
      //@assert right < H->next && ok_above(H, right, left);
      swap_up(H, right);
      i = right;
    }
  }
  //@assert i < H->next && 2*i >= H->next;
  //@assert is_heap_except_down(H, i);
  return;
}
```
Before the second return, we know that `is_heap_except_down(H, i)` and \(2 \cdot i \geq next\). This means there is no node $j$ in the heap such that $j/2 = i$, so the exception in `is_heap_except_down` never applies. $H$ is indeed a heap. At this point we should give a proof that `is_heap_except_down` is really an invariant. This is left as Exercise 4. 8 Heapsort We rarely discuss testing in these notes, but it is useful to consider how to write decent test cases. Mostly, we have been doing random testing, which has some drawbacks but is often a tolerable first cut at giving the code a workout. It is *much* more effective in languages that are type safe such as C0, and even more effective when we dynamically check invariants along the way. In the example of heaps, one nice way to test the implementation is to insert a random sequence of numbers, then repeatedly remove the minimal element until the heap is empty. If we store the elements in an array in the order we take them out of the heap, the array should be sorted when the heap is empty! This is the idea behind heapsort. We first show the code, using the random number generator we have used for several lectures now, then analyze the complexity.
```c
int main() {
  int n = (1<<9)-1;     // 1<<9 for -d; 1<<13 for timing
  int num_tests = 10;   // 10 for -d; 100 for timing
  int seed = 0xc0c0ffee;
  rand_t gen = init_rand(seed);
  int[] A = alloc_array(int, n);
  heap H = pq_new(n);
  print("Testing heap of size "); printint(n);
  print(" "); printint(num_tests); print(" times ");
  for (int j = 0; j < num_tests; j++) {
    for (int i = 0; i < n; i++) {
      pq_add(H, rand(gen));
    }
    for (int i = 0; i < n; i++) {
      A[i] = pq_rem(H);
    }
    assert(pq_empty(H));        /* heap not empty */
    assert(is_sorted(A, 0, n)); /* heapsort failed */
  }
  print("Passed all tests!\n");
  return 0;
}
```
Now for the complexity analysis. Inserting $n$ elements into the heap is bounded by $O(n \times \log(n))$, since each of the $n$ inserts is bounded by $\log(n)$. 
Then the $n$ element deletions are also bounded by $O(n \times \log(n))$, since each of the $n$ deletions is bounded by $\log(n)$. So all together we get $O(2 \times n \times \log(n)) = O(n \times \log(n))$. Heapsort is asymptotically as good as mergesort or as good as the expected complexity of quicksort with random pivots. The sketched algorithm uses $O(n)$ auxiliary space, namely the heap. One can use the same basic idea to do heapsort in place, using the unused portion of the heap array to accumulate the sorted array. Testing, including random testing, has many problems. In our context, one of them is that it does not test the strength of the invariants. For example, say we write no invariants whatsoever (the weakest possible form), then compiling with or without dynamic checking will always yield the same test results. We really should be testing the invariants themselves by giving examples where they are not satisfied. However, we should not be able to construct such instances of the data structure on the client side of the interface. Furthermore, within the language we have no way to “capture” an exception such as a failed assertion and continue computation. 9 Summary We briefly summarize key points of how to deal with invariants that must be temporarily violated and then restored. 1. Make sure you have a clear high-level understanding of why invariants must be temporarily violated, and how they are restored. 2. Ensure that at the interface to the abstract type, only instances of the data structure that satisfy the full invariants are being passed. Otherwise, you should rethink all the invariants. 3. Write predicates that test whether the partial invariants hold for a data structure. Usually, these will occur in the preconditions and loop invariants for the functions that restore the invariants. This will force you to be completely precise about the intermediate states of the data structure, which should help you a lot in writing correct code for restoring the full invariants. Exercises Exercise 1 Write a recursive version of \texttt{is_heap}. Exercise 2 Write a recursive version of \texttt{is_heap_except_up}. Exercise 3 Write a recursive version of \texttt{is_heap_except_down}. Exercise 4 Give a diagrammatical proof for the invariant property of sifting down for delete (called \texttt{is_heap_except_down}), along the lines of the one we gave for sifting up for insert. Exercise 5 Say we want to extend priority queues so that when inserting a new element and the queue is full, we silently delete the element with the lowest priority (= maximal key value) before adding the new element. Describe an algorithm, analyze its asymptotic complexity, and provide its implementation. Exercise 6 Using the invariants described in this lecture, write a function \texttt{heapsort} which sorts a given array in place by first constructing a heap, element by element, within the same array and then deconstructing the heap, element by element. [\textbf{Hint:} It may be easier to sort the array in descending order and reverse in a last pass or use so called max heaps where the maximal element is at the top] Exercise 7 Is the array \texttt{H->data} of a heap always sorted?
{"Source-Url": "http://www.cs.cmu.edu:80/~rjsimmon/15122-f14/lec/16-resinvs.pdf", "len_cl100k_base": 5686, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 32518, "total-output-tokens": 6589, "length": "2e12", "weborganizer": {"__label__adult": 0.00051116943359375, "__label__art_design": 0.00029015541076660156, "__label__crime_law": 0.0004589557647705078, "__label__education_jobs": 0.0007576942443847656, "__label__entertainment": 6.937980651855469e-05, "__label__fashion_beauty": 0.00018870830535888672, "__label__finance_business": 0.0001246929168701172, "__label__food_dining": 0.00063323974609375, "__label__games": 0.0010728836059570312, "__label__hardware": 0.001201629638671875, "__label__health": 0.0006580352783203125, "__label__history": 0.00024628639221191406, "__label__home_hobbies": 0.00012123584747314452, "__label__industrial": 0.0004639625549316406, "__label__literature": 0.0003037452697753906, "__label__politics": 0.0003285408020019531, "__label__religion": 0.0007181167602539062, "__label__science_tech": 0.0085601806640625, "__label__social_life": 0.0001004934310913086, "__label__software": 0.0023136138916015625, "__label__software_dev": 0.9794921875, "__label__sports_fitness": 0.0004761219024658203, "__label__transportation": 0.0007367134094238281, "__label__travel": 0.0002608299255371094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21297, 0.01415]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21297, 0.40252]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21297, 0.82637]], "google_gemma-3-12b-it_contains_pii": [[0, 1085, false], [1085, 2952, null], [2952, 4091, null], [4091, 5079, null], [5079, 7007, null], [7007, 8124, null], [8124, 9699, null], [9699, 11260, null], [11260, 12747, null], [12747, 13838, null], [13838, 14800, null], [14800, 16160, null], [16160, 17895, null], [17895, 20096, null], [20096, 21297, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1085, true], [1085, 2952, null], [2952, 4091, null], [4091, 5079, null], [5079, 7007, null], [7007, 8124, null], [8124, 9699, null], [9699, 11260, null], [11260, 12747, null], [12747, 13838, null], [13838, 14800, null], [14800, 16160, null], [16160, 17895, null], [17895, 20096, null], [20096, 21297, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21297, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21297, null]], "pdf_page_numbers": [[0, 1085, 1], [1085, 2952, 2], [2952, 4091, 3], [4091, 5079, 4], [5079, 7007, 5], [7007, 8124, 6], [8124, 9699, 7], [9699, 11260, 8], [11260, 12747, 9], [12747, 13838, 10], [13838, 14800, 11], [14800, 16160, 12], [16160, 17895, 
13], [17895, 20096, 14], [20096, 21297, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21297, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
a9f8271e847beac99211303bd1c3821c7696ede6
NCL and ITU-T's Standardization Effort on Multimedia Application Frameworks for IPTV Marcelo Ferreira Moreno Departamento de Informática PUC-Rio Rio de Janeiro, RJ, Brasil moreno@inf.puc-rio.br Carlos Eduardo C. F. Batista Departamento de Informática PUC-Rio Rio de Janeiro, RJ, Brasil cbatista@inf.puc-rio.br Luiz Fernando Gomes Soares Departamento de Informática PUC-Rio Rio de Janeiro, RJ, Brasil lfgs@inf.puc-rio.br ABSTRACT Multimedia applications aimed at running on IPTV terminal devices must be platform-independent, because content creators and content providers cannot develop a specific application for each existent terminal device platform. Harmonization is a key characteristic for such an application environment, and agreeing upon technical standards to allow interoperability is a challenge that should be addressed carefully. The Multimedia Application Framework Recommendation (MAFR) Series (ITU-T H.760 series) is an effort by ITU-T to identify and harmonize the relevant multimedia application frameworks that are best suitable for IPTV services. Several established technologies from Broadcast, Cable, Web and IPTV markets are being studied and profiled (H.IPTV-MAFR drafts). This paper describes ITU-T Question 13 Study Group 16’s work on MAFR standards and discusses the relevancy of the Nested Context Language (NCL) in this context. Categories and Subject Descriptors General Terms Documentation, Standardization. Keywords NCL, Standardization, ITU-T, IPTV, Multimedia Application Framework 1. INTRODUCTION In IPTV services, content creators and content providers are able to add to their products multimedia applications that, in a certain way, will enrich the end users’ experience on viewing the television programming. Such applications are mainly based on multimedia content, including audio, video, text, pictures, and so on, and can be developed aiming at interactivity, electronic services, gaming etc. The IPTV terminal device market is characterized by having multiple vendors and many of them build products based on their own software and hardware platforms. But multimedia applications must be platform-independent because content creators and content providers cannot develop a specific application for each existent terminal device platform. Therefore, interoperability is mandatory. To promote IPTV equipment interoperability, ITU-T started the Multimedia Application Framework Recommendation Series (H.760 series), [1] as an effort to identify and harmonize the relevant multimedia application frameworks that are best suitable for IPTV services. Several established technologies from Broadcast, Cable, Web and IPTV markets are being studied and profiled (H.IPTV-MAFR drafts). Some new emerging technologies are also under discussion. ITU-T Question 13/Study Group 16 (a.k.a. Q13/16) is the workgroup in charge of that. With the MAFR series, terminal device vendors will have the certainty that their application platforms are compliant with a given market that specifies one or more MAFR technologies as its standardized application framework. Moreover, terminal device vendors will be able to compete in multiple markets and countries, because terminal device migration can be supported, by embedding well-known MAFR standards into the terminal device. They will also be able to define hybrid terminal devices that can be easily ported from a market to another, based on the latest techniques to build component-based software architectures and configurable systems. 
A given terminal device can be sold in Japan, South Korea and Brazil, for example, since the vendor can develop and make available to the consumer software components that implement their respective multimedia application engines. The Nested Context Language (NCL) is the first standardized technology in the MAFR series, under recommendation H.761 [2]. NCL’s features like spatiotemporal synchronization, content adaptation, multi-device exhibition and its glue-language approach make it an excellent solution for IPTV multimedia services. Moreover, it is a solution capable to promote the harmonization among MAFR technologies. This is the reason why NCL and its presentation engine, called Ginga-NCL, are also standardized in ITU-R BT.1699 [], ITU-T J.200 [] and J.201 [] as a solution for multiple format integration and spatiotemporal synchronization. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Webmedia ’10, October 5–8, 2010, Belo Horizonte, MG, Brazil. Copyright 2010 ACM 1-59113-000-0/00/0010…$10.00. This paper overviews the ongoing work in ITU-T regarding the MAFR recommendation series and identifies the relevancy of NCL in this scenario. The text is organized as follows: section 2 summarizes the MAFR Series of specifications; section 3 outlines the MAFR Common Suite, intended to define a minimum set of closely-related MAFR technologies to promote harmonization; section 4 discusses issues for the development of IPTV Widgets; section 5 focuses on the conformance testing specification; section 6 gives a general idea of the other trending topics being discussed in the MAFR scope; and finally, section 7 concludes the paper. 2. THE MAFR SERIES Recommendation ITU-T H.760 [1] identifies and describes the relevant standards of multimedia application frameworks. It is an overview of standards for declarative application frameworks and procedural application frameworks. Declarative application frameworks include HTML, CSS, DOM, SVG, DVB-HTML, BML, WTVML, CEA-2014, M3M and NCL. EcmaScript and Lua are mentioned as scripting languages to extend some of these declarative languages. Procedural application frameworks are based on GEM. H.760 also contains descriptions for M3M, BIFS and LASeR. An annex recommends language profiles to harmonize web-related technologies like HTML, DVB-HTML, CEA-2014 and BML. H.760 is limited to present an overview of multimedia application frameworks. The technologies actually standardized for ITU-T MAFR for IPTV services are specified in the other documents of the series. The following subsections explain these recommendations and draft recommendations under the MAFR Series. 2.1 ITU-T Recommendation H.761 The ITU-T Recommendation H.761 [2], entitled “Nested Context Language (NCL) and Ginga-NCL for IPTV services”, specifies NCL and its presentation engine, called Ginga-NCL as the first Recommendation produced as part of the MAFR Series. 
Nested Context Language (NCL) is a declarative XML-based language, initially designed for hypermedia document specification on the Web. Due to its flexibility, reuse facility, multi-device support, application content adaptability and, mainly, its intrinsic ability to easily define spatiotemporal synchronization among media assets (including viewer interactions), it is an excellent solution for IPTV systems. NCL is also used in the ISDB-Tb DTV broadcasting standard. NCL is a glue language that holds media objects together in a multimedia presentation, no matter which object types they are. Ginga-NCL is an NCL presentation engine built as a component of an IPTV middleware. As an example, NCL treats an HTML document as one of its possible media objects. In this way, NCL does not substitute but embeds XHTML-based documents. The same reasoning applies to other multimedia objects, as for example, a media object containing an MHEG application. Ginga-NCL also supports behavior modifications to running applications, also known as live editing, by accepting event descriptors and editing commands to change any structure of an NCL document under presentation in a terminal device. A particular NCL object type defined in Ginga-NCL is NCLua, an imperative media object with Lua code, Lua being the scripting language for Ginga-NCL. Because of its simplicity, efficiency and its powerful data description syntax, Lua was considered the natural scripting language for Ginga-NCL. The Lua engine is small and written in ANSI C, making it easily portable to several hardware platforms. An open source reference implementation of Ginga-NCL is also available under the GPLv2 license. This reference implementation was developed in a way that it can easily incorporate a variety of media-object players, for audio, video, image, text etc., including imperative execution engines. 2.2 ITU-T Recommendation H.762 Recommendation ITU-T H.762 [3] describes the high-level functionalities of the lightweight interactive multimedia environment (LIME) for IPTV. LIME (formerly BML for IPTV) supports functionalities in IPTV terminal devices to provide interactivity and a variety of content such as audio, video, graphics and text. Expected services include additional data such as text to enrich television programs, and two-way portal pages. The main part of LIME consists of the following components: - The "LIME-HTML" profile of XHTML 1.0. This profile is compliant with the "HTML for IPTV services" Recommendation of the multimedia application framework (MAFR) series currently under development. - The "LIME-CSS" profile of CSS1 and a part of CSS2. This profile is compliant with the "CSS for IPTV services" Recommendation of the MAFR series currently under development. - The "LIME-DOM" profile of the DOM specification. This profile is compliant with the "DOM for IPTV services" Recommendation of the MAFR series currently under development. - The scripting language "LIME-Script", which is a subset of ECMAScript but has functional extensions required for IPTV services. LIME-Script is compliant with the "ECMAScript for IPTV services" Recommendation of the MAFR series currently under development. 2.3 H.IPTV-MAFR.6 Draft recommendation H.IPTV-MAFR.6 [4] describes ECMAScript as one of the standardized multimedia application frameworks, to provide interoperable use of IPTV services. It gives the core ECMAScript profile as well as enhanced functionalities for IPTV services. 
ECMAScript is a scripting programming language used on the Web, often referred to as JavaScript or JScript after the two primary implementations of the specification. ECMAScript is supported in many applications and also included as a component in many presentation engines (PE) such as BML and DVB-HTML, which are used for digital data broadcasting. Some implementations have a completely different set of libraries, so applications written in one dialect of ECMAScript will not necessarily work in another. ECMAScript is an object-oriented programming language for performing computations and manipulating computational objects within a host environment. It was originally designed to be a web scripting language, providing a mechanism to enliven web pages in browsers and to perform server computation as part of a web-based client-server architecture. 2.4 H.IPTV-MAFR.10 Draft Recommendation H.IPTV-MAFR.10 [5] describes Scalable Vector Graphics (SVG), which is a language for describing two-dimensional graphics and graphical applications in XML. SVG allows for three types of graphic objects: vector graphic shapes (e.g., paths consisting of straight lines and curves), images and text. Graphical objects can be grouped, styled, altered and composited into previously rendered objects. The feature set includes nested transformations, clipping paths, alpha masks, filter effects and template objects. SVG drawings can be interactive and dynamic. Animations can be defined and triggered either declaratively (i.e., by embedding SVG animation elements in SVG content) or via scripting. Sophisticated applications of SVG are feasible by use of a supplemental scripting language which accesses the SVG DOM, which provides complete access to all elements, attributes and properties. The SVG Basic and SVG Tiny profiles are targeted at resource-limited devices and are part of the 3GPP platform for third generation mobile phones. SVG Print is a set of guiding principles to produce final-form documents in XML suitable for archiving and printing. 2.5 H.IPTV-MAFR.14 Draft Recommendation H.IPTV-MAFR.14 [7] specifies the Lua scripting language. Lua can be viewed as an extension language, because it has no notion of a "main" program: it only works embedded into a host client, called the embedding program or simply the host. This host program may invoke functions to execute a piece of Lua code, may write and read Lua variables, and may register C functions to be called by Lua code. Through the use of C functions, Lua may be augmented to cope with a wide range of different domains, thus creating customized programming languages sharing a syntactical framework. The Lua distribution includes a sample host program called "lua", which uses the Lua library to offer a complete, stand-alone Lua interpreter. The Lua engine is distributed as free software under the MIT license. The Ginga-NCL [2] presentation engine integrates NCL and Lua players into a declarative environment. NCL and Lua frameworks can be used independently in other declarative environments, but if they are used together they shall follow the Ginga-NCL specification. Draft Recommendation H.IPTV-MAFR.14 presents the Lua specification as a general-purpose language. However, any conformant implementation of Lua for IPTV Services shall follow the restrictions it states, and shall provide the IPTV Core API, which is also described by the recommendation. 
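To make the embedding model described above concrete, the following minimal C sketch shows a host program driving the Lua engine through the standard Lua C API. It is an illustration only, not part of the Recommendation; host_log and channel are made-up names for this example.

```c
/* Minimal sketch of embedding Lua in a C host, as described above.
 * Uses the standard Lua C API; host_log and channel are hypothetical. */
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* A C function registered so that Lua scripts can call back into the host. */
static int host_log(lua_State *L) {
    const char *msg = luaL_checkstring(L, 1);
    printf("[host] %s\n", msg);
    return 0;                              /* number of results pushed */
}

int main(void) {
    lua_State *L = luaL_newstate();        /* create a Lua engine instance */
    luaL_openlibs(L);                      /* load the standard libraries  */

    lua_register(L, "host_log", host_log); /* register a C function        */
    lua_pushinteger(L, 42);
    lua_setglobal(L, "channel");           /* write a Lua variable         */

    /* Execute a piece of Lua code supplied by the host. */
    if (luaL_dostring(L, "host_log('tuned to channel ' .. channel)") != 0) {
        fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
    }

    lua_close(L);
    return 0;
}
```

This mirrors the three interactions the text mentions: executing Lua code, reading and writing Lua variables, and registering C functions callable from Lua.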
MAFR.14 also includes the description of an IPTV Extended API, which is optional, but which shall be followed if any of its functionalities are to be implemented.

3. COMMON MAFR SUITE

As aforementioned, ITU-T Recommendation H.760 briefly describes the relevant MAFR technologies but does not specify how they are integrated or harmonized [1]. Some harmonization effort can be found in Annex A of H.760, which describes common usage of related technologies such as HTML, DOM, CSS and ECMAScript. However, that effort was preliminary, and more discussion on MAFR harmonization has recently started in ITU-T Q13/16. It was agreed in ITU-T Q13/16 that a Common MAFR Suite could be proposed to recommend a minimum set of closely related MAFR technologies that an IPTV terminal device shall support. The technologies in the common suite must be integrated into a package that is lightweight enough to be embedded into baseline terminal devices [8]. A Common MAFR Suite would improve not only equipment interoperability but also the compatibility between different markets, and would therefore enable the global interchange of multimedia content. This is aligned with the actions that ITU-T Resolution 76 resolves (see Section 5 for details).

3.1 NCL as a candidate technology for a Common MAFR Suite

NCL separates document (or application) content and structure. NCL itself does not define any media content. Instead, it defines the glue that holds media objects together in multimedia presentations [2]. An NCL document only defines how media objects are structured and related, in time and space. As a glue language, it neither restricts nor prescribes the media-object content types. In this sense, we may have image objects (GIF, JPEG, etc.), video objects (MPEG, MOV, etc.), audio objects (MP3, WMA, etc.), text objects (TXT, PDF, etc.), imperative objects (Java Xlet [9], Lua, etc.) and declarative objects (XHTML, SVG, etc.) defined as NCL media objects. Which media objects are supported depends on the media players that are coupled into the NCL formatter. One of these players is the main video and audio decoder/player, usually implemented in hardware in the IPTV terminal device. In this way, the main video and audio are treated like all other media objects that may be related using NCL.

ITU-T Recommendation H.761 states that the XHTML-based media object [10] is essential. Hence, NCL does not substitute but embeds XHTML-based documents (or objects). As with other media objects, which XHTML-based language an NCL formatter will support is an implementation choice and, therefore, depends on which XHTML browser will act as a media player integrated into the NCL formatter. It is thus possible to have BML [11] browsers, DVB-HTML [12] browsers and ACAP-X [13] browsers embedded in an NCL document player; it is even possible to have them all. It is also possible to receive browser code through datacasting and install it as a plug-in (typically Lua objects), or to have a harmonization browser implemented that receives the complementary part, if needed, as a plug-in, in order to convert the XHTML player into one of the several IPTV browser standards.

Given the above, NCL can be viewed as a feasible solution to promote lightweight integration among MAFR technologies. Figure 1 illustrates an example of an MAFR suite with NCL. Note that closely-related MAFR technologies are also included in the figure. More contributions are expected to define the Common MAFR Suite.
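To make the glue-language idea above more concrete, the sketch below shows a minimal NCL 3.0 document of the kind discussed in this section. All identifiers, file names and the imported connector base are hypothetical; the sketch simply starts a main video and, when that video begins, starts an XHTML-based information object in a second screen region.

```xml
<!-- Minimal, hypothetical NCL 3.0 document: the <media> elements are the "glued"
     objects; the <link> expresses the temporal relation between them. -->
<ncl id="glueExample" xmlns="http://www.ncl.org.br/NCL3.0/EDTVProfile">
  <head>
    <regionBase>
      <region id="rgVideo" width="100%" height="100%"/>
      <region id="rgInfo" left="65%" top="65%" width="35%" height="35%"/>
    </regionBase>
    <descriptorBase>
      <descriptor id="dVideo" region="rgVideo"/>
      <descriptor id="dInfo" region="rgInfo"/>
    </descriptorBase>
    <connectorBase>
      <!-- assumes a connector file defining the onBeginStart causal connector -->
      <importBase documentURI="causalConnBase.ncl" alias="conn"/>
    </connectorBase>
  </head>
  <body>
    <port id="entry" component="mainVideo"/>
    <!-- main audio/video stream, played by the terminal's hardware decoder -->
    <media id="mainVideo" src="media/program.mpg" descriptor="dVideo"/>
    <!-- XHTML-based object, played by whatever browser is coupled to the formatter -->
    <media id="infoPage" src="media/info.html" descriptor="dInfo"/>
    <!-- when the video starts, start the information page -->
    <link xconnector="conn#onBeginStart">
      <bind role="onBegin" component="mainVideo"/>
      <bind role="start" component="infoPage"/>
    </link>
  </body>
</ncl>
```

Which decoder actually renders program.mpg and which browser renders info.html is, as noted above, an implementation choice of the NCL formatter.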
Currently, only NCL and Lua are mentioned as candidate technologies to integrate the suite.

4. IPTV WIDGETS

A widget is an interactive element of a graphical user interface (GUI) that displays an information arrangement. Commonly, the term widget is also used to specify an element different from basic GUI components because it provides a single interaction point for the direct manipulation of data in a particular context; as visual components, widgets can be combined to form an application or may be used separately, as individual applications. Currently, though, the term is more often used to describe lightweight applications such as a stock market monitor, a weather forecast, a calculator, a news aggregator, etc. The W3C defines a widget as an interactive single-purpose application for displaying and/or updating local data or data on the Web, packaged in a way to allow a single download and installation on a user's system [14].

Widgets are used in many environments and with different applicability: they may be found on computer desktops, on mobile devices, in web applications and on digital TV platforms as well. A widget engine is the software layer that enables users to run and display widgets on a graphical user interface, such as the graphical layer of an IPTV terminal device. Such widgets commonly provide relevant information graphically and/or provide easy access to frequently used functions of a system. IPTV widgets, thus, are lightweight applications that are used frequently by the IPTV terminal user, such as calendars and news aggregators, with an easily accessible graphical user interface, often staying on the display.

IPTV widgets may be classified by their functionality. The classification below is a non-exhaustive list of categories collected from [15], revised and extended to the IPTV domain:

- Accessory Widgets: self-contained widgets that do not require support from a content provider or from other applications (e.g., clocks, calculators, offline games);
- Application Widgets: widgets that just present a different interface for a regular application already present in the terminal device (e.g., mini player, address book, picture frame);
- Information Widgets: widgets that display processed data downloaded from a content provider (e.g., news readers, information tickers, weather forecasters);
- Service Widgets: information widgets that are related to IPTV services (channel-specific EPG, content recommenders, service provider announcers).

Since an IPTV widget may run on different kinds of terminal devices, such as set-top boxes, TV sets and mobile devices, portability is an important issue and should be addressed based on standardized technologies supported by widget engines in the terminal device. IPTV widgets must be developed using the technologies defined in the H.760 series (Multimedia Application Framework), such as HTML, LIME, CSS, ECMAScript, NCL and Lua.

A widget engine is the entity responsible for instantiating widget(s) on the client side (i.e., the IPTV terminal device). As shown in Figure 2, the widget engine instantiates selected widget(s) using a number of technologies in the IPTV terminal device.

Figure 2. Widgets and Widget Engine
Currently, ITU-T Q13/16 is conducting studies on the requirements for widget development, in order to establish a harmonized service model and framework for IPTV services, considering how these requirements are addressed by the Recommendation ITU-T H.760 series (H.IPTV-MAFR). These requirements, associated with the characteristics of a widget application, were used to define a set of guidelines for the development of IPTV widgets. The following guidelines are independent of the MAFR technology chosen to develop a widget:

- Packaging – a widget must be packaged using a standard format recognizable by IPTV widget engines. Widget developers must keep in mind that this package will be distributed to different devices and locations.
- Metadata and Configuration – widget developers shall have the tools to inform the widget engine of the configuration information of a widget, comprising: metadata elements about a widget, such as its title, some form of identification and versioning information; metadata containing authorship information; a bootstrapping mechanism in order to enable widget user agents to automatically instantiate a widget; and environment configuration parameters.
- Security – requirements that a conforming specification needs to address in order to standardize an adequate security model that permeates all elements involved in the execution of IPTV widgets. Such a security model must adopt a robust and flexible digital signature scheme and processing model, and must limit the potential for widgets to perform harmful operations on the terminal device.

NCL and LIME widgets are being standardized, and a harmonization effort is under discussion. The final intention is to define common procedures for widget packaging, signing, configuration and metadata.

5. CONFORMANCE TESTING SPECIFICATIONS

ITU-T Resolution 76, "Studies related to conformance and interoperability testing, assistance to developing countries, and a possible future ITU Mark programme", resolves that study groups must take actions to improve interoperability as soon as possible. These actions include, among others:

- Development of Recommendations that deal with conformance testing;
- Progress of Recommendations to address interoperability testing;
- Definition of conformance and interoperability testing requirements for verification of the parameters defined in Recommendations and to ensure full compatibility;
- Assistance of national and regional testing entities to ITU-T in implementing conformance and interoperability testing;
- Identification of Recommendations that would be candidates for interoperability, which are capable of providing end-to-end interoperable services on a global scale.

Resolution 76 and its appreciation by Q13/16 led the workgroup to intensify its discussions on harmonization and interoperability. The work item on the Common MAFR Suite (see Section 3) is one result of Q13/16 actions; another is the specification of conformance and interoperability tests for IPTV Recommendations. The process of conformance testing aims to verify whether the implementation of a particular standard matches the requirements set by that standard. The correctness of an implementation is determined through the verification of the results generated by tests conducted based on the standard's specific criteria. A new Recommendation series is under development to specify conformance tests for IPTV Recommendations; the MAFR series is included in this work item.
5.1 H.761 Conformance Testing Specification

The conformance testing specification for H.761 is under development and aims to cover all functional aspects defined by Recommendation H.761. Currently, it is registered as Draft New Recommendation H.IPTV-CONF.5, "H.761 conformance testing specification". H.IPTV-CONF.5 is composed of more than 600 test assertions referring to the functionalities provided by NCL 3.0 and more than 700 test assertions related to the NCLua API. The test cases are being developed for a test suite reference implementation, which will be part of the specification as an electronic annex. Each test assertion (Figure 3) is composed of the following information:

- A unique id.
- Reference: the source document and clause where the normative statement that the test assertion addresses is defined. Example: ITU-T H.761 - 7.2.3.
- Prescription level: indicates how imperative the referenced normative statement is. Possible values are: "mandatory" (required), "permitted" (optional) and "preferred" (recommended).
- Validation type: "positive" if the assertion instructions take the form of a task expected to succeed, or "negative" otherwise.
- Target: the element(s) and attributes involved in the assertion. Example: element <region>, attribute title.
- Instructions: the procedure and its expected behavior. Example: "Create a document with nested <region> elements. The regions must be displayed accordingly".
- Normative statement: i) text (excerpted from the specification) that originated the assertion, or ii) a normative statement summary, or iii) a test objective. Example: "a <regionBase> element, ..., defines a set of <region> elements, each of which may contain another set of nested <region> elements, and so on, recursively".

Figure 3. An example H.761 test assertion

6. OTHER TOPICS UNDER DISCUSSION

The following subsections present other relevant topics under discussion within the scope of the MAFR Recommendation series.

6.1 Web-based terminal middleware (WBTM)

Web-based IPTV terminal middleware supports basic and advanced interactive IPTV services for the IPTV terminal device. It is required to review the IPTV service requirements and architecture, as well as IPTV terminal devices; detailed descriptions of the architecture are in the Y.1900 series, and of IPTV terminals in the H.720 series. Web-based IPTV terminal middleware is needed to define the interfaces of the IPTV terminal functional architecture and the structure of the presentation engine. H.720 defines the terminal middleware as located on the terminal side: it is "the mediating entity between two information elements". [16] locates the terminal middleware in the terminal device, and for WBTM it could be composed of "various engines (e.g. HTML Browser) along with a set of high-level services (e.g. HTML, NCL, CSS, EcmaScript, Lua)". The WBTM presentation engine is depicted in Figure 4.

Figure 4. WBTM presentation engine

6.2 3D IPTV

The increasing demand for multimedia applications, together with the current state of graphics and human-interaction hardware, are the key factors enabling the development of multimedia applications with 3D graphics for media consumers in general. Transmission of 3D video over TV channels is already being done worldwide, and soon most IPTV terminals will be equipped with 3D video decoding capabilities. Thus, it is also necessary to provide technologies that allow the development of interactive applications featuring 3D content.
Currently, some web-related standards already gather features for the development of applications with 3D graphics. Technologies like VRML [18], X3D [19] and 3DML [20] can be embedded in other declarative languages like XHTML and NCL. Studies are being conducted focusing on the interoperability aspects of the relevant technologies related to the development of 3D applications and 3D TV, so that harmonization can be pursued for 3D IPTV as well. Q13/16 is discussing 3D IPTV and expects contributions for the first discussions on 3D interactivity.

7. CONCLUSION

This paper outlined the current status of the standardization effort being conducted in ITU-T Q13/16 regarding multimedia application frameworks and related topics for IPTV services. Several ramifications emerged from the initial work, all related to new technology possibilities, which are being discussed. As mentioned throughout this paper, interoperability is a mandatory characteristic for the IPTV terminal device market. The technologies being considered for the MAFR series not only fulfill the current requirements for multimedia application development but also have the potential to enable new application models and services for the IPTV environment.

In this context, the Nested Context Language and its presentation engine, Ginga-NCL, are emerging as an excellent solution for a multimedia application framework due to their ability to support advanced tasks like spatiotemporal synchronization, live editing, multi-device exhibition and content adaptation. An important feature of NCL comes to attention when the discussion regards integration, harmonization and interoperability: its glue-language approach, which allows for the integration of different content formats and the conception of harmonized specifications. It was from discussions around NCL that work items like the Common MAFR Suite, the conformance testing specifications and IPTV widgets started in Q13/16.

This paper's authors have been contributing to Q13/16 work since 2007, when the question was under study in the IPTV Focus Group. One of the authors is an associate rapporteur of Q13/16. The TeleMidia Lab has conducted all the NCL standardization effort in ITU-T and its consequent discussions. With the recent adoption of Ginga-NCL in many South American countries, both for IPTV and terrestrial DTV, researchers and developers may be interested in participating in these activities.

8. REFERENCES
{"Source-Url": "http://www.telemidia.puc-rio.br/files/biblio/2010_10b_moreno.pdf", "len_cl100k_base": 6148, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22380, "total-output-tokens": 7377, "length": "2e12", "weborganizer": {"__label__adult": 0.0004737377166748047, "__label__art_design": 0.000865936279296875, "__label__crime_law": 0.0005035400390625, "__label__education_jobs": 0.0007567405700683594, "__label__entertainment": 0.0005130767822265625, "__label__fashion_beauty": 0.000213623046875, "__label__finance_business": 0.0005888938903808594, "__label__food_dining": 0.0003387928009033203, "__label__games": 0.00109100341796875, "__label__hardware": 0.0153350830078125, "__label__health": 0.0003919601440429687, "__label__history": 0.00045561790466308594, "__label__home_hobbies": 8.90493392944336e-05, "__label__industrial": 0.0007481575012207031, "__label__literature": 0.000316619873046875, "__label__politics": 0.0003783702850341797, "__label__religion": 0.0005240440368652344, "__label__science_tech": 0.32080078125, "__label__social_life": 8.887052536010742e-05, "__label__software": 0.07330322265625, "__label__software_dev": 0.5810546875, "__label__sports_fitness": 0.0003032684326171875, "__label__transportation": 0.0005788803100585938, "__label__travel": 0.00023829936981201172}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 32259, 0.03363]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 32259, 0.36731]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 32259, 0.89716]], "google_gemma-3-12b-it_contains_pii": [[0, 4994, false], [4994, 11172, null], [11172, 17437, null], [17437, 22457, null], [22457, 27090, null], [27090, 32259, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4994, true], [4994, 11172, null], [11172, 17437, null], [17437, 22457, null], [22457, 27090, null], [27090, 32259, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 32259, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 32259, null]], "pdf_page_numbers": [[0, 4994, 1], [4994, 11172, 2], [11172, 17437, 3], [17437, 22457, 4], [22457, 27090, 5], [27090, 32259, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 32259, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
52c280c85fb0ef040df6471d4de2984658f01bdf
Object-oriented programming (OOP) is a computer science term used to characterize a programming approach whose development began in the 1960s. The term 'object-oriented programming' was originally coined at Xerox PARC to describe the methodology of using objects as the foundation for computation. By the 1980s, OOP had risen to prominence as the programming paradigm of choice, exemplified by the success of C++. Currently, Java, J2EE, C++, C#, Visual Basic .NET, Python and JavaScript are popular object-oriented languages and platforms that any career-oriented software engineer or developer should be familiar with. OOP is widely accepted as being far more flexible than other approaches to computer programming. OOP uses three basic concepts as its fundamentals: classes, objects and methods. Additional key concepts are inheritance, abstraction, polymorphism, event handling and encapsulation.

**Common Questions & Answers**

1) **What is meant by Object Oriented Programming?** OOP is a method of programming in which programs are organized as cooperative collections of objects. Each object is an instance of a class and each class belongs to a hierarchy.

2) **What is a Class?** A class is a template for a set of objects that share a common structure and a common behavior.

3) **What is an Object?** An object is an instance of a class. It has state, behavior and identity.

4) **What is an Instance?** An instance has state, behavior and identity. The structure and behavior of similar objects are defined in their common class. An instance is also called an object.

5) **What are the core OOP concepts?** Abstraction, encapsulation, inheritance and polymorphism are the core OOP concepts.

6) **What is meant by abstraction?** Abstraction defines the essential characteristics of an object that distinguish it from all other kinds of objects. Abstraction provides crisply defined conceptual boundaries relative to the perspective of the viewer. It is the process of focusing on the essential characteristics of an object. Abstraction is one of the fundamental elements of the object model.

7) **What is meant by Encapsulation?** Encapsulation is the process of compartmentalizing the elements of an abstraction that define its structure and behavior. Encapsulation helps to separate the contractual interface of an abstraction from its implementation.

8) **What is meant by Inheritance?** Inheritance is a relationship among classes wherein one class shares the structure or behavior defined in another class. This is called single inheritance. If a class shares the structure or behavior from multiple classes, it is called multiple inheritance. Inheritance defines an "is-a" hierarchy among classes in which one subclass inherits from one or more generalized superclasses.

9) **What is meant by Polymorphism?** Polymorphism literally means taking more than one form. Polymorphism is the characteristic of being able to assign a different behavior or value in a subclass to something that was declared in a parent class.

10) **What is an Abstract Class?** An abstract class is a class that has no instances. An abstract class is written with the expectation that its concrete subclasses will add to its structure and behavior, typically by implementing its abstract operations.

11) **What is an Interface?** An interface is an outside view of a class or object which emphasizes its abstraction while hiding its structure and the secrets of its behavior.

12) **What is a base class?**
A base class is the most generalized class in a class structure. Most applications have such root classes. In Java, Object is the base class for all classes.

13) **What is a subclass?** A subclass is a class that inherits from one or more classes.

14) **What is a superclass?** A superclass is a class from which another class inherits.

15) **What is a constructor?** A constructor is an operation that creates an object and/or initializes its state.

16) **What is a destructor?** A destructor is an operation that frees the state of an object and/or destroys the object itself. In Java, there is no concept of destructors; this is taken care of by the JVM.

17) **What is meant by Binding?** Binding denotes the association of a name with a class.

18) **What is meant by static binding?** Static binding is a binding in which the class association is made at compile time. This is also called early binding.

19) **What is meant by dynamic binding?** Dynamic binding is a binding in which the class association is not made until the object is created at execution time. It is also called late binding.

20) **Define Modularity.** Modularity is the property of a system that has been decomposed into a set of cohesive and loosely coupled modules.

21) **What is meant by Persistence?** Persistence is the property of an object by which its existence transcends space and time.

22) **What is collaboration?** Collaboration is a process whereby several objects cooperate to provide some higher-level behavior.

23) **In Java, how do you make an object completely encapsulated?** All the instance variables should be declared as private, and public getter and setter methods should be provided for accessing the instance variables.

24) **How is polymorphism achieved in Java?** Inheritance, overloading and overriding are used to achieve polymorphism in Java.

**What is a class?** A class is a concrete representation of an entity. It represents a group of objects which hold similar attributes and behavior. It provides abstraction and encapsulation.

**What is an Object? What is Object Oriented Programming?** An object represents or resembles a physical/real entity. An object is simply something you can give a name. Object Oriented Programming is a style of programming that represents a program as a system of objects and enables code reuse.

**What is Encapsulation?** Encapsulation is the binding of attributes and behaviors: hiding the actual implementation and exposing the functionality of an object. Encapsulation is the first step towards OOP; it is the procedure of wrapping data and functions into a single unit (called a class). Its main aim is to protect the data from the outside world.

**What is Abstraction?** Hiding the complexity. It is the process of defining a communication interface for the functionality and hiding the rest of the details.

**What is Overloading?** Adding a new method with the same name in the same or a derived class, but with a different number or types of parameters. It implements polymorphism.

**What is Overriding?** The process of creating a different implementation of a method having the same name as in the base class, in a derived class. It implements inheritance.

**What is Shadowing?** When a method is defined as final/sealed in the base class, and is therefore not overridable, but we need to provide a different implementation for it, the process is known as shadowing; it uses the shadows/new keyword.

**What is Inheritance?** It is the process of acquiring attributes and behaviors from another object (normally a class or interface).

**What is an Abstract class?** An abstract class is a special kind of class that cannot be instantiated.
It normally contains one or more abstract methods or abstract properties, and provides a common body for its subclasses.

**What is an Interface?** An interface has no implementation; it has only the signatures, or in other words, just the definitions of the methods without the body.

**What is Polymorphism?** It means more than one form: the ability to provide a different implementation based on a different number or type of parameters.

**What is Pure-Polymorphism?** When a method is declared as an abstract/virtual method in a base class and is overridden in a derived class. If we create a variable of the type of the base class and assign an object of a derived class to it, it will be decided at run time which implementation of the method is to be called. This is known as pure polymorphism or late binding.

**What is a Constructor?** A special function that is always called whenever an instance of the class is created.

- Same name as the class name
- No return type
- Automatically called when an object of the class is created
- Used to initialize the members of the class

```cpp
class Test {
    int a, b;
public:
    Test() {    // constructor of class Test
        a = 9;
        b = 8;
    }
};
```

Here Test() is the constructor of class Test.

**What is a copy constructor?** A constructor which initializes its object's member variables (by shallow copying) with another object of the same class. If you don't implement one in your class, the compiler implements one for you. For example (assuming Test also defines a constructor taking an int):

```cpp
Test t1(10);    // calling Test constructor
Test t2(t1);    // calling Test copy constructor
Test t2 = t1;   // calling Test copy constructor
```

Copy constructors are called in the following cases:

- when a function returns an object of that class by value
- when an object of that class is passed by value as an argument to a function
- when you construct an object based on another object of the same class
- when the compiler generates a temporary object

**What is a default Constructor?** A constructor with no arguments, or whose arguments all have default values. In the question above, Test() is a default constructor.

**What is a Destructor?** A special method called by the GC just before the object is reclaimed by the GC.

**How is a base class method hidden?** By declaring a method in the derived class with the keyword new. This will hide the base class method, and the old method will be suppressed.

**What command is used to implement properties in C#?** The get and set accessors are used to implement properties in C#.

**What is method overloading?** Method overloading is having methods with the same name but carrying different signatures. This is useful when you want a method to behave differently depending upon the data passed to it.

**Can constructors have parameters?** Yes, constructors can have parameters, so we can overload them.

**What are Static Assembly and Dynamic Assembly?** Static assemblies can include .NET Framework types (interfaces and classes) as well as resources for the assembly (bitmaps, JPEG files, resource files, and so forth). Static assemblies are stored on disk. Dynamic assemblies run directly from memory and are not saved to disk before execution.

**Describe the functionality of an assembly.** It is the smallest unit that has version control. All types and resources in the same assembly are versioned as a unit and support side-by-side execution. Assemblies contain the metadata and other identities which allow the common language runtime to execute them. They are the boundaries providing the type check. They are the unit where security permissions are requested and granted.

**What is serialization?** Serialization is the process of converting an object into a stream of bytes.
De-serialization is the opposite process of creating an object from a stream of bytes. Serialization/de-serialization is mostly used to transport objects (e.g., during remoting) or to persist objects (e.g., to a file or database). Two separate mechanisms are provided by the .NET class library for serialization: XmlSerializer, and SoapFormatter/BinaryFormatter. Microsoft uses XmlSerializer for Web Services, and uses SoapFormatter/BinaryFormatter for remoting.

**What are C++ storage classes?**

- **auto:** the default. Variables are automatically created and initialized when they are defined and are destroyed at the end of the block containing their definition (when they go out of scope). They are not visible outside that block.

```cpp
auto int x;
int y;       // both are the same declaration; auto is the default
```

- **register:** a type of auto variable. A suggestion to the compiler to use a CPU register for performance (generally used in loops).

- **static:** a variable that is known only within the function that contains its definition but is never destroyed and retains its value between calls to that function. It exists from the time the program begins execution.

```cpp
void example() {
    static int x = 0;    // static variable
    x++;
    cout << x << endl;
}
```

If this function is called 10 times, the output will be 1, 2, 3, 4, etc. The value of the variable x is preserved through function calls. If a static variable is declared as a member of a class, then it will preserve the value for all the objects of the class, i.e., one copy of this data variable will be shared by all objects of the class. Note: if we declare the variable as int x = 0; (non-static) in the above example, then it will print 1 every time the function is called.

- **extern:** a static variable whose definition and placement is determined when all object and library modules are combined (linked) to form the executable code file. It can be visible outside the file where it is defined.

**What is RTTI?** Runtime type identification (RTTI) lets you find the dynamic type of an object when you have only a pointer or a reference to the base type. RTTI is the official way in standard C++ to discover the type of an object and to convert the type of a pointer or reference (that is, dynamic typing). The need came from practical experience with C++. RTTI replaces many homegrown versions with a solid, consistent approach.

**What is a friend function?** As the name suggests, the function acts as a friend to a class. As a friend of a class, it can access the class's private and protected members. A friend function is not a member of the class, but it must be listed in the class definition.

**What is the scope resolution operator?** The scope resolution operator (::) can be used to define the member functions of a class outside the class.

**What do you mean by pure virtual functions?** A pure virtual member function is a member function that the base class forces derived classes to provide. Normally these member functions have no implementation. Pure virtual functions are equated to zero.

```cpp
class Shape {
public:
    virtual void draw() = 0;
};
```

**What is the difference between declaration and definition?** The declaration tells the compiler that at some later point we plan to present the definition of this declaration.
E.g.:

```cpp
void stars();    // function declaration
```

The definition contains the actual implementation. E.g.:

```cpp
void stars() {                   // function body
    for (int j = 10; j >= 0; j--)
        cout << '*';
    cout << endl;
}
```

**What are the advantages of inheritance?** It permits code reusability. Reusability saves time in program development. It encourages the reuse of proven and debugged high-quality software, thus reducing problems after a system becomes functional.

**Association.** Association is a relationship where all objects have their own lifecycle and there is no owner. Let's take the example of Teacher and Student. Multiple students can associate with a single teacher and a single student can associate with multiple teachers, but there is no ownership between the objects and both have their own lifecycle. Both can be created and deleted independently.

**Aggregation.** Aggregation is a specialized form of association where all objects have their own lifecycle, but there is ownership and a child object cannot belong to another parent object. Let's take the example of Department and Teacher. A single teacher cannot belong to multiple departments, but if we delete the department, the teacher object will not be destroyed. We can think of it as a "has-a" relationship.

**Composition.** Composition is again a special form of aggregation, and we can call it a "death" relationship. It is a strong type of aggregation. Child objects do not have their own lifecycle, and if the parent object is deleted, all child objects will also be deleted. Let's take again the example of the relationship between House and Room: a house can contain multiple rooms, there is no independent life of a room, and a room cannot belong to two different houses; if we delete the house, the rooms will automatically be deleted. Let's take another example, the relationship between Question and Option: a single question can have multiple options, and an option cannot belong to multiple questions; if we delete a question, its options will automatically be deleted.

How to implement (code example):

```cpp
class Point {
    int x;
    int y;
};

class Circle {
    Point point;    // Circle is composed of Point
};
```

Here Circle is composed of Point: you can't make a circle without a point (strong dependency).

**Distinguish between the terms fatal error and non-fatal error. Why might you prefer to experience a fatal error rather than a non-fatal error?** A fatal error causes a program to terminate prematurely. A non-fatal error occurs when the logic of the program is incorrect, and the program does not work properly. A fatal error is preferred for debugging purposes: a fatal error immediately lets you know there is a problem with the program, whereas a non-fatal error can be subtle and possibly go undetected.

**What are virtual functions? Describe a circumstance in which virtual functions would be appropriate.** Virtual functions are functions with the same function prototype that are defined throughout a class hierarchy. At least the base class occurrence of the function is preceded by the keyword virtual. Virtual functions are used to enable generic processing of an entire class hierarchy of objects through a base class pointer. For example, in a shape hierarchy, all shapes can be drawn. If all shapes are derived from a base class Shape which contains a virtual draw function, then generic processing of the hierarchy can be performed by calling every shape's draw generically through a base class Shape pointer (see the sketch below).

**Given that constructors cannot be virtual, describe a scheme for how you might achieve a similar effect.** Create a virtual function called initialize that the constructor invokes.
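As a rough illustration of the Shape/draw discussion above (the class and function names are only illustrative), the following C++ sketch shows generic processing of a hierarchy through a base-class pointer:

```cpp
#include <iostream>
#include <memory>
#include <vector>

class Shape {
public:
    virtual ~Shape() = default;      // virtual destructor for safe deletion via base pointer
    virtual void draw() const = 0;   // pure virtual: Shape is an abstract class
};

class Circle : public Shape {
public:
    void draw() const override { std::cout << "Circle::draw\n"; }
};

class Square : public Shape {
public:
    void draw() const override { std::cout << "Square::draw\n"; }
};

int main() {
    // Generic processing: each call is resolved at run time (dynamic binding)
    std::vector<std::unique_ptr<Shape>> shapes;
    shapes.push_back(std::make_unique<Circle>());
    shapes.push_back(std::make_unique<Square>());
    for (const auto& s : shapes)
        s->draw();                   // prints Circle::draw, then Square::draw
}
```

Each s->draw() call is dispatched through the class's vtable, which is exactly the dynamic-binding mechanism discussed in the answers that follow.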
**How is it that polymorphism enables you to program "in the general" rather than "in the specific"? Discuss the key advantages of programming "in the general".** Polymorphism enables the programmer to concentrate on the processing of common operations that are applied to all data types in the system without going into the individual details of each data type. The general processing capabilities are separated from the internal details of each type.

**Discuss the problems of programming with switch logic. Explain why polymorphism is an effective alternative to using switch logic.** The main problem with programming using the switch structure is extensibility and maintainability of the program. A program containing many switch structures is difficult to modify. Many, but not necessarily all, switch structures will need to add or remove cases for a specified type. Note: switch logic includes if/else structures, which are more flexible than the switch structure.

**Distinguish between static binding and dynamic binding. Explain the use of virtual functions and the vtable in dynamic binding.** Static binding is performed at compile time when a function is called via a specific object or via a pointer to an object. Dynamic binding is performed at run time when a virtual function is called via a base class pointer to a derived class object (the object can be of any derived class). The virtual function table (vtable) is used at run time to enable the proper function to be called for the object to which the base class pointer "points". Each class containing virtual functions has its own vtable that specifies where the virtual functions for that class are located. Every object of a class with virtual functions contains a hidden pointer to the class's vtable. When a virtual function is called via a base class pointer, the hidden pointer is dereferenced to locate the vtable, and then the vtable is searched for the proper function call.

**Distinguish between inheriting interface and inheriting implementation. How do inheritance hierarchies designed for inheriting interface differ from those designed for inheriting implementation?** When a class inherits implementation, it inherits previously defined functionality from another class. When a class inherits interface, it inherits the definition of what the interface to the new class type should be. The implementation is then provided by the programmer defining the new class type. Inheritance hierarchies designed for inheriting implementation are used to reduce the amount of new code that is being written; such hierarchies are used to facilitate software reusability. Inheritance hierarchies designed for inheriting interface are used to write programs that perform generic processing of many class types; such hierarchies are commonly used to facilitate software extensibility (i.e., new types can be added to the hierarchy without changing the generic processing capabilities of the program).

**Distinguish between virtual functions and pure virtual functions.** A virtual function must have a definition in the class in which it is declared. A pure virtual function does not provide a definition. Classes derived directly from the abstract class must provide definitions for the inherited pure virtual functions in order to avoid becoming abstract classes themselves (see the sketch below).
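To make that last distinction concrete, here is a small C++ sketch (illustrative names only): log() is an ordinary virtual function with a default definition that derived classes may simply inherit, while area() is pure virtual and must be overridden before the class can be instantiated.

```cpp
#include <iostream>

class Figure {
public:
    virtual ~Figure() = default;
    // ordinary virtual function: has a definition; derived classes may inherit or override it
    virtual void log() const { std::cout << "some figure\n"; }
    // pure virtual function: no definition here, which makes Figure an abstract class
    virtual double area() const = 0;
};

class Rect : public Figure {
public:
    Rect(double w, double h) : w_(w), h_(h) {}
    double area() const override { return w_ * h_; }   // must be provided, or Rect stays abstract
private:
    double w_, h_;
};

int main() {
    // Figure f;              // error: cannot instantiate an abstract class
    Rect r(2.0, 3.0);
    r.log();                  // inherited default implementation
    std::cout << r.area() << "\n";   // 6
}
```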
{"Source-Url": "http://s9f06be5051946fb3.jimcontent.com/download/version/1323971593/module/3693240352/name/Object%20Oriented%20Programming.pdf", "len_cl100k_base": 4268, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 17497, "total-output-tokens": 4786, "length": "2e12", "weborganizer": {"__label__adult": 0.000431060791015625, "__label__art_design": 0.00028252601623535156, "__label__crime_law": 0.00035834312438964844, "__label__education_jobs": 0.0018472671508789065, "__label__entertainment": 4.3451786041259766e-05, "__label__fashion_beauty": 0.00018107891082763672, "__label__finance_business": 0.0001672506332397461, "__label__food_dining": 0.0004193782806396485, "__label__games": 0.000576019287109375, "__label__hardware": 0.0005640983581542969, "__label__health": 0.00035762786865234375, "__label__history": 0.00017702579498291016, "__label__home_hobbies": 0.00011414289474487303, "__label__industrial": 0.0003159046173095703, "__label__literature": 0.00022172927856445312, "__label__politics": 0.0002484321594238281, "__label__religion": 0.0005412101745605469, "__label__science_tech": 0.0015239715576171875, "__label__social_life": 0.00010567903518676758, "__label__software": 0.00290679931640625, "__label__software_dev": 0.9873046875, "__label__sports_fitness": 0.0003485679626464844, "__label__transportation": 0.00054168701171875, "__label__travel": 0.0002351999282836914}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 21138, 0.00917]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 21138, 0.96862]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 21138, 0.92746]], "google_gemma-3-12b-it_contains_pii": [[0, 2905, false], [2905, 5053, null], [5053, 7236, null], [7236, 8552, null], [8552, 11064, null], [11064, 13652, null], [13652, 15494, null], [15494, 17808, null], [17808, 21138, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2905, true], [2905, 5053, null], [5053, 7236, null], [7236, 8552, null], [8552, 11064, null], [11064, 13652, null], [13652, 15494, null], [15494, 17808, null], [17808, 21138, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 21138, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 21138, null]], "pdf_page_numbers": [[0, 2905, 1], [2905, 5053, 2], [5053, 7236, 3], [7236, 8552, 4], [8552, 11064, 5], [11064, 13652, 6], [13652, 15494, 7], [15494, 17808, 8], [17808, 21138, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 21138, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
030ecae76a40620afb1889916db4c29504f6f1ea
Diagnostics Using PLC Communications

Abstract

Many applications use ORMEC PLC communications support, such as DATA HIGHWAY and MODBUS, for transferring information between a GEN-III controller and a host controller or PLC. This Tech Note shows a way to set up the error handling and diagnostics in such applications.

Description

The example program uses MODBUS to connect the GEN-III to another computer. The GEN-III is the MODBUS slave and the computer is the MODBUS master. The example program focuses on the following areas of the application:

- Configuring MODBUS
- Fault handling
- Making diagnostic information available to the computer
- Clearing faults and restarting

The techniques shown can also be used on applications that use DATA HIGHWAY.

Modbus Configuration

The INIT.MODBUS routine configures MODBUS to use the MotionNET port as slave station number 13. This configuration can be adjusted to suit your application. The routine also goes on to MAP 22 elements of an integer array called DIAG to registers 1 through 22. These registers allow the MODBUS master to access the GEN-III diagnostic information.

E-STOP State

Once the initialization is complete, the program enters a routine called ESTOP.STATE. In this routine the program looks at the FAULT@ variable. If FAULT@ is false, it checks the MotionBASIC® version number and uses one of two procedures to set FAULT@ true in order to make sure the NO-FAULT relay is open. If you know you will be using MotionBASIC® version 2.1a or higher, you can simply set FAULT@=TRUE. Otherwise you must follow the more complex procedure contained in the FAKE.A.FAULT routine. Once in the ESTOP.STATE routine, the program loops until it sees the ESTOP.OK@ input make a transition from false to true. When this happens, the program exits the loop, clears any faults and goes to the RESTART routine, which is where your application program restarts after a fault.

Fault Response

When a fault or program error occurs, the program goes to ERROR.HDLR. Here the program calls GET.FAULTS to put the diagnostic data into the MODBUS diagnostic registers. It then executes a routine called ESTOP which stops the axes and performs any other actions required when the system faults. It then goes into ESTOP.STATE to await the ESTOP.OK@ transition that clears faults and restarts your application.

GET.FAULTS

This routine transfers MotionBASIC's diagnostic data to the MODBUS diagnostic registers. Because MODBUS only supports integer registers, some of the data must be converted into integer form so the master can access it. Two types of conversion are used:

- Numbers that can be larger than an integer, such as the line number where the fault occurred, are converted to low-word and high-word registers using modulo 10,000. Your master program can reconstitute the number by multiplying the high word by 10,000 and adding the low word.
- Set variables are converted to a binary integer. For example, the set value {1,6,8} is converted to 2^(1-1) + 2^(6-1) + 2^(8-1) = 161. Each bit in the resulting integer corresponds to the appropriate element in the set. This conversion is handled by a routine called SET.TO.INT.

PRINT.FAULTS

This subroutine is not actually used in the program. It is provided so you can convert the diagnostic register data back to its original form and display it on the MotionPRO™ screen.

Host Computer Diagnostic Display

The purpose of making the diagnostic data available to the MODBUS master is so the host computer can display GEN-III fault information to the operator.
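On the host side the decoding is simple. The sketch below is only an illustration (Python, with made-up register values; it assumes the master has already read holding registers 1 through 22 into a list using whatever MODBUS library it prefers) of how the high/low words and bit images described above can be reconstituted:

```python
def decode_diag(regs):
    """Decode the 22 GEN-III diagnostic registers (regs[0] is register 1)."""
    err_code   = regs[0]                       # MotionBASIC error code
    err_line   = regs[2] * 10000 + regs[1]     # high word * 10000 + low word
    faults     = {bit + 1 for bit in range(16) if regs[3] & (1 << bit)}  # FAULT@ set
    first_axis = {bit + 1 for bit in range(16) if regs[4] & (1 << bit)}  # AXIS.FLT1@ set
    return err_code, err_line, faults, first_axis

# Example with made-up values: error 1911 at line 12345, FAULT@ = {1,6,8}, AXIS.FLT1@ = {2}
regs = [1911, 2345, 1, 161, 2] + [0] * 17
print(decode_diag(regs))   # e.g. (1911, 12345, {1, 6, 8}, {2})
```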
There is quite a lot of information in the diagnostic registers and it is important to display it in a way that does not confuse the operator. The most important information is the MotionBASIC® error code, register 1. This code always tells you the reason the program entered the error handler and is the most useful in figuring out what happened. All other faults and/or axis faults occurred after, or as a consequence of, the original error. If the error code is 1911, the problem was an axis fault. If more than one axis indicates a fault, AXIS.FLT1@, register 5, points to the axis that caused the original problem; the other axes faulted after, or as a result of, the original fault. The ESTOP.STATE routine will open the NO-FAULT relay if the actual fault did not already do so. This process will set the Machine Fault bit {8} in the FAULT@ variable, register 4. Unless your program uses user-defined faults, you may want to ignore bit 8 of register 4 in your display.

Diagnostic Registers

The following table shows the diagnostic register assignments:

<table>
<thead>
<tr><th>Register Number</th><th>MotionBASIC® Variable</th><th>Description</th><th>Data Type</th></tr>
</thead>
<tbody>
<tr><td>1</td><td>ERR</td><td>MotionBASIC® Error Code Number</td><td>Integer number</td></tr>
<tr><td>2</td><td>ERL</td><td>Current line number when error occurred</td><td>Low word, modulo 10000</td></tr>
<tr><td>3</td><td>ERL</td><td></td><td>High word, modulo 10000</td></tr>
<tr><td>4</td><td>FAULT@</td><td>Set of all Controller Faults</td><td>Integer bit image</td></tr>
<tr><td>5</td><td>AXIS.FLT1@</td><td>First axis to fault</td><td>Integer bit image</td></tr>
<tr><td>6</td><td>AXIS.FAULT@</td><td>Set of all faulted axes</td><td>Integer bit image</td></tr>
<tr><td>7</td><td>AFAULT@(1)</td><td>Axis 1 fault code</td><td>Integer number</td></tr>
<tr><td>8</td><td>ALARM@(1)</td><td>Axis 1 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>9</td><td>AFAULT@(2)</td><td>Axis 2 fault code</td><td>Integer number</td></tr>
<tr><td>10</td><td>ALARM@(2)</td><td>Axis 2 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>11</td><td>AFAULT@(3)</td><td>Axis 3 fault code</td><td>Integer number</td></tr>
<tr><td>12</td><td>ALARM@(3)</td><td>Axis 3 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>13</td><td>AFAULT@(4)</td><td>Axis 4 fault code</td><td>Integer number</td></tr>
<tr><td>14</td><td>ALARM@(4)</td><td>Axis 4 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>15</td><td>AFAULT@(6)</td><td>Axis 6 fault code</td><td>Integer number</td></tr>
<tr><td>16</td><td>ALARM@(6)</td><td>Axis 6 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>17</td><td>AFAULT@(7)</td><td>Axis 7 fault code</td><td>Integer number</td></tr>
<tr><td>18</td><td>ALARM@(7)</td><td>Axis 7 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>19</td><td>AFAULT@(8)</td><td>Axis 8 fault code</td><td>Integer number</td></tr>
<tr><td>20</td><td>ALARM@(8)</td><td>Axis 8 Servodrive alarm code</td><td>Integer number</td></tr>
<tr><td>21</td><td>AFAULT@(9)</td><td>Axis 9 fault code</td><td>Integer number</td></tr>
<tr><td>22</td><td>ALARM@(9)</td><td>Axis 9 Servodrive alarm code</td><td>Integer number</td></tr>
</tbody>
</table>

Program Listing

Module: TN012.BAS
Routine name: POWERUP
Abstract: Program entry point
Routines called: *MP.CONFIG, CLEAR.FAULTS, ERROR.HDLR,
ESTOP.STATE, INIT.MODBUS, INIT.VARIABLES
Variables used: None

```
POWERUP:
  MP.CONFIG
  INIT.VARIABLES
  INIT.MODBUS
  CLEAR.FAULTS
  ON ERROR GOTO ERROR.HDLR
  ESTOP.STATE              'put the machine in ESTOP state
END
```

Module: TN012.BAS
Routine name: INIT.VARIABLES
Abstract: Initialize any program variables
Routines called: None
Variables used: DIAG()

```
INIT.VARIABLES:
  ERASE DIAG :DIM DIAG(22)   'dimension the diagnostic register array
RETURN
```

Module: TN012.BAS
Routine name: INIT.MODBUS
Abstract: Initialize MODBUS communications
Routines called: None
Variables used: DIAG(), MAP, MOD.CFG, MOD.INIT, TMP

```
INIT.MODBUS:
  MOD.CFG 0,,0,7             'MotionNet port, 9600 baud, no parity, 7 data bits
  MOD.INIT 13,0,1            'station 13, slave, ASCII mode
  MAP ERASE
  FOR TMP=1 TO 22
    MAP TMP TO DIAG(TMP)     'map registers to the diagnostic array
  NEXT TMP
RETURN
```

Module: TN012.BAS
Routine name: ESTOP.STATE
Abstract: Wait here for transition of ESTOP.OK@ from false to true to clear faults and restart
Routines called: CLEAR.FAULTS, FAKE.A.FAULT, RESTART
Variables used: ESTOP.FLAG, ESTOP.OK@, FAULT@

```
ESTOP.STATE:
  IF FAULT@<>{} THEN
    IF MBVER$>="MB2.1a" THEN FAULT@=TRUE ELSE FAKE.A.FAULT
  ENDIF
  ESTOP.FLAG=NOT ESTOP.OK@
  WHILE NOT (ESTOP.FLAG AND ESTOP.OK@)
    IF NOT ESTOP.OK@ THEN ESTOP.FLAG=TRUE
  WEND
  CLEAR.FAULTS
  STACK CLEAR
  RESTART                    'restart your main program
END
```

Module: TN012.BAS
Routine name: RESTART
Abstract: Application restart location. This is where your program takes over.
Routines called: None
Variables used: None

```
RESTART:
  WHILE TRUE :WEND
END
```

Module: TN012.BAS
Routine name: ERROR.HDLR
Abstract: React to a fault or error
Routines called: ESTOP, ESTOP.STATE, GET.FAULTS
Variables used: MODBUS@

```
ERROR.HDLR:
  IF ERR=1805 THEN MODBUS@=OFF: ON ERROR GOTO 0
  GET.FAULTS
  ESTOP
  RESUME ESTOP.STATE
END
```

Routine name: ESTOP
Abstract: Emergency Stop
Routines called: None
Variables used: AXIS.LIST@, DSP.DONE@(), FAULT@, MODE@()

```
ESTOP:
  HALT AXIS.LIST@
  WAIT UNTIL DSP.DONE@(AXIS.LIST@) OR FAULT@<>{}
  MODE@(AXIS.LIST@)=0
RETURN
```

Routine name: CLEAR.FAULTS
Abstract: Attempt to clear faults
Routines called: GET.FAULTS
Variables used: AFAULT@, DIAG(), FAULT@, OTL.FWD@, OTL.REV@

```
CLEAR.FAULTS:
  OTL.FWD@=0 :OTL.REV@=0 :AFAULT@=0 :FAULT@=0 :WAIT 300
  GET.FAULTS :DIAG(1)=0 :DIAG(2)=0 :DIAG(3)=0
RETURN
```

Routine name: FAKE.A.FAULT
Abstract: Create a fake fault to make sure NO FAULT relay opens up
Routines called: TEMP.ROUTINE
Variables used: None

```
FAKE.A.FAULT:
  ON ERROR GOTO TEMP.ROUTINE   'set up a temporary error handler
  ERROR 1901                   'cause a machine fault
RETURN
```

Routine name: TEMP.ROUTINE
Abstract: Temporary error handler used by FAKE.A.FAULT
Routines called: ERROR.HDLR
Variables used: None

```
TEMP.ROUTINE:
  ON ERROR GOTO ERROR.HDLR     'restore the normal error handler
  RESUME NEXT
```

Routine name: GET.FAULTS
Abstract: Transfer fault variables to registers
Variables used: AFAULT@(), ALARM@(), AXIS.FAULT@, AXIS.FLT1@, AXIS.LIST@, DIAG(), FAULT@, INT.LO, SET~, TMP

```
GET.FAULTS:
  'unit diagnostic codes
  DIAG(1)=ERR                  'program error code
  DIAG(2)=ERL MOD 10000        'low word (modulo 10000) of error line number
  DIAG(3)=FIX(ERL\10000)       'high word (modulo 10000) of error line number
  SET~=FAULT@ :SET.TO.INT :DIAG(4)=INT.LO       'fault@ binary word
  SET~=AXIS.FLT1@ :SET.TO.INT :DIAG(5)=INT.LO   'axis.flt1@ binary word
  SET~=AXIS.FAULT@ :SET.TO.INT :DIAG(6)=INT.LO  'axis.fault@ binary word
  'axis diagnostic codes
  FOR TMP~ WITHIN AXIS.LIST@
    TMP=TMP~
    IF TMP>4 THEN TMP=(TMP*2)+3 ELSE TMP=(TMP*2)+5   'axes 6-9 map to registers 15-21 (see table)
    DIAG(TMP)=AFAULT@(TMP~)    'axis faults
    DIAG(TMP+1)=ALARM@(TMP~)   'servo drive alarms
  NEXT TMP~
RETURN

SET.TO.INT:
  INT.LO=0
  INT.HI=0
  FOR TMP=0 TO 15
    TMP~=(TMP+1)
    TMP1~=(TMP+17)
    IF TMP~*SET~ THEN INT.LO=INT.LO+2^TMP
    IF TMP1~*SET~ THEN INT.HI=INT.HI+2^TMP
  NEXT TMP
RETURN
```

Module: TN012.BAS
Routine name: PRINT.FAULTS
Abstract: Print fault data from registers; this is useful for debugging
Routines called: INT.TO.SET
Variables used: DIAG(), INT.HI, INT.LO, SET~, TMP, TMP1

```
PRINT.FAULTS:
  CLS
  PRINT "Error code"DIAG(1)":"ERR$(DIAG(1))
  PRINT "at line"DIAG(2)+DIAG(3)*10000
  INT.HI=0 :INT.LO=DIAG(4) :INT.TO.SET :PRINT "FAULT@="SET~
  INT.HI=0 :INT.LO=DIAG(5) :INT.TO.SET :PRINT "AXIS.FLT1@="SET~
  INT.HI=0 :INT.LO=DIAG(6) :INT.TO.SET :PRINT "AXIS.FAULT@="SET~
  PRINT "Axis","AFAULT@","ALARM@"
  FOR TMP=1 TO 8
    IF TMP<5 THEN TMP1=TMP ELSE TMP1=TMP+1
    PRINT TMP1,DIAG((TMP*2)+5),DIAG((TMP*2)+6)
  NEXT TMP
RETURN
```

Module: TN012.BAS
Routine name: INT.TO.SET
Abstract: Convert INT.LO and INT.HI to a set variable
Routines called: None
Variables used: INT.HI, INT.LO, SET~, TMP, TMP&

```
INT.TO.SET:
  SET~={}
  FOR TMP=0 TO 15
    TMP&=2^TMP
    IF TMP& AND INT.LO THEN SET~=SET~+(TMP+1)
    IF TMP& AND INT.HI THEN SET~=SET~+(TMP+17)
  NEXT TMP
RETURN
```

Traverse Winder Application

Abstract

Many applications require a set of motions to be executed successively, one immediately following the other. These motions can be either time based or slaved to the motion of another axis. The REPEAT command can be used with both the MOVE FOR and GEAR FOR commands to achieve a continuous sequence of motions. A continuous-motion Traverse Winder application is used to demonstrate the use of the REPEAT GEAR command.

Description

A Traverse Winder system is used to evenly wrap material around a core with a width greater than that of the material being wound (examples: fishing line, spooled wire). A system of this type consists of two axes, the Winder and the Traverse. The Winder axis wraps the material around the core, and the Traverse axis guides the material back and forth along the core's length. In order for the material to be evenly wrapped around the core, a fixed relationship must exist between the motions of the Winder and Traverse axes. It is also important that the Traverse axis change its direction of motion at the end of its travel at points in the Winder's rotation that are offset from each other. For our example, both axes will be servo-controlled motors; however, the Winder axis could be a pacer encoder.

Implementation

Assume that the user units have been configured such that the Winder position units are degrees, and the Traverse position units are inches. The following code is a table of operator-configurable parameters which define the operation of the system:

```
OPERATOR.CONFIG:
  CYCLE.OFFSET   =120     'offset per cycle in master degrees
  TRAVERSE.ACCEL =45      'master degrees for acceleration
  TRAVERSE.DIST! =1.000   'Traverse travel in inches
  WINDER.REVS    =1       'number of Winder revs per Traverse
  CYCLES         =10      'cycle = forward index & reverse index
RETURN
```

Each cycle of the Traverse axis consists of two passes, forward and reverse, between 0" and 1.000". At the end of each Traverse axis cycle the Winder axis is 120 degrees out of phase from the end of the previous cycle. The following code is the calculations required for the Traverse axis motion. NOTE: The position user units are scaled up by 1000 for the Traverse axis, and by 10 for the Winder axis, for better resolution.

```
CALCULATIONS:
  INDEX& = TRAVERSE.DIST! * 1000    'index distance scaled for user units
Traverse Winder Application

Abstract

Many applications require a set of motions to be executed successively, one immediately following the other. These motions can be either time based or slaved to the motion of another axis. The REPEAT command can be used with both the MOVE FOR and GEAR FOR commands to achieve a continuous sequence of motions. A continuous-motion Traverse Winder application is used to demonstrate the use of the REPEAT GEAR command.

Description

A Traverse Winder system is used to evenly wrap material around a core whose width is greater than that of the material being wound (examples: fishing line, spooled wire). A system of this type consists of two axes, the Winder and the Traverse. The Winder axis wraps the material around the core, and the Traverse axis guides the material back and forth along the core's length. In order for the material to be evenly wrapped around the core, a fixed relationship must exist between the motions of the Winder and Traverse axes. It is also important that the Traverse axis change its direction of motion at the end of its travel at points in the Winder's rotation that are offset from each other. For our example both axes will be servo-controlled motors; however, the Winder axis could be a pacer encoder.

Implementation

Assume that the user units have been configured such that the Winder position units are degrees and the Traverse position units are inches. The following code is a table of operator-configurable parameters which define the operation of the system:

```
OPERATOR.CONFIG:
  CYCLE.OFFSET   =120      'offset per cycle in master degrees
  TRAVERSE.ACCEL =45       'master degrees for acceleration
  TRAVERSE.DIST! =1.000    'Traverse travel in inches
  WINDER.REVS    =1        'number of Winder revs per Traverse
  CYCLES         =10       'cycle = forward index & reverse index
RETURN
```

Each cycle of the Traverse axis consists of two passes, forward and reverse, between 0” and 1.000”. At the end of each Traverse axis cycle the Winder axis is 120 degrees out of phase from the end of the previous cycle. The following code performs the calculations required for the Traverse axis motion. NOTE: The position User Units are scaled up by 1000 for the Traverse axis, and by 10 for the Winder axis, for better resolution.

```
CALCULATIONS:
  INDEX& = TRAVERSE.DIST! * 1000                              'index distance scaled for user units
  WINDER.DIST& = (WINDER.REVS * 360 + CYCLE.OFFSET / 2) * 10  'total Winder distance for Traverse motion
  ACCEL.DIST& = TRAVERSE.ACCEL * 10                           'Winder distance for Traverse acceleration
RETURN
```

The following code is the program that is executed to operate the system:

```
MAIN:
  WINDER = 1
  TRAVERSE = 2
  MP.CONFIG                     'configure controller parameters
  OPERATOR.CONFIG               'configure system
  CALCULATIONS                  'calculate motion parameters

  'clear faults and enable the axes
  AXIS.SET@ = AXIS.LIST@
  AFAULT@ = 0 :FAULT@ = 0 :WAIT 300 :MODE@ = 5 :WAIT 500

  AXIS.SET@ = TRAVERSE
  REPEAT GEAR FOR INDEX& IN WINDER.DIST&, ACCEL.DIST&
  REPEAT GEAR FOR -INDEX& IN WINDER.DIST&, ACCEL.DIST&
  MOVE WINDER AT 60 IN 250
```

This part of the program keeps track of where the axis is in its motion so that the queue is interrupted at the desired point in the index sequence.

```
  CYCLE.COUNT = 0               'count the number of cycles completed
  WHILE CYCLE.COUNT < CYCLES
    WAIT UNTIL POS.ACT@ > INDEX& / 2
    WAIT UNTIL POS.ACT@ < INDEX& / 2
    CYCLE.COUNT = CYCLE.COUNT + 1
  WEND
```

In order to interrupt the motion queue, a GEAR or MOVE for 0 distance must be commanded. This will stop the motion sequence after the motion in progress.

```
  GEAR FOR 0 IN 10              'interrupt the DSP motion queue
  WAIT UNTIL DSP.DONE@
  HALT WINDER IN 250
RETURN
END
```

Performance Considerations

The motions in the queue are initiated one immediately after the other, without missing a DSP tick.
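As a cross-check of the CALCULATIONS routine above, the following Python sketch (not part of the tech note) evaluates the same expressions with the operator parameters given, showing the scaled values the controller works with:

```python
# Operator parameters from OPERATOR.CONFIG above.
CYCLE_OFFSET = 120      # offset per cycle, in Winder degrees
TRAVERSE_ACCEL = 45     # Winder degrees used for Traverse acceleration
TRAVERSE_DIST = 1.000   # Traverse travel, in inches
WINDER_REVS = 1         # Winder revolutions per Traverse pass

# Same arithmetic as CALCULATIONS, with the 1000x / 10x user-unit scaling.
index = TRAVERSE_DIST * 1000                                # 1000 Traverse units (1.000")
winder_dist = (WINDER_REVS * 360 + CYCLE_OFFSET / 2) * 10   # 4200 Winder units (420 degrees)
accel_dist = TRAVERSE_ACCEL * 10                            # 450 Winder units (45 degrees)

# Each pass takes 420 Winder degrees, so one full cycle (two passes) advances
# the Winder 840 degrees: two revolutions plus the 120-degree offset described above.
print(index, winder_dist, accel_dist)
```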
{"Source-Url": "https://ormec.com/Portals/ormec/Library/Documents/Controllers/Orion/TechNotes/tn012.pdf", "len_cl100k_base": 4615, "olmocr-version": "0.1.48", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 20114, "total-output-tokens": 4937, "length": "2e12", "weborganizer": {"__label__adult": 0.0006871223449707031, "__label__art_design": 0.00048279762268066406, "__label__crime_law": 0.0005807876586914062, "__label__education_jobs": 0.0007352828979492188, "__label__entertainment": 0.00012552738189697266, "__label__fashion_beauty": 0.000308990478515625, "__label__finance_business": 0.0003724098205566406, "__label__food_dining": 0.0007357597351074219, "__label__games": 0.0015172958374023438, "__label__hardware": 0.06292724609375, "__label__health": 0.0006542205810546875, "__label__history": 0.00026679039001464844, "__label__home_hobbies": 0.0004906654357910156, "__label__industrial": 0.0115509033203125, "__label__literature": 0.0001959800720214844, "__label__politics": 0.0002313852310180664, "__label__religion": 0.000903606414794922, "__label__science_tech": 0.111083984375, "__label__social_life": 6.788969039916992e-05, "__label__software": 0.0479736328125, "__label__software_dev": 0.755859375, "__label__sports_fitness": 0.0007619857788085938, "__label__transportation": 0.0010728836059570312, "__label__travel": 0.000209808349609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 16531, 0.02812]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 16531, 0.3283]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 16531, 0.6833]], "google_gemma-3-12b-it_contains_pii": [[0, 2356, false], [2356, 4544, null], [4544, 7752, null], [7752, 8731, null], [8731, 9775, null], [9775, 10828, null], [10828, 11767, null], [11767, 12733, null], [12733, 15028, null], [15028, 16531, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2356, true], [2356, 4544, null], [4544, 7752, null], [7752, 8731, null], [8731, 9775, null], [9775, 10828, null], [10828, 11767, null], [11767, 12733, null], [12733, 15028, null], [15028, 16531, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 16531, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 16531, null]], "pdf_page_numbers": [[0, 2356, 1], [2356, 4544, 2], [4544, 7752, 3], [7752, 8731, 4], [8731, 9775, 5], [9775, 10828, 6], [10828, 11767, 7], [11767, 12733, 8], [12733, 15028, 9], [15028, 16531, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 16531, 0.08333]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
f0e61df7b8b89dfef1e9a30f0cce349ba2f85cfe
A Framework for Multi-Tier Type Evolution and Data Migration
- Technical Report B-04-01 -

Dirk Draheim, Institute of Computer Science, Freie Universität Berlin, 14195 Berlin, Germany, draheim@acm.org
Matthias Horn, IMIS Project, Condat Informationssysteme AG, 10559 Berlin, Germany, mch@condat.de
Ina Schulz, Institute of Computer Science, Freie Universität Berlin, 14195 Berlin, Germany, ischulz@inf.fu-berlin.de

January 2004

Abstract

This paper describes a framework that supports the simultaneous evolution of object-oriented data models and relational schemas with respect to a tool-supported object-relational mapping. Thereby the proposed framework accounts for non-trivial data migration induced by type evolution from the outset. The support for data migration is offered on the level of transparent data access. The framework consists of the following integrated parts: an automatic model change detection mechanism, a generator for schema evolution code and a generator for data migration APIs. The framework has been conceived in the IMIS project. IMIS is an information system for environmental radioactivity measurements. Though the indicated domain especially demands a solution like the one discussed in the paper, the achievements apply in general to multi-tier system architectures with object-relational mapping.

Contents

1 Introduction
2 The IMIS System
3 The IMIS Development Approach
4 The Model Evolution Problem
5 The Database Reorganization Process
6 The Upgrader Generator
7 Discussion
8 Related Work
9 Conclusion

1 Introduction

Information systems are data-centric applications. Due to changing requirements, both functional and non-functional, data types change during an information system's lifetime. That is, we have to deal with database reorganization [13, 12] and programming-language type evolution, which must be kept in sync. Thereby not only metadata is subject to change, but existing long-lived data, too. Altering schemas is supported by commercial database systems [14]; however, the definition of the necessary data changes can pose problems.

Necessary data changes can vary significantly in complexity. In the simplest case, i.e. if a schema evolution step can be described by a mere embedding of the old schema into the new schema, the change of data only amounts to a restructuring of the data along the known schema mapping. However, often more complex data changes are desired, ranging from vertically or horizontally splitting the data of one table into new tables to the problem of computing new column data from old column data.

The framework described in this paper supports type evolution in a multi-tier system that is based on an object-oriented transparent data access layer. The system development is centered around a tool-supported, model-based object-relational mapping, i.e. it follows a model-driven approach. In our setting, model evolution, type evolution and schema evolution are tightly integrated. The described framework basically consists of a generator for data migration APIs. For each combination of a current model and an intended new model a specialized data migration API is generated. On the one hand the generated data migration API is intended to be as complete as possible with respect to a schema mapping that can be automatically inferred from the two models under consideration; on the other hand it provides as many hooks as needed to fully customize the data migration.
With this approach, guidance for the implementation of the data migration is provided; furthermore, the implementation can be done on the level of transparent database access. In practice - at least if complex data changes are necessary - data migration must often still be done by hand-coded SQL scripts, whereas, in a typical multi-tier setting, the developer must always be aware of all details of the object-relational mapping. This is the case even in the presence of commercial object-relational mapping tools like TopLink.

The described approach has been conceived in the IMIS project. The IMIS Integrated Measurement and Information System (Integriertes Meß- und Informationssystem für Radioaktivität in der Umwelt) is a German federal information system for forecast and decision support that gathers and interprets data on radioactivity in the environment. In the IMIS project the need for model evolution, especially non-trivial model evolution, is particularly high: IMIS stores measurement and sample data - measurement technology and methodology proceed steadily, and the requirements of future queries are hardly predictable. At the same time nearly all the data stored in the IMIS system is long-lived and is therefore affected by model evolution.

The paper proceeds as follows. In section 2 we provide a brief overview of the functionality of the IMIS system. In section 3 we describe the model-driven development approach of the IMIS project. An understanding of the development approach is necessary for the explanation of the schema evolution and data migration framework. Section 4 sets the stage by describing the model evolution problem. The solution to the problem with respect to the special situation with object-relational mapping is described in section 5 in general, followed by a more detailed discussion of the migration API generator in section 6. We defer some of the discussion of the driving forces that led to the several design decisions of the framework to section 7. Then related work is taken into account in section 8.

2 The IMIS System

Following the nuclear accident in Chernobyl, the German federal government established a program targeting radiation protection and precaution in 1986. By the end of 1986 the respective federal law StrVG (Strahlenschutz Vorsorge Gesetz) was adopted. Besides other rules, the StrVG contains guidelines for the installation of an information system for monitoring and prediction of radioactivity in the environment.

2.1 The IMIS Features

The first version of IMIS was developed between 1989 and 1993 and is currently still in use. In this paper we describe the entirely new IMIS system, which has been developed by Condat Informationssysteme AG in Berlin/Germany. The new IMIS has been installed in October 2003 for final continuous test operation. IMIS is installed at several federal and regional institutions at 60 locations and encompasses about 160 client systems. The system operates 24 hours a day on a central database and stores data about radioactivity in air, precipitation, inland waterways, the North and Baltic Seas, food, feed, drinking water, ground water, lake water, sewage, waste, plants and soil - measured manually and automatically by more than 2000 measurement stations. Data is supplied by automated processing, e.g. by import of data exchange files, as well as by manual data input.

IMIS provides a broad range of features:
- manual and automatic data collection
- configurable batch processing, e.g. for data import and data export, document generation, data reception and transmission
- a generic selection component which provides a decoupling of the technical data model from the terminology presented to the expert end user
- data visualization using tables, business diagrams and geographic maps
- manual and automatic document generation
- document storage and retrieval
- integration of the external forecast system PARK (program system for assessment and mitigation of radiological consequences) for data supply, control and result import

From the end user's viewpoint the IMIS system has to be understood as a collection of rather loosely coupled client applications that together provide the aforementioned features.

2.2 The IMIS Data

The IMIS database consists of four schemas, i.e. IMIS, a repository schema, IMISGEO and PARK.

The schema IMIS consists of approximately 150 tables and basically contains the radioactivity data, master data with references to radioactivity data, and data about samples. A sample is a portion of material that has been collected for radiological measurements. A sample is described by various attributes, e.g. the kind of collected material. The location from where a sample has been taken can be specified by coordinates or by an administrative district. Each sample is used for a number of measurements using various methods, e.g. alpha spectrometry. A measurement result consists of a number of readings, e.g. for the nuclides U-234 or U-235.

The repository schema consists of approximately 300 tables and contains configuration and setting data as well as dynamic data not related directly to radioactivity data. The configuration data is used to customize the various functions of IMIS, e.g. the selection and presentation component. For instance, stored messages or journals of automatic processes belong to the dynamic data stored in the repository schema.

The schema IMISGEO contains geographical data, e.g. maps for spatial evaluations. This schema is not covered by the evolution mechanism described in this paper. The schema PARK contains prognosis data computed by the external forecast system PARK. The PARK subsystem is only used in emergency mode. PARK prognosis data has a comparatively short lifetime; therefore data migration is not necessary for this schema. It can be emptied prior to schema evolution.

Figure 1: The IMIS Integrated Measurement and Information System.

2.3 System Architecture and Configuration

The system architecture of the IMIS system is depicted in Figure 1. A central Oracle9i database stores the data for evaluation and further processing. Configuration data for the different functions of IMIS is stored in the same database instance. It is running on a Sun V880 high-availability cluster server consisting of two nodes. For data storage two Sun T3 storage subsystems are used. Server and communication processes are hosted on four Sun Fire 280 application servers. They are redundant and can replace each other in case of failure. All servers are located at the German federal office for radiation protection BfS (Bundesamt für Strahlenschutz) in Munich. PCs are used as client systems. The client software follows a straightforward fat-client approach. While most of the clients are connected via ISDN to the server LAN, the clients located on site in Munich are connected directly via Ethernet.

Most of the new data that is stored in the IMIS database stems from the measurement stations. These provide data by uploading it to an FTP server.
From there the data is written by bulk data transfers, in normal operation mode on a daily basis and on a two-hour basis in emergency operation mode. Further data is stored in the database by the external PARK system through the PARK controller. A small amount of further data is entered manually by the user.

Up to a few exceptions, all the data stored in the IMIS system is long-lived and stays unchanged. There is no heavy transaction load on the IMIS system. That is, the IMIS system has the characteristics of an OLAP system, though currently no typical analytical processing takes place on the data. The client applications enable data browsing; they provide different views on the data. However, in the future new complex queries may become requirements - yet another possible reason for schema evolution, though this time triggered by the need for physical database redesign with a footprint in the maintained model and applications.

IMIS is estimated to store data about approximately one million measurements per year - this is equivalent to several million records. This leads to a forecast of approximately 50 GB of measurement data after 10 years, an easily manageable amount of data at first sight - if certain data transforms become necessary due to changing requirements, e.g. for reasons of analytical processing, the actually needed database size has to be re-estimated.

3 The IMIS Development Approach

All IMIS client and server applications, with the exception of the document management system, which is based on Zope and Python, are developed using an object-oriented approach and the Java programming language. The development follows a model-driven approach, which is depicted in Figure 2. It is based on the usage of Coad's CASE tool Together, a model generator and a database adaptor software component. Together is used to maintain code and models by simultaneous round-trip engineering. Both the database adaptor and the model generator have been developed in the IMIS project. Furthermore, new modules have been developed for the Together tool, so that additional model information can be added to the UML models by annotations which are stored inside the Java source files. Similarly, the mapping from object-oriented model elements to the relational models can be specified by annotations of the UML model within the tool.

The model generator [2] makes the model information available to the database adaptor as serialized Java objects in the model.dat-file. The database adaptor realizes a transparent database access layer. It is a generic component that inspects the provided model.dat-file. It exploits the information to generate the SQL statements that are needed by the supported object-oriented access methods. The database adaptor provides advanced features. For example, an object access prediction is implemented that is exploited to prefetch objects from the database in order to mitigate performance drawbacks that are due to the navigational access patterns brought into play by transparent object-oriented access. The access prediction works cost-based, exploiting access statistics. As another example, the database adaptor is accompanied by an API for the formulation of arbitrary queries against the database.

Prior to the final installation in October 2003 the database schema descriptions (ddl.txt-files) were also generated by the schema generator, as indicated by the shaded box in Figure 2.
The new evolution mechanism makes these descriptions obsolete, since it applies model changes incrementally - please take a first look at Figure 5.

4 The Model Evolution Problem

The evolution of an object model results in changes of the database schema and the stored data. To employ a new software system version with an evolved object model, existing data needs to be transformed from the old database schema to a new one, called database reorganization in the scope of this paper. Therefore we have two tasks after changing an object model: the schema migration and the data migration.

We delve into an example that is particularly simple and does not stem from the IMIS application domain. Figure 3 shows the model evolution of a Company class with an address attribute and some further attributes. The modified model has a new Address class with a new street attribute, city attribute and zip attribute. The address attribute is removed from the Company class. Furthermore, there exists an association between the Company class and the Address class. This way the schema migration is uniquely defined. However, the data migration is more complicated and depends on the semantics of the changes. In the current example new objects of Address type have to be created and linked to the correct Company objects, while their attributes have to be computed properly from the old address attributes.

In general, data migration needs to be defined with the semantic knowledge of the developer. Nonetheless a lot of data migration can be provided by default, i.e. can be generated. In our simple example the framework can assume that the remaining attributes of the Company class, i.e. the non-address attributes, are intended to have the same semantics in the new model as in the old model. Based on this assumption the data migration is conceptually just a copy for these attributes. Of course, an elaborate approach has to provide a means to override the default behavior of such simple data migration parts, too.

This paper describes our solution for these problems. Our solution is based upon two parts: first we describe the actual process of database reorganization, split into its main steps, and second we introduce an application, the upgrader generator, which automatically compares two object models and generates the needed parts, i.e. SQL scripts and Java code, for the database reorganization.
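To make the Address example concrete, the following is a minimal sketch of what a customized migration hook could look like. It is written in Python for brevity and every name in it (UpgradeCompany, new_session.create, parse_address, etc.) is hypothetical; the actual framework generates Java classes against its own transparent data access API.

```python
class UpgradeCompany:
    """Hypothetical stand-in for a generated upgrade class for Company."""

    def upgrade_address(self, old_company, new_company, new_session):
        """Developer-supplied hook: split the old address string into an Address object."""
        street, city, zip_code = parse_address(old_company.address)
        # Create the new Address object in the new schema and link it to the Company.
        address = new_session.create("Address", street=street, city=city, zip=zip_code)
        new_company.address = address


def parse_address(text):
    """Simplistic split, only for illustration: 'street, zip city'."""
    street, rest = [part.strip() for part in text.split(",", 1)]
    zip_code, city = rest.split(" ", 1)
    return street, city, zip_code
```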
5 The Database Reorganization Process

The way database reorganization is done depends on many circumstances: on which database is used, on how many applications and how many versions of the applications are supposed to use the data at the same time, and so on. The IMIS applications are always supposed to run exclusively on the database. There is only one version at a time running. To deploy an update, including database reorganization, IMIS is shut down for a while and the installation process has exclusive access to the database. Our solution focuses on an efficient and stable process. The process is separated into the following steps:

- database cloning
- schema migration: modification of the cloned database schema
- data migration: transformation and update of the objects

We decided to clone the database, because a lot of the data can always be supposed to be unchanged and cloning is the most efficient way to bring the data into the new schema.

The second step modifies the structure of the cloned database. This is done via generated SQL scripts. Before modification all existing constraints are disabled. This way the dependencies do not need to be analyzed and the modification steps can be performed in arbitrary order. The generated scripts drop, create and modify tables and columns until the schema fits the requirements of the new model. Similarly, constraints are subject to modification, too: obsolete constraints are deleted, new constraints are created but not yet enabled. Some data in the database clone is deleted during the modification of its schema - in our preceding example the address attribute values.

The transformation and update of the changed objects is the third step. As mentioned earlier, this data migration needs to be performed with the developer's knowledge about semantics. The relational representation of the objects is transparent to the developer. Therefore knowledge of the model change semantics is provided on the object-oriented level in our approach. A generated Java program, called upgrader, performs the data migration. The upgrader program is presented to the developer as an API providing hooks for customizations. After the transformation of the cloned database with the upgrader program, the existing constraints are enabled.

Figure 4 illustrates technical details of the data migration process. We use the old database as the object source. One database adaptor works on the old model and provides access to the old data objects. The custom Java data migration code is written with respect to the old database adaptor. The upgrader transforms the objects and sends them via RMI to an inserter process running in a separate virtual machine, which inserts the new objects into the new, modified database. By using two virtual machines, two different name spaces are enforced, so that possible subtle name conflicts between the old and new application are prevented from the outset.

6 The Upgrader Generator

The data reorganization process needs the SQL scripts for schema migration and the upgrader Java program for data migration. Both components have to be created each time a new model version appears. Their structure and behavior depend on the kind of model changes. We developed a tool that compares the two models in question, analyzes the differences and generates the SQL scripts and the upgrader. This tool is called upgrader generator.

As explained in section 3, the object model is represented by a model.dat-file of serialized Java objects. The used meta model is separated into two parts: an object-oriented part that models packages and classes with attributes, associations, inheritance etc., and a relational part that allows for the specification of table and column names, attribute constraints like maximal string lengths and number sizes, primary key specifications etc. The upgrader generator compares the two object models and finds structural differences such as new or removed classes, attributes and associations, new or removed sub- and superclasses, and changes in the relational part of the model like changed table and column names, changed constraints etc. The developer can provide auxiliary information in special property files. For example, it is possible to rename a class, to rename an attribute, to move a class in the class hierarchy or to move an attribute from one class to another. As a result the structure of the new model is uniquely defined and the necessary SQL scripts can be generated automatically.
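At its core, the model comparison is a structural diff over named model elements. The following Python sketch (our own illustration, not the IMIS code, which works on serialized Java model objects) shows the name-based matching idea for classes and attributes:

```python
def diff_models(old, new):
    """Compare two models given as {class_name: set(attribute_names)} dicts.

    Matching is purely name-based, as described above; renames would have to be
    declared separately (the framework uses property files for that).
    """
    added_classes = new.keys() - old.keys()
    removed_classes = old.keys() - new.keys()
    changed = {}
    for cls in old.keys() & new.keys():
        added_attrs = new[cls] - old[cls]
        removed_attrs = old[cls] - new[cls]
        if added_attrs or removed_attrs:
            changed[cls] = (added_attrs, removed_attrs)
    return added_classes, removed_classes, changed


# The Company/Address example from section 4:
old_model = {"Company": {"name", "address"}}
new_model = {"Company": {"name"}, "Address": {"street", "city", "zip"}}
print(diff_models(old_model, new_model))
# e.g. ({'Address'}, set(), {'Company': (set(), {'address'})})
```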
Because the data migration in general depends on semantics provided by the developer, the generator "guesses" a solution and generates Java code for the upgrader. The developer completes this implementation.

The functionality of the upgrader generator is sketched in Figure 5. The generator creates a special abstract upgrade class for each changed class in the model. This class serves as the basis for the transformation of its corresponding objects. There might be changes in the model for which the generator cannot guess a solution, so that an implementation by the developer must be enforced. For example, if the developer has decreased the maximal string length of an attribute, she needs to implement the effect on too-long values. In such a case the generator creates abstract methods that enforce an implementation provided by the developer. In other cases the guess made might not be correct. Please recall the example from the beginning - the split of the address attribute into several attributes of a new class. The generator only finds a new Address class and generates a new upgrade class which does not create any new objects by default. The developer needs to overwrite the generated methods to create the correctly filled objects and associate them with the corresponding company objects.

Actually, the upgrader generator distinguishes between two kinds of classes in the new model, i.e. classes that stem from the old model and entirely new classes. The detection of a class that stems from the old model is a good example of the simple way the upgrader constructs the schema mapping between the old and the new model: it is just based on name equality, unless there is an explicit specification in the respective property file that redefines the origin of the class or redefines the class as entirely new. For an entirely new class the upgrader generator generates a hook for factoring objects. Then for each attribute of a given class it generates a hook that is called for every object of that class.

The described architecture allows for most complex reorganizations of the database. Any information of the old database may be accessed and auxiliary information sources may be included as well, like libraries, property files etc. The complete schema evolution framework consists of approximately 10 kLOC of documented Java code.
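As an illustration of the enforced hooks, a generated upgrade class for the shortened-string example might look roughly like the following sketch. Again, this is Python pseudocode with invented class and attribute names; the real generator emits abstract Java classes.

```python
from abc import ABC, abstractmethod


class UpgradeDocumentBase(ABC):
    """Sketch of a generated base class for a class whose 'title' attribute was shortened.

    The generator can copy unchanged values by default, but it cannot guess how
    to shorten existing values, so it emits an abstract method that the
    developer must implement before the upgrader compiles.
    """

    MAX_TITLE_LENGTH = 80  # new, smaller maximum length (hypothetical value)

    def upgrade_title(self, old_value):
        if len(old_value) <= self.MAX_TITLE_LENGTH:
            return old_value                 # default guess: a plain copy is safe
        return self.shorten_title(old_value)  # otherwise the developer's policy decides

    @abstractmethod
    def shorten_title(self, old_value):
        """Developer-provided policy for values that no longer fit."""


class UpgradeDocument(UpgradeDocumentBase):
    def shorten_title(self, old_value):
        # One possible policy: truncate and mark the value as shortened.
        return old_value[: self.MAX_TITLE_LENGTH - 3] + "..."
```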
7 Discussion

The first step in the database reorganization process is the schema migration, and there are several ways to do it. One way would be the creation of a new and empty schema. This could easily be done by generating DDL statements from the object model, and our model generator already implements this functionality. But creating an empty schema implies that a lot of unchanged data has to be moved from the old to the new schema. A more efficient way is to keep the data in the original schema. That means the schema has to be modified step by step, by dropping, adding and modifying tables or columns etc., until the structure fits the new model requirements. Only tables related to model changes would be touched. But modifying the existing data has its pitfalls. Some object transformation processes may need information of other objects; however, these objects may be subject to change, too. With a copy it is not necessary to take dependencies into account, because all information is still accessible in the old schema. The most efficient and easiest way is to duplicate the database - tests have shown that it is at least twenty times faster than an SQL-based solution.

Modifying the schema can be done by SQL statements. But modifying also means the loss of some data. We try to keep as much data in the schema as possible. For example, if only a single attribute of a class has been changed, we modify only the corresponding column and only the values of one attribute have to be updated. In other cases, for example when splitting a class into two, the old objects have to be removed and two new, empty tables have to be created for the two new classes.

Changing the model leads not only to data reorganization, but also to application migration, and this is also the task of the developer. The IMIS client applications are implemented in Java and the data transformation is done on the object level by a Java program which the developer has to complete by implementing transformation rules for the objects.

8 Related Work

Improving schema evolution and data migration with respect to an object-relational mapping has subtle issues, because object-relational mapping is a practical challenge on its own. An early rigorous analysis of model evolution with respect to an underlying relationship between semantic data models and relational schemas can be found in [7]. Discussing schema evolution and data migration is particularly challenging with respect to an object-oriented type system because of the comparatively rich type construction facilities of a typical object-oriented type system.

The object-oriented database ORION [3, 4] takes into account the physical level in the discussion of its data migration solution. ORION offers a solution with dynamic schema evolution. The administrator can trigger online database schema changes, i.e. without the need to restart the database. Thereby ORION follows an adaptional approach: the model under consideration is converted and the database and applications are tailored with respect to the new model. The TSE [10] approach supports schema versioning by means of views. In this approach there exists a base model that is always only augmented. Object deletions are emulated by views. The O2 [1] approach is a combined adaptional and schema versioning approach that targets the goal of minimizing the need for application reconstruction.

The OTGen [6] generator produces a data migration program from a declarative description of a schema mapping, which is provided by the database administrator. The system Tess [5] picks up and further improves the contributions of the OTGen system. Tess takes a description of an old schema and a new schema and produces a data transform program. A schema mapping is automatically constructed for this purpose. An overview of automatic schema matching is provided by [11]. We want to mention Clio [9, 8] as an exemplary system. The Clio system consists of a correspondence engine for schema matching and a mapping generator for producing view definitions that mediate between source and target schemas.

9 Conclusion

- This paper tackles the multi-tiered problem of combined type evolution and data migration for software systems with an object-oriented application server tier and a relational database tier.
- The proposed schema evolution and data migration framework is tightly integrated into a model-based, tool-supported development approach.
- The approach provides a model comparison mechanism that automatically reconstructs a schema mapping between a current model and an intended new model.
- Customizable upgrade programs for data migration are generated. Thereby the information of the model comparison phase is exploited in order to detect necessary customizations.
- Default data migration is provided in the approach that is maximal with respect to the model comparison results. The chosen implementation based on cloning is particularly simple and efficient.
- Data migration customizations are implemented on the level of transparent data access.

References
{"Source-Url": "http://edocs.fu-berlin.de/docs/servlets/MCRFileNodeServlet/FUDOCS_derivate_000000000407/2004_01.pdf", "len_cl100k_base": 5417, "olmocr-version": "0.1.49", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 26519, "total-output-tokens": 6777, "length": "2e12", "weborganizer": {"__label__adult": 0.000301361083984375, "__label__art_design": 0.0003590583801269531, "__label__crime_law": 0.0003867149353027344, "__label__education_jobs": 0.0007915496826171875, "__label__entertainment": 6.222724914550781e-05, "__label__fashion_beauty": 0.00015974044799804688, "__label__finance_business": 0.00033092498779296875, "__label__food_dining": 0.0003139972686767578, "__label__games": 0.00035762786865234375, "__label__hardware": 0.0009160041809082032, "__label__health": 0.0005807876586914062, "__label__history": 0.0003113746643066406, "__label__home_hobbies": 9.083747863769533e-05, "__label__industrial": 0.0005674362182617188, "__label__literature": 0.00025272369384765625, "__label__politics": 0.0002663135528564453, "__label__religion": 0.000377655029296875, "__label__science_tech": 0.0682373046875, "__label__social_life": 8.660554885864258e-05, "__label__software": 0.01470184326171875, "__label__software_dev": 0.90966796875, "__label__sports_fitness": 0.0002090930938720703, "__label__transportation": 0.00046181678771972656, "__label__travel": 0.00019800662994384768}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 30818, 0.02639]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 30818, 0.39246]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 30818, 0.89926]], "google_gemma-3-12b-it_contains_pii": [[0, 1332, false], [1332, 1562, null], [1562, 5984, null], [5984, 9351, null], [9351, 9417, null], [9417, 13761, null], [13761, 15638, null], [15638, 17160, null], [17160, 19795, null], [19795, 21557, null], [21557, 25675, null], [25675, 29261, null], [29261, 30818, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1332, true], [1332, 1562, null], [1562, 5984, null], [5984, 9351, null], [9351, 9417, null], [9417, 13761, null], [13761, 15638, null], [15638, 17160, null], [17160, 19795, null], [19795, 21557, null], [21557, 25675, null], [25675, 29261, null], [29261, 30818, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 30818, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 30818, null]], "pdf_page_numbers": [[0, 1332, 1], [1332, 1562, 2], [1562, 5984, 3], [5984, 9351, 4], [9351, 9417, 5], [9417, 13761, 6], [13761, 15638, 7], [15638, 17160, 8], [17160, 19795, 9], [19795, 21557, 10], [21557, 25675, 11], [25675, 29261, 12], [29261, 30818, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 30818, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
9e7fedbbefa8790308df795ecf4b202412a9e7ec
[REMOVED]
{"Source-Url": "https://research-repository.st-andrews.ac.uk/bitstream/handle/10023/24216/Akgun_2021_cpaior2021_finding_subgraphs_AAM.pdf;jsessionid=82FFA740052DB03BA15E1EC3DC17FC48?sequence=1", "len_cl100k_base": 8163, "olmocr-version": "0.1.53", "pdf-total-pages": 18, "total-fallback-pages": 0, "total-input-tokens": 43349, "total-output-tokens": 13289, "length": "2e12", "weborganizer": {"__label__adult": 0.00044083595275878906, "__label__art_design": 0.000499725341796875, "__label__crime_law": 0.0004665851593017578, "__label__education_jobs": 0.001491546630859375, "__label__entertainment": 0.00011372566223144533, "__label__fashion_beauty": 0.0002574920654296875, "__label__finance_business": 0.00043320655822753906, "__label__food_dining": 0.00043129920959472656, "__label__games": 0.0007958412170410156, "__label__hardware": 0.0011205673217773438, "__label__health": 0.000957012176513672, "__label__history": 0.000499725341796875, "__label__home_hobbies": 0.0001533031463623047, "__label__industrial": 0.0006575584411621094, "__label__literature": 0.0004127025604248047, "__label__politics": 0.0004067420959472656, "__label__religion": 0.0006704330444335938, "__label__science_tech": 0.128173828125, "__label__social_life": 0.00014340877532958984, "__label__software": 0.0082244873046875, "__label__software_dev": 0.85205078125, "__label__sports_fitness": 0.0004315376281738281, "__label__transportation": 0.0008969306945800781, "__label__travel": 0.0002651214599609375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 45920, 0.03831]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 45920, 0.36184]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 45920, 0.85962]], "google_gemma-3-12b-it_contains_pii": [[0, 2790, false], [2790, 4708, null], [4708, 6439, null], [6439, 8992, null], [8992, 10820, null], [10820, 14101, null], [14101, 17130, null], [17130, 17694, null], [17694, 20990, null], [20990, 24129, null], [24129, 26927, null], [26927, 29093, null], [29093, 31882, null], [31882, 34198, null], [34198, 37429, null], [37429, 40798, null], [40798, 44388, null], [44388, 45920, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2790, true], [2790, 4708, null], [4708, 6439, null], [6439, 8992, null], [8992, 10820, null], [10820, 14101, null], [14101, 17130, null], [17130, 17694, null], [17694, 20990, null], [20990, 24129, null], [24129, 26927, null], [26927, 29093, null], [29093, 31882, null], [31882, 34198, null], [34198, 37429, null], [37429, 40798, null], [40798, 44388, null], [44388, 45920, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 45920, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 45920, 
null]], "pdf_page_numbers": [[0, 2790, 1], [2790, 4708, 2], [4708, 6439, 3], [6439, 8992, 4], [8992, 10820, 5], [10820, 14101, 6], [14101, 17130, 7], [17130, 17694, 8], [17694, 20990, 9], [20990, 24129, 10], [24129, 26927, 11], [26927, 29093, 12], [29093, 31882, 13], [31882, 34198, 14], [34198, 37429, 15], [37429, 40798, 16], [40798, 44388, 17], [44388, 45920, 18]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 45920, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
91e55af696c2f0891867430cb2f60092eb3f84f0
Contents

1 Quick Start
  1.1 Usage
  1.2 Error Codes
  1.3 Release Notes
  1.4 Older Versions
  1.5 License
2 Credits

pydocstyle is a static analysis tool for checking compliance with Python docstring conventions.

pydocstyle supports most of PEP 257 out of the box, but it should not be considered a reference implementation.

pydocstyle supports Python 3.5, 3.6, 3.7 and 3.8.

Quick Start

1. Install

   `pip install pydocstyle`

2. Run

   ```
   $ pydocstyle test.py
   test.py:18 in private nested class `meta`:
           D101: Docstring missing
   test.py:27 in public function `get_user`:
           D300: Use """triple double quotes""" (found '-quotes)
   test.py:75 in public function `init_database`:
           D201: No blank lines allowed before function docstring (found 1)
   ...
   ```

3. Fix your code :)

Contents:

1.1 Usage

1.1.1 Installation

Use pip or easy_install:

`pip install pydocstyle`

Alternatively, you can use the `pydocstyle.py` source file directly - it is self-contained.

1.1.2 Command Line Interface

Usage

```
Usage: pydocstyle [options] [<file|dir>...]

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -e, --explain         show explanation of each error
  -s, --source          show source for each error
  -d, --debug           print debug information
  -v, --verbose         print status information
  --count               print total number of errors to stdout
  --config=<path>       use given config file and disable config discovery
  --match=<pattern>     check only files that exactly match <pattern> regular
                        expression; default is --match='(?!test_).*\.py' which
                        matches files that don't start with 'test_' but end
                        with '.py'
  --match-dir=<pattern> search only dirs that exactly match <pattern> regular
                        expression; default is --match-dir='[^.].*', which
                        matches all dirs that don't start with a dot
  --ignore-decorators=<decorators>
                        ignore any functions or methods that are decorated by
                        a function with a name fitting the <decorators>
                        regular expression; default is --ignore-decorators=''
                        which does not ignore any decorated functions.
```

Note: When using --match, --match-dir or --ignore-decorators, consider whether you should use a single quote (') or a double quote ("), depending on your OS, shell, etc.

Error Check Options:

Only one of --select, --ignore or --convention can be specified. If none is specified, defaults to `--convention=pep257`. These three options select the "basic list" of error codes to check. If you wish to change that list (for example, if you selected a known convention but wish to ignore a specific error from it or add a new one) you can use `--add-[ignore/select]` in order to do so.

```
  --select=<codes>      choose the basic list of checked errors by specifying
                        which errors to check for (with a list of comma-separated
                        error codes or prefixes). for example: --select=D101,D2
  --ignore=<codes>      choose the basic list of checked errors by specifying
                        which errors to ignore out of all of the available error
                        codes (with a list of comma-separated error codes or
                        prefixes). for example: --ignore=D101,D2
  --convention=<name>   choose the basic list of checked errors by specifying an
                        existing convention. Possible conventions: pep257, numpy,
                        google.
  --add-select=<codes>  add extra error codes to check to the basic list of
                        errors previously set by --select, --ignore or
                        --convention.
  --add-ignore=<codes>  ignore extra error codes by removing them from the
                        basic list of errors previously set by --select,
                        --ignore or --convention.
```

Note: When using any of the --select, --ignore, --add-select, or --add-ignore command line flags, it is possible to pass a prefix for an error code. It will be expanded so that any code beginning with that prefix will match. For example, running the command `pydocstyle --ignore=D4` will ignore all docstring content issues, as their error codes begin with "D4" (i.e. D400, D401, D402, D403, and D404).

Return Code

<table>
<thead>
<tr><th>Code</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>Success - no violations</td></tr>
<tr><td>1</td><td>Some code violations were found</td></tr>
<tr><td>2</td><td>Illegal usage - see error message</td></tr>
</tbody>
</table>

Configuration Files

`pydocstyle` supports ini-like configuration files. In order for `pydocstyle` to use one, it must be named one of the following options, and have a [pydocstyle] section:

- setup.cfg
- tox.ini
- .pydocstyle
- .pydocstyle.ini
- .pydocstylerc
- .pydocstylerc.ini

When searching for a configuration file, `pydocstyle` looks for one of the files specified above in that exact order. If a configuration file is not found, it keeps looking for one up the directory tree until one is found, or uses the default configuration.

Note: For backwards compatibility purposes, `pydocstyle` supports configuration files named .pep257, as well as the section header [pep257]. However, these are considered deprecated and support will be removed in the next major version.

Available Options

Not all configuration options are available in the configuration files. Available options are:

- convention
- select
- ignore
- add_select
- add_ignore
- match
- match_dir
- ignore_decorators

See the Usage section for more information.

Inheritance

By default, when finding a configuration file, pydocstyle tries to inherit the parent directory's configuration and merge it with the local one. The merge process is as follows:

- If one of select, ignore or convention was specified in the child configuration - ignores the parent configuration and sets the new error codes to check. Otherwise, it simply copies the parent's checked error codes.
- If add-ignore or add-select were specified, adds or removes the specified error codes from the checked error codes list.
- If match or match-dir were specified - use them. Otherwise, use the parent's.

In order to disable this (useful for configuration files located in your repo's root), simply add inherit=false to your configuration file.

Note: If any of select, ignore or convention were specified in the CLI, the configuration files will take no part in choosing which error codes will be checked. match and match-dir will still take effect.

Example

```
[pydocstyle]
inherit = false
ignore = D100,D203,D405
match = .*\.py
```

In-file configuration

pydocstyle supports inline commenting to skip specific checks on specific functions or methods. The supported comments that can be added are:

1. "# noqa" skips all checks.
2. "# noqa: D102,D203" can be used to skip specific checks. Note that this is compatible with skips from flake8, e.g. "# noqa: D102,E501,D203".

For example, this will skip the check for a period at the end of a function docstring:

```python
>>> def bad_function():  # noqa: D400
...     """Omit a period in the docstring as an exception"""
...     pass
```
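As a slightly larger illustrative sketch (the class and method names are our own, not from the documentation), the same inline comments can be attached to individual methods to silence specific codes while the rest of the file is still checked:

```python
class Account:
    """Illustrative class; only selected checks are skipped below."""

    # D107 ("Missing docstring in __init__") is skipped for this method only.
    def __init__(self, owner):  # noqa: D107
        self.owner = owner

    # All docstring checks are skipped for this magic method.
    def __repr__(self):  # noqa
        return f"Account({self.owner!r})"
```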
1.1.3 Usage with the pre-commit git hooks framework

`pydocstyle` can be included as a hook for `pre-commit`. The easiest way to get started is to add this configuration to your `.pre-commit-config.yaml`:

```yaml
- repo: https://github.com/pycqa/pydocstyle
  rev: 5.0.3rc  # pick a git hash / tag to point to
  hooks:
  - id: pydocstyle
    options: --ignore=D100,D203,D405  # or multiline --select= D101, D2
```

See the `pre-commit docs` for how to customize this configuration.

Checked-in python files will be passed as positional arguments, so there is no need to use `--match=*.py`. You can also use command line arguments instead of configuration files to achieve the same effect with fewer files.

```yaml
- id: pydocstyle
  args: --ignore=D100,D203,D405  # or multiline --select= D101, D2
```

1.2 Error Codes

1.2.1 Grouping

**Missing Docstrings**

<table>
<thead>
<tr><th>Code</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>D100</td><td>Missing docstring in public module</td></tr>
<tr><td>D101</td><td>Missing docstring in public class</td></tr>
<tr><td>D102</td><td>Missing docstring in public method</td></tr>
<tr><td>D103</td><td>Missing docstring in public function</td></tr>
<tr><td>D104</td><td>Missing docstring in public package</td></tr>
<tr><td>D105</td><td>Missing docstring in magic method</td></tr>
<tr><td>D106</td><td>Missing docstring in public nested class</td></tr>
<tr><td>D107</td><td>Missing docstring in <code>__init__</code></td></tr>
</tbody>
</table>

**Whitespace Issues**

<table>
<thead>
<tr><th>Code</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>D200</td><td>One-line docstring should fit on one line with quotes</td></tr>
<tr><td>D201</td><td>No blank lines allowed before function docstring</td></tr>
<tr><td>D202</td><td>No blank lines allowed after function docstring</td></tr>
<tr><td>D203</td><td>1 blank line required before class docstring</td></tr>
<tr><td>D204</td><td>1 blank line required after class docstring</td></tr>
<tr><td>D205</td><td>1 blank line required between summary line and description</td></tr>
<tr><td>D206</td><td>Docstring should be indented with spaces, not tabs</td></tr>
<tr><td>D207</td><td>Docstring is under-indented</td></tr>
<tr><td>D208</td><td>Docstring is over-indented</td></tr>
<tr><td>D209</td><td>Multi-line docstring closing quotes should be on a separate line</td></tr>
<tr><td>D210</td><td>No whitespaces allowed surrounding docstring text</td></tr>
<tr><td>D211</td><td>No blank lines allowed before class docstring</td></tr>
<tr><td>D212</td><td>Multi-line docstring summary should start at the first line</td></tr>
<tr><td>D213</td><td>Multi-line docstring summary should start at the second line</td></tr>
<tr><td>D214</td><td>Section is over-indented</td></tr>
<tr><td>D215</td><td>Section underline is over-indented</td></tr>
</tbody>
</table>

**Quotes Issues**

<table>
<thead>
<tr><th>Code</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>D300</td><td>Use &quot;&quot;&quot;triple double quotes&quot;&quot;&quot;</td></tr>
<tr><td>D301</td><td>Use r&quot;&quot;&quot; if any backslashes in a docstring</td></tr>
<tr><td>D302</td><td>Use u&quot;&quot;&quot; for Unicode docstrings</td></tr>
</tbody>
</table>

**Docstring Content Issues**

<table>
<thead>
<tr><th>Code</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>D400</td><td>First line should end with a period</td></tr>
<tr><td>D401</td><td>First line
should be in imperative mood</td></tr>
<tr><td>D402</td><td>First line should not be the function’s &quot;signature&quot;</td></tr>
<tr><td>D403</td><td>First word of the first line should be properly capitalized</td></tr>
<tr><td>D404</td><td>First word of the docstring should not be <em>This</em></td></tr>
<tr><td>D405</td><td>Section name should be properly capitalized</td></tr>
<tr><td>D406</td><td>Section name should end with a newline</td></tr>
<tr><td>D407</td><td>Missing dashed underline after section</td></tr>
<tr><td>D408</td><td>Section underline should be in the line following the section’s name</td></tr>
<tr><td>D409</td><td>Section underline should match the length of its name</td></tr>
<tr><td>D410</td><td>Missing blank line after section</td></tr>
<tr><td>D411</td><td>Missing blank line before section</td></tr>
<tr><td>D412</td><td>No blank lines allowed between a section header and its content</td></tr>
<tr><td>D413</td><td>Missing blank line after last section</td></tr>
<tr><td>D414</td><td>Section has no content</td></tr>
<tr><td>D415</td><td>First line should end with a period, question mark, or exclamation point</td></tr>
<tr><td>D416</td><td>Section name should end with a colon</td></tr>
<tr><td>D417</td><td>Missing argument descriptions in the docstring</td></tr>
</tbody>
</table>

1.2.2 Default conventions

Not all error codes are checked for by default. There are three conventions that may be used by pydocstyle: pep257, numpy and google.

The pep257 convention (specified in PEP 257), which is enabled by default in pydocstyle, checks for all of the above errors except for D203, D212, D213, D214, D215, D404, D405, D406, D407, D408, D409, D410, D411, D412, D413, D415, D416 and D417.

The numpy convention, added in v2.0.0, supports the numpydoc docstring standard. This checks all of the errors except for D203, D204, D212, D213, D402, D413, D415, D416, and D417.

The google convention, added in v4.0.0, supports the Google Python Style Guide. This checks for all the errors except D203, D204, D213, D215, D400, D401, D404, D406, D407, D408, D409 and D413.

These conventions may be specified using --convention=<name> when running pydocstyle from the command line or by specifying the convention in a configuration file. See the Usage section for more details.

Note: It makes no sense to check the same docstring for both numpy and google conventions. Therefore, if we successfully detect that a docstring is in the numpy style, we don't check it for google. The reason numpy style takes precedence over google is that the heuristics for detecting it are better, and we don't want to force users to provide external hints to pydocstyle in order to let it know which style docstrings are written in.

1.2.3 Publicity

The D1xx group of errors deals with missing docstrings in public constructs: modules, classes, methods, etc. It is important to note how publicity is determined and what its effects are.

How publicity is determined

Publicity for all constructs is determined as follows: a construct is considered public if

1. Its immediate parent is public, and
2. Its name does not contain a single leading underscore.

A construct's immediate parent is the construct that contains it. For example, a method's parent is a class object. A class' parent is usually a module, but might also be a function, method, etc. A module can either have no parent, or it can have a parent that is a package. In order for a construct to be considered public, its immediate parent must also be public. Since this definition is recursive, it means that all of its parents need to be public. The corollary is that if a construct is considered private, then all of its descendants are also considered private. For example, a class called _Foo is considered private. A method bar in _Foo is also considered private since its parent is a private class, even though its name does not begin with a single underscore.

Modules are parsed to look if __all__ is defined. If so, only those top level constructs are considered public. The parser looks for __all__ defined as a literal list or tuple. As the parser doesn't execute the module, any mutation of __all__ will not be considered.
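A small illustrative example of these rules (module contents invented), combining the determination rules above with the reporting effects described next:

```python
"""Example module; because ``__all__`` is defined, only ``connect`` is public."""

__all__ = ("connect",)  # parsed statically; later mutations would not be seen


def connect():
    """Open a connection."""  # public: listed in __all__, no leading underscore


def helper():
    pass  # not in __all__, so considered private -> no D103 for the missing docstring


class _Cache:
    """Private class: the leading underscore makes it and its descendants private."""

    def get(self, key):
        pass  # private because its parent class is private -> no D102 reported
```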
How publicity affects error reports

The immediate effect of a construct being determined as private is that no D1xx errors will be reported for it (or its children, as the previous section explains). A private method, for instance, will not generate a D102 error, even if it has no docstring.

However, it is important to note that while docstrings are optional for private constructs, they are still required to adhere to your style guide. So if a private module _foo.py does not have a docstring, it will not generate a D100 error, but if it does have a docstring, that docstring might generate other errors.

1.3 Release Notes

pydocstyle version numbers follow the Semantic Versioning specification.

1.3.1 Current Development Version

New Features

- Skip function arguments prefixed with _ in D417 check (#440).

Bug Fixes

- Update convention support documentation (#386, #393).

1.3.2 5.0.2 - January 8th, 2020

Bug Fixes

- Fix DeprecationWarning / SyntaxError "invalid escape sequence" with Python 3.6+ (#445).

1.3.3 5.0.1 - December 9th, 2019

Bug Fixes

- Fixed an issue where AttributeError was raised when parsing the parameter section of a class docstring (#434, #436).

1.3.4 5.0.0 - December 9th, 2019

Major Updates

- Support for Python 3.4 has been dropped (#402).

New Features

- Extend support for detecting missing arguments in Google style docstrings to method calls (#384).
- Extend support for detecting missing argument description in Numpy style docstrings (#407).
- Added support for Python 3.8 (#423).
- Allow skipping errors on module level docstring via #noqa (#427).
- Whitespace is ignored with set options split across multiple lines (#221).

Bug Fixes

- Remove D413 from the google convention (#430).
- Remove D413 from the pep257 convention (#404).
- Replace semicolon with colon in D416 messages (#409).
- D301 (Use r""" if any backslashes in a docstring) no longer triggers on backslashes for line continuation or unicode literals \u... and \N... These are considered intended elements of the docstring and thus should not be escaped by using a raw docstring (#365).
- Fix decorator parsing (#411).
- Google-style sections no longer cause false errors when used with Numpy-style sections (#388, #424).
- D202: Allow a blank line after function docstring when followed by declaration of an inner function or class (#395, #426).
- Fix D401 and D404 checks not working for docstrings containing only one word and ending with a non-alpha character (#421).

1.3.5 4.0.1 - August 14th, 2019

Bug Fixes

- D401: Fixed a false positive where one stem had multiple imperative forms, e.g., init and initialize / initiate (#382).
- Fix parser hanging when there's a comment directly after __all__ (#391, #366).
• Fixed RST error in table which resulted in the online documentation missing the violation code table (#396). • Fixed IndentationError when parsing function arguments (#392). 1.3.6 4.0.0 - July 6th, 2019 Major Updates • Support for Python 2.x and PyPy has been dropped (#340). • Added initial support for Google convention (#357). New Features • Added pre-commit hook (#346) Bug Fixes • Fix parsing tuple syntax __all__ (#355, #352). 1.3.7 3.0.0 - October 14th, 2018 Major Updates • Support for Python 3.3 has been dropped (#315, #316). • Added support for Python 3.7 (#324). New features • Violations are now reported on the line where the docstring starts, not the line of the def/class it corresponds to (#238, #83). • Updated description of pep257 and numpy conventions (#300). • __all__ parsing is now done on a best-effort basis - if __all__ can’t be statically determined, it will be ignored (#320, #313). Bug Fixes • Fixed a false-positive recognition of section names causing D405 to be reported (#311, #317). • Fixed a bug where functions that don’t end with a newline will sometimes raise an exception (#321, #336). 1.3.8 2.1.1 - October 9th, 2017 Bug Fixes • Changed wheel configuration to be NOT universal, as #281 added configparser as a dependency for Python 2.7. • Updated usage documentation. 1.3.9 2.1.0 - October 8th, 2017 New Features • Public nested classes missing a docstring are now reported as D106 instead of D101 (#198, #261). • __init__ methods missing a docstring are now reported as D107 instead of D102 (#273, #277). • Added support for Python 3.6 (#270). • Specifying an invalid error code prefix (e.g., --select=D9) will print a warning message to stderr (#253, #279). • Configuration files now support multiple-lined entries (#250, #281). • Improved description of how error selection works in the help section (#231, #283). Bug Fixes • Fixed an issue where the --source flag would result in improperly spaced output (#256, #257, #260). • Fixed an issue where if a first word in a docstring had Unicode characters and the docstring was not a unicode string, an exception would be raised (#258, #264). • Configuration files that were specified by CLI and don’t contain a valid section name will now issue a warning to stderr (#276, #280). • Removed D107 from the numpy convention (#288). 1.3.10 2.0.0 - April 18th, 2017 Major Updates • Support for numpy conventions verification has been added (#129, #226). • Support for Python 2.6 has been dropped (#206, #217). • Support for PyPy3 has been temporarily dropped, until it will be equivalent to CPython 3.3+ and supported by pip (#223). • Support for the pep257 console script has been dropped. Only the pydocstyle console script should be used (#216, #218). • Errors are now printed to stdout instead of stderr (#201, #210). New Features • Decorator-based skipping via --ignore-decorators has been added (#204). • Support for using pycodestyle style wildcards has been added (#72, #209). • Superfluous opening quotes are now reported as part of D300 (#166, #225). • Fixed a false-positive recognition of D410 and added D412 (#230, #233). • Added --config=<path> flag to override the normal config file discovery and choose a specific config file (#117, #247). • Support for specifying error codes with partial prefix has been added, e.g., --select=D101,D2 (#72, #209). • All configuration file can now have the .ini extension (#237). • Added better imperative mood checks using third party stemmer (#235, #68). 
Bug Fixes • Made parser more robust to bad source files (#168, #214) • Modules are now considered private if their name starts with a single underscore. This is a bugfix where “public module” (D100) was reported regardless of module name (#199, #222). • Removed error when __all__ is a list (#62, #227). • Fixed a bug where the @ sign was used as a matrix multiplication operator in Python 3.5, but was considered a decorator by the parser (#246, #191). 1.3.11 1.1.1 - October 4th, 2016 Bug Fixes • Fixed an issue where the flake8-docstrings failed when accessing some public API from pydocstyle. 1.3.12 1.1.0 - September 29th, 2016 Major Updates • pydocstyle is no longer a single file. This might make it difficult for some users to just add it to their project, but the project has reached certain complexity where splitting it into modules was necessary (#200). New Features • Added the optional error codes D212 and D213, for checking whether the summary of a multi-line docstring starts at the first line, respectively at the second line (#174). • Added D404 - First word of the docstring should not be “This”. It is turned off by default (#183). • Added the ability to ignore specific function and method docstrings with inline comments: 1. “# noqa” skips all checks. 2. “# noqa: D102,D203” can be used to skip specific checks. Bug Fixes • Fixed an issue where file paths were printed in lower case (#179, #181). • The error code D300 is now also being reported if a docstring has uppercase literals (R or U) as prefix (#176). • Fixed a bug where an __all__ error was reported when __all__ was imported from another module with a different name (#182, #187). • Fixed a bug where raise X from Y syntax caused pydocstyle to crash (#196, #200). 1.3.13 1.0.0 - January 30th, 2016 Major Updates • The project was renamed to pydocstyle and the new release will be 1.0.0! New Features • Added support for Python 3.5 (#145). • Classes nested inside classes are no longer considered private. Nested classes are considered public if their names are not prepended with an underscore and if their parent class is public, recursively (#13, #146). • Added the D403 error code - “First word of the first line should be properly capitalized”. This new error is turned on by default (#164, #165, #170). • Added support for .pydocstyrlerc and as configuration file name (#140, #173). Bug Fixes • Fixed an issue where a NameError was raised when parsing complex definitions of __all__ (#142, #143). • Fixed a bug where D202 was falsely reported when a function with just a docstring and no content was followed by a comment (#165). • Fixed wrong __all__ definition in main module (#150, #156). • Fixed a bug where an `AssertionError` could occur when parsing `__future__` imports (#154). 1.4 Older Versions Note: Versions documented below are before renaming the project from `pep257` to `pydocstyle`. 1.4.1 0.7.0 - October 9th, 2015 New Features • Added the D104 error code - “Missing docstring in public package”. This new error is turned on by default. Missing docstring in `__init__.py` files which previously resulted in D100 errors (“Missing docstring in public module”) will now result in D104 (#105, #127). • Added the D105 error code - “Missing docstring in magic method”. This new error is turned on by default. Missing docstrings in magic method which previously resulted in D102 error (“Missing docstring in public method”) will now result in D105. 
Note that exceptions to this rule are variadic magic methods - specifically `__init__`, `__call__` and `__new__`, which will be considered non-magic and missing docstrings in them will result in D102 (#60, #139). • Support the option to exclude all error codes. Running `pep257` with `--select=` (or `select=` in the configuration file) will exclude all errors which could then be added one by one using `add-select`. Useful for projects new to `pep257` (#132, #135). • Added check D211: No blank lines allowed before class docstring. This change is a result of a change to the official PEP257 convention. Therefore, D211 will now be checked by default instead of D203, which required a single blank line before a class docstring (#137). • Configuration files are now handled correctly. The closer a configuration file is to a checked file the more it matters. Configuration files no longer support `explain`, `source`, `debug`, `verbose` or `count` (#133). Bug Fixes • On Python 2.x, D302 (“Use u”“ for Unicode docstrings”) is not reported if `unicode_literals` is imported from `__future__` (#113, #134). • Fixed a bug where there was no executable for `pep257` on Windows (#73, #136). 1.4.2 0.6.0 - July 20th, 2015 New Features • Added support for more flexible error selections using `--ignore`, `--select`, `--convention`, `--add-ignore` and `--add-select` (#96, #123). Bug Fixes • Property setter and deleter methods are now treated as private and do not require docstrings separate from the main property method (#69, #107). • Fixed an issue where `pep257` did not accept docstrings that are both unicode and raw in Python 2.x (#116, #119). • Fixed an issue where Python 3.x files with Unicode encodings were not read correctly (#118). 1.4.3 0.5.0 - March 14th, 2015 New Features - Added check D210: No whitespaces allowed surrounding docstring text (#95). - Added real documentation rendering using Sphinx (#100, #101). Bug Fixes - Removed log level configuration from module level (#98). - D205 used to check that there was a blank line between the one line summary and the description. It now checks that there is exactly one blank line between them (#79). - Fixed a bug where --match-dir was not properly respected (#108, #109). 1.4.4 0.4.1 - January 10th, 2015 Bug Fixes - Getting ImportError when trying to run pep257 as the installed script (#92, #93). 1.4.5 0.4.0 - January 4th, 2015 **Warning:** A fatal bug was discovered in this version (#92). Please use a newer version. New Features - Added configuration file support (#58, #87). - Added a --count flag that prints the number of violations found (#86, #89). - Added support for Python 3.4, PyPy and PyPy3 (#81). Bug Fixes - Fixed broken tests (#74). - Fixed parsing various colon and parenthesis combinations in definitions (#82). - Allow for greater flexibility in parsing __all__ (#67). - Fixed handling of one-liner definitions (#77). 1.4.6 0.3.2 - March 11th, 2014 First documented release! 
1.5 License

Copyright (c) 2012 GreenSteam, <http://greensteam.dk/> Copyright (c) 2014-2017 Amir Rachum, <http://amir.rachum.com/> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

CHAPTER 2 Credits

pydocstyle is a rename and continuation of pep257, a project created by Vladimir Keleshev. Maintained by Amir Rachum.
{"Source-Url": "http://www.pydocstyle.org/_/downloads/en/latest/pdf/", "len_cl100k_base": 7458, "olmocr-version": "0.1.53", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 38773, "total-output-tokens": 8609, "length": "2e12", "weborganizer": {"__label__adult": 0.0002503395080566406, "__label__art_design": 0.00027441978454589844, "__label__crime_law": 0.00018537044525146484, "__label__education_jobs": 0.0005331039428710938, "__label__entertainment": 5.8650970458984375e-05, "__label__fashion_beauty": 8.863210678100586e-05, "__label__finance_business": 0.00010263919830322266, "__label__food_dining": 0.00021648406982421875, "__label__games": 0.0005412101745605469, "__label__hardware": 0.0002834796905517578, "__label__health": 0.00012010335922241212, "__label__history": 0.00011682510375976562, "__label__home_hobbies": 6.568431854248047e-05, "__label__industrial": 0.00013458728790283203, "__label__literature": 0.00018513202667236328, "__label__politics": 0.0001283884048461914, "__label__religion": 0.00024271011352539065, "__label__science_tech": 0.0009021759033203124, "__label__social_life": 8.064508438110352e-05, "__label__software": 0.012420654296875, "__label__software_dev": 0.982421875, "__label__sports_fitness": 0.0001780986785888672, "__label__transportation": 0.0001145005226135254, "__label__travel": 0.0001308917999267578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28652, 0.05689]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28652, 0.1268]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28652, 0.80174]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 0, null], [0, 419, false], [419, 419, null], [419, 679, null], [679, 679, null], [679, 1310, null], [1310, 3674, null], [3674, 5251, null], [5251, 6936, null], [6936, 9619, null], [9619, 13160, null], [13160, 15513, null], [15513, 17372, null], [17372, 18933, null], [18933, 21339, null], [21339, 23587, null], [23587, 26116, null], [26116, 27486, null], [27486, 28515, null], [28515, 28652, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 0, null], [0, 419, true], [419, 419, null], [419, 679, null], [679, 679, null], [679, 1310, null], [1310, 3674, null], [3674, 5251, null], [5251, 6936, null], [6936, 9619, null], [9619, 13160, null], [13160, 15513, null], [15513, 17372, null], [17372, 18933, null], [18933, 21339, null], [21339, 23587, null], [23587, 26116, null], [26116, 27486, null], [27486, 28515, null], [28515, 28652, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28652, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28652, null]], "pdf_page_numbers": [[0, 0, 1], [0, 0, 2], 
[0, 419, 3], [419, 419, 4], [419, 679, 5], [679, 679, 6], [679, 1310, 7], [1310, 3674, 8], [3674, 5251, 9], [5251, 6936, 10], [6936, 9619, 11], [9619, 13160, 12], [13160, 15513, 13], [15513, 17372, 14], [17372, 18933, 15], [18933, 21339, 16], [21339, 23587, 17], [23587, 26116, 18], [26116, 27486, 19], [27486, 28515, 20], [28515, 28652, 21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28652, 0.15038]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
a99ff2199844654e8e7f8a0ff475f52dc37900a6
Finding Computer Science Syllabi on the World Wide Web Matthew Phillips

1. Abstract

Syllabi contain information useful to students, faculty, and many other people, and given the ubiquity of the WWW many schools are now putting their syllabi online for these people and the general public to access. Even though these syllabi may be available, they might be hard to find, so faculty, students, and anyone else with an interest in viewing them might find it useful to be able to browse a collection of reliable syllabi. To build a collection of reliable syllabi it is necessary to find those syllabi on the WWW; this is made easy with a tool like the Google Web API. Once the syllabi are found on the web it is necessary to examine them for desired characteristics to be sure they are indeed desirable syllabi. The syllabi that contain the desired characteristics are kept and the rest are discarded. This elimination process can be accomplished using a tool like a classification tree, more specifically, tools like the Orange Data Mining Library and C4.5. This paper describes the process of finding syllabi on the WWW using the Google Web API, retrieving those syllabi using Python, and filtering them using the Orange Data Mining Library and C4.5 so that a reliable set of syllabi can be constructed.

2. Introduction

As the WWW (the web) expands into almost every part of our society we see many changes coming about. One of these changes is the transition from materials being printed and distributed physically to materials being published online. One such thing that falls into this category is a syllabus. It makes sense for schools to publish syllabi online instead of publishing them on paper due to cost, time, and other factors. Given that many of these syllabi are now available online and generally open for public viewing, it makes sense to gather these syllabi and construct a browsable collection, as a wealth of information exists in them. An easy way to gather these syllabi is through an existing web search engine. Given that such a large number of schools are publishing a large number of their syllabi on the web, a powerful way of using a search engine is needed. One of the easiest to use and most powerful search engines available today is Google. Google provides an API called the Google Web API that allows for easy gathering of large numbers of web pages using a programming language like Python. When a programming language like Python is used, a direct path to Google’s vast index of web pages is available. A language like Python can be used to gather syllabi via the Google Web API and examine those syllabi for preferred characteristics. These characteristics can be stored in the form of a table and then compared to the characteristics of a preferred set of syllabi via a classification tree. Once a potential syllabus has been compared to a desired syllabus it is easy to either keep or discard it: if a syllabus is determined desirable it can be added to the collection of desirable syllabi, and if it is found to be undesirable it can be discarded. In this fashion it is easy to build a set of good, or desirable, syllabi.

3. Literature Review

Syllabi

Given the ubiquity and the ease of use of the web, many educational institutions now have a number of their syllabi online. This is especially true for computer science syllabi given that most computer scientists are web enthusiasts.
Having syllabi available online makes sense for schools as it saves the school resources (e.g. paper, toner, etc) and time. The cost to host a web page containing a syllabi is very small compared to the cost of printing out many syllabi for many students. In addition to saving the school money, often times it is also more convenient for students to reference a syllabus online compared to a paper copy because, in general, the location of the online syllabus does not change. It also makes sense for schools to publish their syllabi on the web as it provides an easy way to have a readily available archive. That is, once a school starts a new semester, a new syllabus with a new name associated with the semester and school year is put up on the web-server and the old syllabus remains there too. This is not a problem since each syllabus will have a distinct name. In general, schools have no need to keep their syllabi private and therefore they share them with anyone who has access to the web. MIT is a prime example of this with their Open Courseware Project (OCW) [1]. MIT has provided a great deal of information about their courses including their syllabi through their OCW program, but that discussion is beyond the scope of this paper. Given that a large number of schools have substantial internet presence and that within each of these schools many courses are offered, many syllabi are found on the web. At the time of this writing Google reports that it has more than 1,690,000 results relating to online syllabi [2]. These online syllabi are certainly useful to their target audience, the students and faculty at that school, but they are also useful to many other people. These other people include potential students trying to learn material taught in a specific course without taking that course, a faculty member that is slated to teach a class and needs to reference existing syllabi, etc. A syllabus from a computer science related class is likely to contain many things that would be useful to a person seeking information relating to the syllabus topic. One of the most important things a syllabus can contain is a list of topics, a course outline, that is to be covered in the class. This is generally a fairly detailed breakdown of the topics that a class will cover in a semester. The reader of the syllabus can easily take these topics as a good starting point for fleshing out a study on their own. That is, a person wanting to learn more about the area of computer science covered in the syllabus could view the main points covered in a course and then do further research on each one of those points. Without going into detail, I am sure you can see how a list of topics might be beneficial to a faculty member who is trying to put together a syllabus of his/her own. Another important thing contained in many computer science syllabi is a reference list, or a bibliography. A bibliography in a syllabus usually contains a listing of related papers that have been published. These papers are often supplemental material to a textbook used in a class, but if they are complete enough, can be used as the only reading material for a class. You can see how having access to existing reading lists would be helpful to a faculty member that is in the process of putting together a new syllabus. Referencing existing bibliographies can obviously save the faculty member time, but it can also strengthen the field of computer science by keeping topics covered in a class somewhat standard when moving from school to school. 
Many other things contained in syllabi on the web can be useful to people looking for information relating to the syllabi. These things include texts being used for the course, projects/assignments given in the course, homework being assigned in the course, grading schemes being used in the course, reference material other than that contained in the bibliography needed for the course, schedules being used to cover the material, and many other things. Obviously the content of a syllabus is important to a user trying to find information relating to the topic discussed in the syllabus, but someone like a faculty member who is trying to construct a syllabus might be looking for other things in a syllabus, such as structure. The structure of a syllabus is sometimes almost as important as its content [3]. Given the importance of syllabi and their abundance on the web, we need a way to locate them. This is where something like the Google search engine [4] becomes important.

Google

Google is a powerful search engine that currently indexes more than 4,285,199,774 web pages [5]. Within Google’s stored index there are at least 1,690,000 web pages relating to syllabi [2]. Not all of these web pages are good, useful syllabi, but they are somehow related to syllabi. To deal with the large volume of web pages that are indexed by Google, Google Research has provided developers and researchers with the Google Web API [6]. The Google Web API is a set of tools that allows developers and researchers direct access to Google’s index and provides them with a more powerful interface than the one found through a web browser. Instead of interacting with Google through a web browser, communication is accomplished through the Simple Object Access Protocol (SOAP). SOAP is a high-level protocol that transforms information into the eXtensible Markup Language (XML) and uses HTTP to communicate the XML via the internet. SOAP simply uses standard HTTP methods such as GET, POST, HEAD, etc. to communicate the XML between web-servers [7]. Given that the only thing needed to communicate using the Google Web API is SOAP, which only relies on common internet technologies, the Google Web API is accessible from almost any web-server. The common setup for using the Google Web API is to have a script (usually written in something like Perl or Python) running on a local server that accesses SOAP, which uses XML to pass data via HTTP to the Google Web API server. Visually, the process looks something like Figure 1. SOAP, XML, and HTTP are all standard installations on a web-server. The only other thing needed to gather information from the Google index is the Google Web API. The Google Web API is freely available for download by going to [6]. The Google Web API can be used with many different languages including Perl, Python, Java, C++, C#, etc. The most popular languages used with the Google Web API are the scripting languages such as Perl and Python, but just about anything can be used [8].

Python

Python is a versatile and powerful scripting language that includes many built-in tools. It is an excellent choice for working with the Google Web API because, in general, most of the information being manipulated is plain text. Python has great support for text parsing with its string package, which provides full support for text parsing and string manipulation, including support for regular expressions [9].
Python is also an excellent choice for working with the Google Web API because a Python Google toolkit has been designed specifically to allow easy interaction between Python and the Google Web API. This toolkit is called PyGoogle and is the work of Mark Pilgrim [10]. Mark Pilgrim has designed PyGoogle as a Python wrapper module that takes away much of the underlying SOAP and XML interaction. Since PyGoogle takes away the messy SOAP and XML interactions, it leaves the developer free to work on higher-level problems such as performing useful tasks with the information gathered from the Google Web API. PyGoogle is a set of scripts that are simply downloaded to the local server. There is no installation process associated with PyGoogle; you simply have Python import PyGoogle as a package and you are free to use it. To make sense of the large number of pages available through the Google Web API, a Python package like the Orange Data Mining Library needs to be used [11]. The Orange Data Mining Library is a machine learning Python package that includes support for J. R. Quinlan’s C4.5 [12].

**The Orange Data Mining Library and C4.5**

There are many functions in the Orange Data Mining Library that are useful for machine learning and artificial intelligence, but for sorting through information retrieved via the Google Web API it is most straightforward to use the built-in support for C4.5. C4.5 is the work of J. R. Quinlan and has grown to be the de facto standard for constructing decision trees in the machine learning world [13]. C4.5 is a program for inducing classification rules in the form of decision trees from a set of given examples. Specifically, what happens is that you give C4.5 a table of good, or controlled, data; it trains itself as to what to look for by generating classification rules, and then uses those rules to sort the test data that is entered. By sort, I mean separating the data that you want to keep from the data you want to discard. It is easiest to get a feel for how C4.5 works by taking a look at an example. This classic example is taken directly from [13]. Suppose we want to find out if a specific golfer will play given the data in Table 1. We will build a classifier which, based on the features found in Table 1 (OUTLOOK, TEMPERATURE, HUMIDITY and WINDY), will predict whether or not the golfer will play. There are 2 classes: (play) and (don’t play). There are 14 examples. There are 5 examples whose result is "don’t play" and 9 examples whose result is "will play." From our data in Table 1, C4.5 will create the decision tree found in Figure 2. You can see how the golfer data shown in Table 1 can just as well represent most any other type of data that has discernible attributes. In keeping with the information in this paper, you can see that we can use Python to gather syllabi from the web by using the Google Web API, take those syllabi and generate information about their characteristics, put those characteristics in the form of a table, and then input that table into C4.5 together with our training table to produce a decision tree so that we can keep the syllabi we want and discard the ones we do not want.

4. Experimental Report

Finding Syllabi

To build a high quality collection of computer science syllabi I started by writing a Python script with Python version 2.2 to gather syllabi. This Python script, getsyllabi.py, called PyGoogle version 0.5.3 and passed it the number of web pages it wanted, 10, and the index to start at.
The Google Web API only allows retrieval of 10 web pages at a time, so a loop is needed to bump the index by 10 each time through the loop. For example, getsyllabi.py calls PyGoogle the first time and says get me 10 web pages starting at index 0. So the first time through the loop, web pages 0 through 9 are gathered, the second time through the loop, web pages 10 through 19 are gathered, and so on. The Google Web API limited me to 1000 queries every 24 hours so I had to be careful not to accidentally set my loop to an outrageous number. I found that in general I started to receive a lot of noise, or non-syllabi, after about 1000 web pages. I programmed getsyllabi.py to only search educational domain names, .edu, to improve my likelihood of only gathering syllabi. So, when I searched for syllabi relating to an Operating Systems course, my Google Web API search string looked something like “operating systems site:.edu”. I had getsyllabi.py search for each syllabus topic and create a text file containing all the matching URLs it returned. To clarify, for each course I was interested in, I had getsyllabi.py find about 1000 web pages, create a text file for that set of about 1000 web pages, and then put the URLs associated with those web pages in the text file. I searched for a total of 10 syllabus topics:
- algorithms
- compiler design
- data structures
- formal languages
- network architecture
- numerical methods
- operating systems
- programming languages
- software engineering
- unix

I realize this is far from an exhaustive list of courses offered by most computer science departments, but it is somewhat representative of the core classes offered by many of them. After I had gathered all the syllabi I had accumulated 10 sets of about 1000 syllabi, or roughly 10000 syllabi from 10 different computer science courses.

Retrieving Syllabi

In the previous step my getsyllabi.py script only retrieved the URL of the web page that is related to the syllabus. In order for me to determine if the URL contains a desirable syllabus I had to retrieve the HTML pages associated with each of the stored URLs. To do this I wrote a script called getsource.py which read a URL from one of my 10 text files and then, using the -source option of the web browser lynx, dumped the html source into a new text file with the name being the URL of the retrieved data. So, for each URL I ran a command like “lynx -source http://www.vt.edu/syllabus.html > http://www.vt.edu/syllabus.html”. After I ran getsource.py I had about 10000 URLs with html source.

Examining Syllabi

Now that I had a collection of syllabi with their html sources I had to determine which ones I wanted to keep and which ones I wanted to discard. To determine this I had to look at what I thought were good examples of syllabi and pick out their characteristics. After looking through many syllabi I found five that I thought were good examples. I noticed that each one of these syllabi had certain keywords that they shared. All of these syllabi contained the keywords in this list:
- Lecture or Lectures
- Course or Courses
- Grade or Grades or Grading
- Instructor or Instructors or Teacher or Teachers or Professor or Professors or Lecturer or Lecturers
- Exam or Exams or Test or Tests or Quiz or Quizzes or Homework or Project or Projects

Please note that this list of keywords is not case sensitive.
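A rough sketch of the gathering and screening steps just described is given below. It assumes PyGoogle's interface (module `google`, function `doGoogleSearch`, results exposing a `URL` attribute) and a valid Google Web API license key; the helper names and file handling are illustrative reconstructions, not the original getsyllabi.py code, and the keyword check anticipates the table-building step described just below.

```python
# Hypothetical reconstruction of the search and screening steps (PyGoogle-era API).
import google  # PyGoogle wrapper module by Mark Pilgrim

google.LICENSE_KEY = 'YOUR-GOOGLE-WEB-API-KEY'  # assumed way of supplying the key

# One tuple per keyword group from the list above; substring matching on
# lowercased text also covers the plural forms (e.g. 'lecture' in 'lectures').
KEYWORD_GROUPS = [
    ('lecture',),
    ('course',),
    ('grade', 'grading'),
    ('instructor', 'teacher', 'professor', 'lecturer'),
    ('exam', 'test', 'quiz', 'homework', 'project'),
]


def gather_urls(topic, total=1000):
    """Query the Google Web API ten results at a time, restricted to .edu sites."""
    query = '%s site:.edu' % topic
    urls = []
    for start in range(0, total, 10):  # the API returns at most 10 hits per call
        data = google.doGoogleSearch(query, start=start, maxResults=10)
        urls.extend([result.URL for result in data.results])
    return urls


def looks_like_syllabus(html):
    """Return True when every keyword group appears in the page (case-insensitive)."""
    text = html.lower()
    return all(any(word in text for word in group) for group in KEYWORD_GROUPS)


# Example use: write one URL list per course topic, as in the experiment above.
for topic in ('operating systems', 'data structures'):
    open(topic.replace(' ', '_') + '.txt', 'w').write('\n'.join(gather_urls(topic)))
```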
I then wrote a script called buildtable.py that used the Python string function find to search through each html syllabus for each item in the bulleted list above. If it found an item it would insert the proper note in the corresponding field of the table, thus building a table containing the characteristics of each html syllabus. This table had about 10000 rows, one for each html syllabus that I had gathered. An abbreviated sample of this table is found in Table 2. I also constructed a training table that contained the characteristics of the five syllabi that I thought were good examples of a syllabus. I have not provided that table as it is not very interesting. After I had my two tables constructed I then ran them through the Orange Data Mining Library’s implementation of C4.5. More specifically, I ran the table that my buildtable.py script built against my training table to generate a classification tree.

Filtering Out Undesirable Syllabi and Overview

The Orange Data Mining Library makes the filtering of data trivial as it only requires four instructions: load the training table, build the classification tree, run the test data through the classification tree, and print out the results. Figure 3 shows a high-level overview of the process I have just discussed.

Table 2: Syllabi data.

<table> <thead> <tr> <th>Lecture or Lectures</th> <th>Course or Courses</th> <th>Grade or Grades or Grading</th> <th>Instructor or Instructors or Teacher or Teachers or Professor or Professors or Lecturer or Lecturers</th> <th>Exam or Exams or Test or Tests or Quiz or Quizzes or Homework or Project or Projects</th> <th>Syllabus</th> </tr> </thead> <tbody> <tr> <td>True</td> <td>True</td> <td>True</td> <td>True</td> <td>True</td> <td>Keep</td> </tr> <tr> <td>True</td> <td>True</td> <td>True</td> <td>True</td> <td>False</td> <td>Don’t Keep</td> </tr> <tr> <td>False</td> <td>False</td> <td>True</td> <td>True</td> <td>True</td> <td>Don’t Keep</td> </tr> <tr> <td>True</td> <td>True</td> <td>True</td> <td>True</td> <td>True</td> <td>Keep</td> </tr> </tbody> </table>

Figure 3: Overview of finding, retrieving, examining, and filtering of syllabi.

Results

After running my test data through the classification tree I found that 8399 out of about 10000 met my requirements for a syllabus. After receiving these results I used a script called getfinal.py that finds the difference between the two sets of syllabi and moves the rejected syllabi out of the directories and into a directory called rejected.nameofcourse. So, what I have left are 10 directories containing the html source for what I have determined to be good quality syllabi and a set of directories called rejected.nameofcourse containing the rejected syllabi.

5. Future Work

A great deal of work can be done to expand this project. Three of the bigger things that are obviously not thoroughly researched are: More syllabi need to be gathered according to course topic - I only gathered syllabi for 10 course topics. There are many more courses that are part of computer science departments and the project is incomplete without them. This is something of a difficult problem because some of the courses that are not as common seem to have different names at different schools. For instance, a class dealing with object oriented programming might be called Object Oriented Programming, or it might be called Object Oriented Design, or it might be called Object Oriented Programming with C++, etc. Further examination of syllabi - Other characteristics can exist in syllabi; I only examined syllabi according to the words they contain.
Maybe common structures can be easily found in syllabi. Maybe URLs can be taken into consideration more when determining if a syllabus is desirable. Additional file formats need to be included - I only examined plain text and html syllabi, but other formats exist. I ran across many syllabi that were contained in PDF, PS, RTF, or Microsoft Word format. It would be useful to examine these syllabi too. References
{"Source-Url": "http://fox.cs.vt.edu:80/SSP/reports/PhillipsTR2004.pdf", "len_cl100k_base": 4631, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 18856, "total-output-tokens": 5360, "length": "2e12", "weborganizer": {"__label__adult": 0.0005865097045898438, "__label__art_design": 0.0010328292846679688, "__label__crime_law": 0.000766754150390625, "__label__education_jobs": 0.1873779296875, "__label__entertainment": 0.0002061128616333008, "__label__fashion_beauty": 0.000457763671875, "__label__finance_business": 0.000973224639892578, "__label__food_dining": 0.0009012222290039062, "__label__games": 0.00109100341796875, "__label__hardware": 0.0015869140625, "__label__health": 0.0012197494506835938, "__label__history": 0.00110626220703125, "__label__home_hobbies": 0.0004444122314453125, "__label__industrial": 0.0008363723754882812, "__label__literature": 0.0011930465698242188, "__label__politics": 0.0006747245788574219, "__label__religion": 0.0011224746704101562, "__label__science_tech": 0.10845947265625, "__label__social_life": 0.0005593299865722656, "__label__software": 0.03326416015625, "__label__software_dev": 0.65380859375, "__label__sports_fitness": 0.0005359649658203125, "__label__transportation": 0.000995635986328125, "__label__travel": 0.0005688667297363281}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 23199, 0.0206]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 23199, 0.73581]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 23199, 0.92101]], "google_gemma-3-12b-it_contains_pii": [[0, 3251, false], [3251, 7269, null], [7269, 10567, null], [10567, 13038, null], [13038, 15141, null], [15141, 15981, null], [15981, 18788, null], [18788, 20276, null], [20276, 22132, null], [22132, 23199, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3251, true], [3251, 7269, null], [7269, 10567, null], [10567, 13038, null], [13038, 15141, null], [15141, 15981, null], [15981, 18788, null], [18788, 20276, null], [20276, 22132, null], [22132, 23199, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 23199, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 23199, null]], "pdf_page_numbers": [[0, 3251, 1], [3251, 7269, 2], [7269, 10567, 3], [10567, 13038, 4], [13038, 15141, 5], [15141, 15981, 6], [15981, 18788, 7], [18788, 20276, 8], [20276, 22132, 9], [22132, 23199, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 23199, 0.05714]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
6b40eb6f57c387e0f0af7c56b5c7f04e555dee0c
Towards An Avatar Architecture for the Web of Things Michael Mrissa, Lionel Médini, Jean-Paul Jamont, Nicolas Le Sommer, Jérôme Laplace To cite this version: Michael Mrissa, Lionel Médini, Jean-Paul Jamont, Nicolas Le Sommer, Jérôme Laplace. Towards An Avatar Architecture for the Web of Things. [Research Report] Université Lyon 1 - Claude Bernard. 2015. <hal-01376637> Towards An Avatar Architecture for the Web of Things Michael Mrissa, Lionel Médini, Jean-Paul Jamont, Nicolas Le Sommer and Jérôme Laplace Abstract—The Web of Things (WoT) extends the Internet of Things considering that each physical object can be accessed and controlled using Web-based languages and protocols. In this paper, we summarize ongoing work promoting the concept of avatar as a new virtual abstraction to extend physical objects on the Web. An avatar is an extensible and distributed runtime environment endowed with an autonomous behaviour. Avatars relay on Web languages, protocols and reasoning about semantic annotations to dynamically drive connected objects, exploit their capabilities and expose their functionalities as Web services. Avatars are also able to collaborate together in order to achieve complex tasks. I. INTRODUCTION The future Internet has been envisioned as an Internet of Things\(^1\), in which billions of heterogeneous objects will be connected to the Internet using wired or wireless links. The “Web of Things” (WoT) extends the Internet of Things in order to enable access and control of physical objects using Web standards. Objects are expected to expose logical interfaces through Web services, to describe Web contents and services using semantic Web languages and annotations, and to communicate together through standard protocols in order to provide software interoperability between objects. Although the number and types of connected objects increases quickly\(^2\), the WoT is not yet a reality, as several issues must be addressed in order to seamlessly interconnect physical objects, and make these objects accessible on the Web. On the one hand, objects are heterogeneous and rarely able to communicate together because most of them implement proprietary communication protocols instead of Web standard protocols [1]. Yet, end-users can combine their capabilities, thus providing meaningful and complex functionalities. On the other hand, objects usually respond to basic requests using their sensors and actuators, whereas users require comprehensible and usable services to achieve their goals. We argue that an open market of software components and applications dedicated to connected objects that rely on Web standards will help the WoT to become a reality. Such a WoT marketplace should permit developers and industrial companies to distribute their software applications and components, and should provide end-users with software pieces allowing them to implement different functionalities into their objects in order to perform various tasks. To reach this objective, a WoT runtime environment (WoT RE) must be defined. The runtime environment must bridge the gap between resource-constrained objects and the Web, which offers complex and heavyweight, but interoperable and user-friendly technologies. We identified the following requirements for such an environment: • (R1) Autonomy and resource management: Connected objects can be autonomous devices with limited resources (battery, CPU and memory). 
Thus, the WoT RE must be able to estimate the global cost of physical actions in terms of device usage, computation and networking, to determine if an object can realize a given action.
• (R2) Live reactivity: The WoT RE must be able to adapt its behavior to its environment at runtime, regarding functional and QoS aspects. It must be able to provide a service with graceful degradation if typical service computation time exceeds time requirements.
• (R3) Computation delegation: To promote autonomy, resource management and live reactivity, the WoT RE should allow deploying code modules on the object processing unit or on a cloud infrastructure and should identify the most suitable location to execute each module.
• (R4) Safety: The WoT RE must ensure the physical and informational harmlessness of the object for people and assets, using risk analysis and regulation criteria before realizing an action.
• (R5) Disconnection tolerance: The WoT RE must be able to support connectivity disruptions between the mobile objects themselves and between the mobile objects and the access points of an infrastructure-based network.
• (R6) Interoperability: The WoT RE must be able to handle heterogeneous objects in terms of size, OS and protocols. It must also allow objects with similar physical properties to fulfill the same actions (device independence).
• (R7) User-understandable interfaces: The WoT RE must provide entry points to handle requests, and provide applications that correspond to users’ high-level goals.

The ASAWoO project (project homepage: https://liris.cnrs.fr/asawoo/) aims at providing a distributed, open and generic architecture for WoT REs, along with a WoT infrastructure that provides means to execute, secure and link WoT RE instances, and a WoT application marketplace. WoT REs are called avatars; they are endowed with an autonomous behavior and rely on Web languages and protocols. An avatar provides a virtual abstraction of a physical object on the Web. It can expose the object’s basic capabilities as high-level functionalities on the Web, connect to and interact with objects using the most appropriate protocols and languages, perform different reasoning processes to discover, build and adapt such functionalities, query external Web services, interact with users and other objects, and execute generic WoT applications. In the remainder of this paper, we show how our architecture design meets the requirements above. The (Rn) notation indicates a specific design choice that answers the n-th requirement. Our preliminary results give good insight into the relevance of our approach. The remainder of the paper is organized as follows. Section II describes the approach and gives details on the avatar architecture and its lifecycle. Section III gives details on the different contributions involved in the avatar architecture. Section IV presents our prototype to show the feasibility of our work. Section V discusses related work and highlights the novelty of our contribution. Section VI discusses our ongoing results and gives some guidelines for future work.

II. Avatar Architecture and Lifecycle

Avatars possess a complex component-based architecture, allowing them to take into account additional information that is not a priori visible to connected objects but is available from other Web sources and/or avatars, in order to add intelligence to object behaviors.
The main components that we propose to leverage such high-level behaviors in WoT applications rely on Web standards to provide interoperability among objects and on advances in various domains, such as component-based programming, embedded systems, cloud computing, delay-tolerant networks, Web and semantic Web technologies and multi-agent systems to leverage single and collective intelligence in object behaviors. As illustrated in Figure 1, an avatar can be distributed on an object and in a cloud infrastructure depending on the resources offered by the object it extends. We identify three categories of connected objects: 1) Resourceful Objects: provide software services and embed a Web server that offers service interfaces. It is often simple to link these objects with other objects or software services, and to deploy all the avatar components on them. 2) Resource-constrained Objects: cannot embed all avatar components due to restricted resources but it is possible to link them to distant hosts that can embed missing components. 3) Resourceless Objects: These objects are passive objects, detected using unique identifiers such as QR codes or RFID tags. They do not have any computation, storage and memory capability. Their avatars are deployed on the cloud or on the local network gateway. According to these categories, it is possible to adapt the deployment of software components of the avatar to different places to globally improve its operation. In the following, we present the avatar architecture and its lifecycle. A. Avatar Architecture Our avatar runtime environment has been designed as an OSGi service-oriented architecture. The implementation of the runtime environment is entirely decoupled from its logical architecture. Consequently, it is possible to adapt the avatar to different types of objects dynamically (R2), as well as to adapt the distribution of the services implementing the avatar to different places - object, local network gateway and cloud - in order to improve the avatar execution (R1, R3). Therefore, the avatar architecture is structured as a set of "manager" components (OSGi services) with each a particular role. Each service can be transparently (R3) and at runtime (R2) deployed on any physical location depending on the object and application context. Services interact with each other according to the principles that guide the behavior of the avatar during its lifecycle presented below (Sec. II-B). With the architecture come the necessary inter-service middleware and service deployment and communication schemes to build the avatar as a common logical entity, sometimes requiring distant communication. Figure 2 shows the services available in the architecture, grouped into functional modules as follows. 1) Core module: The core module includes components that are central to the architecture and reused in different steps of the avatar lifecycle. The reasoner allows reasoning about knowledge representation and is useful to the local functionality manager, the context manager and the privacy manager. The local cache improves middleware performance and speeds up data exchange between the different services of the architecture. The Component deployment manager is a core component that decides when and where to deploy the other components in the architecture. This component is essential to respect the (R1), (R2), (R3) and (R5) requirements. 
2) **Web service module**: The **WoT Application Server** is the endpoint that exposes simple (one involved) or complex (multiple involved) functionalities available as applications. Both local (one avatar/object involved) and collaborative (multiple avatars/objects involved) applications are available to other avatars and end-users. The **HTTP Client** allows the avatar to interact with an external Web service available on the Web, including the applications another avatars provide. These two components are also in charge of implementing the inter-avatar negotiation processes, using a Web service-based communication scheme. 3) **WoT Application module**: A **WoT application** (i.e. end-user understandable, high-level object behavior) is executed inside a **WoT Application Container** that can be physically distributed over the physical layers of the architecture (object, gateway, cloud) thanks to the **WoT Application Deployment Manager** (R7). The different parts of the application are implemented as “code modules” that are cross-compiled to be either executed on the object or on the gateway/cloud wrt. contextual adaptation decisions (R2). 4) **Local functionality module**: The **Capabilities Manager** exchanges with the object to discover and identify its capabilities (see Sec. III-A). The **Local Functionality Manager** deduces available functionalities from the set of capabilities of the capabilities manager (see Sec. III-C). To do so, it also gets helped by the context and privacy managers and the reasoner to reason about exposable functionalities according to the current situation. 5) **Collaboration module**: An avatar must identify other avatars that can provide functionality. To enable object collective behavior, the **Collaborative Functionality Discovery Manager** allows to look for external functionality in the avatar community. By observing the activity of other avatars in its immediate environment, the **Collaborative Agent Manager** can identify if its goals are compatible with the goals of other avatars. It can also note if a conflict with other avatars occurs (resource/function access). According these interaction situations (obstruction, independence, collaboration...), negotiations with other avatars could be achieved to expose collaborative functionality in the WoT Application server (cf Sec. III-D). 6) **Communication module**: The **Network Interface Manager** and the **Application Protocol Adaptation Manager** respectively select the right network (Wi-Fi, Bluetooth, Zigbee,...) and application protocols (CoAP, HTTP) according to available communication interfaces and performance needs (throughput and energy consumption). In order to support connectivity disruptions due to mobile contexts, we have introduced in the communication module of the avatars a **DTN Communication Manager**, responsible for initializing and configuring the opportunistic communication protocol we have defined and that relies on the “store, carry and forward” principle (R5). These managers are described in Sect. III-B. 7) **Filtering module**: The **Context Manager** aggregates data from domain ontologies, external services and environment events into contextual situations [2], in order to perform semantic multi-level adaptation, to 1) identify on which avatar to expose a collaborative function; 2) decide which functionalities to expose wrt. 
object context; 3) choose where (on which layer) to deploy each architecture component and application code module; 4) determine the most appropriate protocol stack for the current communication scheme and contextual conditions. The **privacy manager** will rely on models developed in previous work [3] that describe a query in terms of user role, purpose of the query and data queried, in order to reason about privacy constraints and protect data (R4). (For simplicity, we assume in this paper that an avatar community is delimited to the local network; an observer design pattern can also be implemented here for dynamic updates, to answer the (R2) requirement.)

8) Interoperability module: The Appliance communication manager is the high-level component of the interoperability module. It communicates with other components in the architecture through its high-level interface, and also works with the appliance configuration manager and the appliance driver described below to communicate with the object. The Appliance configuration manager relies on a database of object configuration tools to associate communication methods with objects. For instance, when a Lego Mindstorms sends its ID on the USB wire, the configuration manager provides the required drivers to load to enable communication with the object. The Appliance driver loads and uses the drivers to send and receive messages to and from the object. Thanks to its high-level uniform interface, drivers are dynamically loaded and low-level object communication is abstracted away (R6). All the managers get involved at different, sometimes overlapping stages during the avatar lifecycle. We describe the lifecycle in the following section.

B. Avatar lifecycle

The avatar begins its lifecycle with its instantiation by the avatar builder, which creates an avatar instance. The avatar builder is designed to be located on the local network gateway or in the cloud layer, so that it becomes possible to detect the arrival of an object in the network (new wifi connection, new bluetooth device detected, new USB device plugged, etc.). Upon creation, the avatar deploys its main components, then connects to and exchanges a set of messages with the object it extends to discover its actuators and sensors (core, communication and interoperability modules). Once the sensors and actuators have been discovered, they form a list of capabilities (local functionality module). Based on this list, the avatar decides, with the help of a reasoning engine, which capabilities to expose as functionalities (filtering module). Functionalities are exposed as Web services and can be discovered and invoked by other avatars in the context of WoT applications [4] (communication, WoT application, collaboration and Web service modules). During the lifetime of the avatar and object, services are queried by other avatars and end users. Sometimes pluggable devices are added to or removed from the object, or environmental information changes (day/night, weather, etc.). In such cases, the avatar is notified of the change (via polling or an observer pattern) and updates its capabilities and exposed functionalities accordingly, which answers the live reactivity requirement. When the lifecycle comes to an end (object disconnection), the avatar notifies its community that the services it exposed are not available anymore and terminates all the in-memory processes it is attached to.

III. AVATAR COMPONENTS
III. AVATAR COMPONENTS

A. Avatar/Object Introspection

Physical objects can have different capabilities in terms of processing, memory, communication, sensing and action. In order to discover an object's resources and, among other things, decide how to deploy the avatar of a given object, we perform an introspection of the object using SAJE (System-Aware Java Environment)\(^6\). SAJE is part of the hardware abstraction layer of the ASAWoO middleware platform. It makes it possible to obtain information about the capabilities of physical objects, and to control some components of these objects, such as the communication interfaces. The discovery of resourceless objects is not performed directly on the objects, but instead on the devices they are attached to. This discovery is performed continuously on some objects, because sensors or actuators can be plugged (or unplugged) dynamically. Based on the information returned by SAJE, the deployment manager of the ASAWoO middleware platform is able to dynamically deploy, from a remote repository, the OSGi bundles that allow monitoring and controlling the hardware components (e.g., sensors, actuators) of the physical objects. These bundles also provide a semantic description of the capabilities of the objects. These capabilities are then used by the ASAWoO middleware platform in order to decide which functionalities can be deployed dynamically on the object (or on the cloud).

\(^6\) https://www-casa.irisa.fr/saje/

B. Communication protocols

Depending on the capabilities and on the execution context of the objects, the avatar runtime environments are able to dynamically select the most suited communication protocols. Thus, they can either use a communication protocol based on HTTP (over TCP), a standard UDP-based version of CoAP, or a disruption-tolerant version of CoAP that implements the "store, carry and forward" principle in order to support connectivity disruptions. Such disruptions can be frequent and unpredictable in use cases involving mobile devices (e.g., robots) equipped with short-range wireless communication interfaces such as Bluetooth, Wi-Fi or Zigbee. Opportunistic and disruption/delay tolerant (DT) communications have been studied in several research works and projects over the last years [5], but the issues introduced by service-oriented opportunistic (or DT) computing have been addressed in only a few works [6], [7]. The disruption-tolerant CoAP-based protocol currently implemented in avatars relies on the solution we proposed in [7].

C. Semantic processing

Semantic processing is a major feature of our architecture. An avatar needs to reason about capabilities and functionalities, while taking into account several aspects from privacy to security and context. In [4], we proposed a generic model to describe and exploit the semantic relationships between a functionality, used from the application perspective, and a capability, which expresses the possibility for an object to realize an action. Our model enables domain-dependent instances to populate the ontology and provides the means for avatars to get information about which simple or complex functionalities can be exposed, given a set of available capabilities. It also allows identifying complex functionalities that are only partially implemented with the help of existing objects, opening the possibility for additional applications when other avatars enter the community.
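As a simplified illustration of the capability-to-functionality deduction performed in Sec. III-C, the sketch below marks a functionality as exposable when all of the capabilities it requires are available. The real platform performs this reasoning over an OWL ontology; the map-based encoding and the example names used here are assumptions made for brevity.

```java
// Hypothetical illustration of capability-to-functionality deduction:
// a functionality is exposable when every capability it requires is available.
// In the actual platform this reasoning is done over an OWL ontology;
// the map-based encoding below is only a simplified stand-in.
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class FunctionalityDeductionSketch {

    public static void main(String[] args) {
        // Domain knowledge: functionality -> required capabilities (illustrative names).
        Map<String, Set<String>> requires = Map.of(
                "move-to-location", Set.of("wheel-motor", "gps"),
                "report-temperature", Set.of("temperature-sensor"));

        // Capabilities discovered on the object by the Capabilities Manager.
        Set<String> available = Set.of("temperature-sensor", "wheel-motor");

        List<String> exposable = requires.entrySet().stream()
                .filter(e -> available.containsAll(e.getValue()))
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());

        // Only "report-temperature" qualifies: "move-to-location" lacks "gps",
        // but could become exposable later with the help of another avatar.
        System.out.println("Exposable functionalities: " + exposable);
    }
}
```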
The context manager also aims at performing multi-level adaptation [8]. It relies on a semantic context model [9] that can be processed at different abstraction levels and populated with functional and QoS data provided by object sensors and external resources, and on an adaptation engine [10], both compatible with environmental data and high-level functionalities.

D. Enabling Collective Behavior between Avatars

An avatar inherits goals, knowledge, sensors and actuators from its physical object. Its capabilities and its knowledge are extended with Web information and services, but also through its community. To meet a goal, an avatar needs resources (energy, storage, CPU, bare objects, ...) and skills (Web services, other avatars' skills, ...). If local resources and skills are available to accomplish a goal, an avatar does not generally need collective features. If an avatar has all the skills to accomplish a task but the resources are not sufficient, it will be in a situation of obstruction, and a coordination mechanism will be necessary to avoid harmful interactions. If resources are sufficient but the avatar does not have all the required skills, collaboration mechanisms will be needed. In these three cases, we assume that the goals of all the avatars are compatible. If this is not the case, depending on the availability of resources and skills, avatars can be in antagonism (individual or collective conflicts if resources are insufficient, individual or collective competition if the skills are not available). A possible solution is then to establish a coalition against a subset of the other avatars. Avatars, resources and services can appear or disappear dynamically, so an avatar must continuously be aware of the situation of interaction in which it operates.
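The decision rules of Sec. III-D can be paraphrased as a small classification over goal compatibility, resource sufficiency and skill availability, as in the hypothetical Java sketch below; it is not taken from the avatar implementation.

```java
// Sketch of the interaction-situation analysis described in Sec. III-D.
// The enumeration and the decision rules paraphrase the text; they are not
// lifted from the ASAWoO implementation.
public class InteractionSituationSketch {

    enum Situation { INDEPENDENCE, OBSTRUCTION, COLLABORATION, ANTAGONISM }

    static Situation classify(boolean goalsCompatible,
                              boolean resourcesSufficient,
                              boolean skillsAvailable) {
        if (!goalsCompatible) {
            // Incompatible goals lead to conflict or competition (antagonism).
            return Situation.ANTAGONISM;
        }
        if (resourcesSufficient && skillsAvailable) {
            return Situation.INDEPENDENCE;   // no collective feature needed
        }
        if (!resourcesSufficient && skillsAvailable) {
            return Situation.OBSTRUCTION;    // coordination needed on shared resources
        }
        return Situation.COLLABORATION;      // missing skills: ask other avatars
    }

    public static void main(String[] args) {
        System.out.println(classify(true, true, false)); // COLLABORATION
    }
}
```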
IV. PROTOTYPE

Our current prototype is divided into the following parts:

- A WoT physical infrastructure that contains a gateway to connect objects and a WoT Processing Unit (e.g. a cloud infrastructure) that hosts the avatar parts located outside of the objects.
- A WoT logical infrastructure that contains an avatar container and the different ontologies and repositories depicted in Section II.
- An avatar architecture, implemented in Java/OSGi, designed so that the avatar components can be instantiated and invoked locally or remotely; these components compose the core module of the avatar architecture.
- A WoT Runtime Environment that implements the WoT component framework on the object layer; its implementation language depends on the object OS, and it bridges this framework with the object hardware.
- A discovery module implemented using the SAJE library.
- A Web service module using the JAX-RS library.
- An interoperability module based on the AllJoyn framework.
- A set of OWL functionality and capability classes that describe domain knowledge according to different scenarios.
- A semantic local functionality module [4] implemented using the Java OWL API and operated using the HermiT reasoner.
- A preliminary collaboration module adapting the ABT algorithm into a set of HTTP exchanges between RESTful resources.
- A communication module that implements CoAP and disruption-tolerant protocols, based on the solution proposed in [7].

We are currently implementing the filtering and WoT application modules.

V. RELATED WORK: WEB OF THINGS INFRASTRUCTURES

The Web of Things integrates various research and application fields, among which embedded systems, wireless networks, software infrastructure, Web technologies and artificial intelligence. According to [11] and, more recently⁸, a Web of Things infrastructure should: allow discovering objects without configuration, dynamically adapt to its environment, be secured so that things and applications are harmless and avoid privacy issues, allow manual or semi-automatic service composition, and provide services that make sense for the users. From a more technical point of view, it should:

- rely on Web standards to achieve interoperability [12]
- take into account several communication models (request/response, message-oriented, event-based, publish-subscribe, streaming...) [13], [14]
- allow executing code on objects or delegating it to the cloud
- semantically deduce available functions and enrich data [15]
- open an easy way for developing marketable applications⁹
- encourage developers to respect good practices¹⁰

Several ongoing projects (Webinos¹¹, Compose¹², SensorMeasurement¹³, CityPulse¹⁴...) and infrastructures ([12], [13], [15]) are related to the Web of Things. Each one highlights a specific point of view or different properties. For instance, the COMPOSE project is oriented towards standardizing WoT marketable applications, CityPulse focuses on event processing, the SensorMeasurement project proposes a toolkit to reason on sensor data, and other work focuses on object security¹⁵. However, if the lack of a standard specification for developing WoT infrastructures can only be solved by organizations such as the World Wide Web Consortium, it is still possible to define a comprehensive architecture for software objects that represent physical ones on the Web, such as avatars do. Such architectures can be contained in a WoT infrastructure that will cope with yet-to-come WoT standards, assuming that avatar communication schemes follow state-of-the-art principles in terms of services and protocols. A comprehensive architecture for software objects that can cope with multiple points of view has been proposed for the IoT in the FI-Ware project, but to the best of our knowledge, a similar architecture that targets WoT standards is missing. Therefore, such an architecture can take advantage of advances in each field, and one can develop modules related to specific concerns, as long as these works can be encapsulated in components. Using this approach, our avatar architecture proposes different modules that allow plugging in heterogeneous objects, communicating with them using different paradigms and protocols, deducing and reasoning about their functionalities, adapting their behavior according to semantized context representations, collaborating with one another, and exposing standard services to the users.

¹⁰ http://iot-datamodels.blogspot.fr/2014/05/design-patterns-for-internet-of-things.html
¹¹ http://www.compose-project.eu/
¹² http://www.compose-project.eu/activities.html
¹³ http://www.sensormeasurement.appspot.com/
¹⁴ http://www.ict-citypulse.eu/
¹⁵ http://www.ict-citypulse.eu/activities.html

VI. CONCLUSION

The connection between the Web and physical objects is not yet a reality. In this paper, we propose an avatar architecture that enables connecting objects to the Web and improving their skills with additional intelligence. Avatars receive data from the objects they extend and provide reasoning capabilities that drive objects towards cleverer behaviors, thus naturally improving object intelligence and raising object possibilities to a new level.
Future work includes developing multi-agent communication protocols for effective exchange and creation of value-added functionality to be exposed to end-users, as well as studying the limitations of the different parts of our architecture.

ACKNOWLEDGEMENT

This work is supported by the French ANR (Agence Nationale de la Recherche) under grant number ANR-13-INFR-012.

REFERENCES
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01376637/file/Liris-7036.pdf", "len_cl100k_base": 5277, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 20796, "total-output-tokens": 6829, "length": "2e12", "weborganizer": {"__label__adult": 0.00042724609375, "__label__art_design": 0.0009527206420898438, "__label__crime_law": 0.0004715919494628906, "__label__education_jobs": 0.0004274845123291016, "__label__entertainment": 0.0001437664031982422, "__label__fashion_beauty": 0.00021970272064208984, "__label__finance_business": 0.00034546852111816406, "__label__food_dining": 0.0004916191101074219, "__label__games": 0.0005846023559570312, "__label__hardware": 0.0026397705078125, "__label__health": 0.0008254051208496094, "__label__history": 0.0005860328674316406, "__label__home_hobbies": 0.00013208389282226562, "__label__industrial": 0.0006775856018066406, "__label__literature": 0.0003609657287597656, "__label__politics": 0.0004532337188720703, "__label__religion": 0.0007939338684082031, "__label__science_tech": 0.1951904296875, "__label__social_life": 0.000125885009765625, "__label__software": 0.01467132568359375, "__label__software_dev": 0.77783203125, "__label__sports_fitness": 0.0003466606140136719, "__label__transportation": 0.0010061264038085938, "__label__travel": 0.0003459453582763672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 31819, 0.01447]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 31819, 0.70687]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 31819, 0.8863]], "google_gemma-3-12b-it_contains_pii": [[0, 372, false], [372, 5188, null], [5188, 10487, null], [10487, 14494, null], [14494, 20680, null], [20680, 26228, null], [26228, 31819, null]], "google_gemma-3-12b-it_is_public_document": [[0, 372, true], [372, 5188, null], [5188, 10487, null], [10487, 14494, null], [14494, 20680, null], [20680, 26228, null], [26228, 31819, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 31819, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 31819, null]], "pdf_page_numbers": [[0, 372, 1], [372, 5188, 2], [5188, 10487, 3], [10487, 14494, 4], [14494, 20680, 5], [20680, 26228, 6], [26228, 31819, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 31819, 0.0]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
d12e2774928d5f1f12d7f441df49e052d836707d
Predicting Performance in an Introductory Programming Course by Logging and Analyzing Student Programming Behavior

Christopher Watson, Frederick W.B. Li and Jamie L. Godwin
School of Engineering and Computing Sciences, University of Durham, Durham, United Kingdom
{christopher.watson, frederick.li, j.l.godwin}@durham.ac.uk

Abstract— The high failure rates of many programming courses mean there is a need to identify struggling students as early as possible. Prior research has focused upon using a set of tests to assess a student's demographic, psychological and cognitive traits as predictors of performance. But these traits are static in nature, and therefore fail to encapsulate changes in a student's learning progress over the duration of a course. In this paper we present a new approach for predicting a student's performance in a programming course, based upon analyzing directly logged data describing various aspects of their ordinary programming behavior. An evaluation using data logged from a sample of 45 programming students at our university showed that our approach was an excellent early predictor of performance, explaining 42.49% of the variance in coursework marks – double the explanatory power of the closest related technique in the literature.

Keywords- Learning Analytics, Prediction, CS1, Behavior.

I. INTRODUCTION

Due to a reputation for high failure rates [1], predicting a student's performance in a first programming course is a well-studied problem, and over the past fifty years various predictors have been proposed. Early work mainly used standardized aptitude tests to predict performance [2]. As programming became more widespread, researchers (1980-90) began to explore a greater range of cognitive [3], psychological [4], and demographic [5] predictors. Researchers over the past two decades (1990-2010) extended prior work by exploring similar factors [6][7] and the predictive potential of new innovations in pedagogy [8][9]. However, a limitation of studies to date is their tendency to use lengthy tests that often yield inconsistent results. Given potentially high enrollment numbers, the data gathered by such tests can take a considerable amount of time for an instructor to process. Even if a test was indicative of performance, by the time it was processed it may be too late for students to withdraw, or for instructors to intervene to prevent students from failing [7].

The criteria used for prediction are the main limitation of prior studies. Whilst cognitive, psychological, behavioral, or demographic traits may be indicative of performance, they are not directly related to the regular programming behavior of a student, or to the programming tasks which they are required to perform. For these reasons, the indirect criteria used by prior studies fail to reflect changes in the learning progress and/or the learning behavior of a student over time. There is a need to explore new predictors of performance which are not based upon indirect criteria, but are instead based upon criteria which can be automatically measured and which directly reflect changes in a student's learning progress. As well as being able to identify weaker students, such predictors could be used to drive an expert system [10][11] – providing weaker students with appropriate pedagogical interventions when required. A suitable measure could be based upon profiling a student by logging data describing various aspects of their ordinary programming behavior.
Whilst recent research has provided visualizations of such data to instructors so that a manual intervention could be made [12], only [13] has attempted to collectively quantify several aspects of programming behavior into a predictor of performance. Jadud [13] proposes an algorithm called the Error Quotient (EQ) (revised in [14]). The algorithm uses a scoring function based upon the number of errors a student encountered and how successive compilation failures in a session compare in terms of error message, location, and edit location. An overall score (range 0-1) for a student's performance during a session is computed by averaging the scores of a set of successive compilation events. A higher EQ is indicative of weaker students. Although previously used by several studies [11][15][16], the EQ was shown to be a weak predictor of performance. This could be due to several methodological flaws concerning the incompleteness and inaccuracy of the approach, which we attempt to address and expand upon in our work (Sec. V). Our contributions include:

- A unique approach for predicting performance based upon how a student responds to different types of error compared to their peers (proposing time as a predictor),
- A substantial improvement in terms of explanatory power and predictive accuracy by addressing the shortcomings of the main related approach [13][14].

II. ABOUT THE DATASET USED IN THIS STUDY

To explore possible predictors of achievement, we used a sample of students who studied the 2012/2013 Introduction to Programming (IP) course at our university. Programming behavior was directly logged by using an extension for the BlueJ IDE. Each time a student compiled their code on a university PC, the extension would log a snapshot of their program source code along with the event type (success or fail), timestamp, error message reported, and line number if applicable. Similar data was collected for invocations. As the use of final exams has been criticized as a means to accurately measure programming ability [9], we use a student's overall coursework mark as the reference criterion of this study. This consisted of a weighting of their marks on a mid-term exam (25%), project (25%), practical exam (40%), and weekly lab exercises (10%). A total of 45 students (42 male) provided us with consent to use their logged data. Seven students indicated they had prior programming experience, but the majority indicated that the longest program they had written prior to course commencement was a medium-length program (<2000 lines). Although data was logged over the duration of the course, due to the nature of student assignment work, which involved intentionally propagating errors into source code, we restrict our analysis to the data gathered from 14 sessions (Term 1: weeks 3-9, Term 2: weeks 12-18).

III. THE WATWIN ALGORITHM

The uniqueness of our algorithm is to incorporate a scoring approach where a student is relatively penalized based upon the amount of time that they take to resolve a specific type of error, compared to the resolve times of their peers. In the first stage of our algorithm, logged programming behavior is used to construct a set of successive compilation-event pairings, so that a student's responses to different errors can be analyzed. This requires constructing consecutive pairings for each file that a student has attempted to compile, and estimating the amount of time a student has spent working on an error.
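A minimal Java sketch of this first stage is given below: events are grouped by file name and consecutive compilations of the same file are paired. The record layout mirrors the logged fields described above (file, timestamp, event type, error message, line number), but the exact data structures are assumptions made for illustration.

```java
// Hypothetical sketch of stage one: build per-file consecutive compilation
// pairings from logged events. The record layout is an assumption.
import java.util.*;
import java.util.stream.Collectors;

public class PairingSketch {

    record CompileEvent(String file, long timestamp, boolean success,
                        String errorMessage, int errorLine) { }

    record Pairing(CompileEvent first, CompileEvent second) { }

    /** Groups events by file name and pairs consecutive compilations of each file. */
    static List<Pairing> buildPairings(List<CompileEvent> session) {
        Map<String, List<CompileEvent>> byFile = session.stream()
                .collect(Collectors.groupingBy(CompileEvent::file));
        List<Pairing> pairings = new ArrayList<>();
        for (List<CompileEvent> events : byFile.values()) {
            events.sort(Comparator.comparingLong(CompileEvent::timestamp));
            for (int i = 0; i + 1 < events.size(); i++) {
                pairings.add(new Pairing(events.get(i), events.get(i + 1)));
            }
        }
        return pairings;
    }
}
```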
In the second stage, each pairing is scored by assigning penalties based upon aspects of behavior which previous research and our own have identified as indicative of weaker-performing students. Our algorithm is outlined as:

**Input:** A set of student programming logs (compilation and invocation) for all files a student compiled during a session.

1. **Prepare** a set of compilation pairings using the process presented in Sec. III (A).
2. **Quantify Programming Behavior**
   - **Score** each compilation pairing produced from (1) by using the scoring algorithm (Fig. 1).
   - **Normalize** each score by dividing by 35 (the maximum possible score for each pairing).
   - **Average** the normalized scores of all pairings.

**Output:** The mean average of all pairings (in the range 0-1), which is taken as the student's Watwin score for the session.

A score of 0 indicates that the student encounters no errors over a session. A score of 1 indicates that every compilation ended in an error, and that the student spent substantially longer than their peers between successive compilation events. The closer the score is to 0, the stronger the student.

A. Preparing a Set of Compilation Pairings

1) **Pair Construction.** For each file that a student attempted to compile during a session, first construct a tuple of pairings \( \{e_1, e_2\}, \{e_2, e_3\}, \ldots, \{e_{n-1}, e_n\} \), using the compilation events associated with that file, ordered by timestamp. A naïve way to construct pairings would be to use the natural order in which events occurred during a session. But this would fail to take into account the possibility of a student working on multiple files simultaneously, and can lead to an inaccurate representation of their programming behavior. For example, in a pairing \( \{e_i, e_j\} \) where \( e_i \) and \( e_j \) represent compilations of two distinct files, if the event type of \( e_i \) was 'fail' and the type of \( e_j \) was 'success', then the pairing \( \{e_i, e_j\} \) would incorrectly convey that the student resolved the error of \( e_i \).

2) **Pair Pruning.** Identify and remove all pairings \( \{e_i, e_j\} \) where the code snapshots of \( e_i \) and \( e_j \) are identical. These cases can be caused by a 'compile project' feature of development software, and can artificially inflate the total number of compilation pairings. To take into account superficial changes which may have been made between compilations, such as adding comments or modifying layout, we first remove comments from the snapshots of \( e_i \) and \( e_j \) by using a regular expression. A standardized layout is then applied to the snapshots, which are then compared for a match. If the snapshots are identical, then \( \{e_i, e_j\} \) is removed. Also remove pairings where the event type of \( e_j \) was 'success'.
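The pruning of superficially changed snapshots can be sketched as follows: comments are stripped with regular expressions and whitespace is normalized before the two snapshots are compared. The regular expressions shown are simplified assumptions rather than the ones used in our implementation.

```java
// Illustrative sketch of the pruning step: pairings whose snapshots differ only
// in comments or layout are treated as identical and removed. The regular
// expressions below are simplified assumptions.
public class PruningSketch {

    /** Strips line and block comments, then collapses all whitespace. */
    static String normalize(String source) {
        String noBlock = source.replaceAll("(?s)/\\*.*?\\*/", "");
        String noLine = noBlock.replaceAll("//[^\\n]*", "");
        return noLine.replaceAll("\\s+", " ").trim();
    }

    /** A pairing is pruned when both snapshots normalize to the same text. */
    static boolean shouldPrune(String snapshotA, String snapshotB) {
        return normalize(snapshotA).equals(normalize(snapshotB));
    }

    public static void main(String[] args) {
        String a = "int x = 0; // counter";
        String b = "int x = 0;   /* renamed comment */";
        System.out.println(shouldPrune(a, b)); // true: only comments/layout changed
    }
}
```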
3) **Filtering Commented and Deletion Fixes.** Whilst deleting and commenting out code blocks can yield compilable files, these strategies provide little evidence of a student's understanding of how to repair the actual fault. These actions can also be performed quickly; therefore the time taken to resolve an error in this manner may not be representative of the time taken to resolve it using an actual fix. Deletion fixes are detected by computing the diff ratio between the snapshots of \( e_i \) and \( e_j \): if the count of insertions and changes is 0, and the count of deletes is > 0, then the pair is removed. Commented fixes are detected and removed by extracting the region of code surrounding the error location of \( e_i \), and using a regular expression to determine whether the same fragment has merely become commented out in the snapshot of \( e_j \).

4) **Error Message Generalization.** Error messages within each compilation event pairing \( \{e_i, e_j\} \) are generalized by removing all identifier information. This allows us to build a profile for different classes of error, rather than for single specific messages. For example, "unknown class - Pet" is generalized to "unknown class".

5) **Time Estimation.** The final step involves estimating the amount of time that a student has spent working on each compilation pairing \( \{e_i, e_j\} \). The simplest approach would be to directly compute the difference between the timestamps of \( e_i \) and \( e_j \). But as our pairings are constructed on a per-file basis, this would fail to take into account whether a student has spent time working on other files between \( e_i \) and \( e_j \). We therefore first construct a combined sequence of invocation and compilation events \( \{h_1, h_2, \ldots, h_k\} \) for all files in a session, ordered by timestamp. For every \( \{e_i, e_j\} \), if there exists an \( h_k \) whose timestamp lies between the timestamps of \( e_i \) and \( e_j \), we estimate the time spent on \( \{e_i, e_j\} \) as the difference between the timestamps of \( e_i \) and \( h_k \). The assumption is that the student has stopped working on the source of \( e_i \), and has instead only worked on the source code associated with \( h_k \).
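The time estimation of step 5 can be sketched as below, assuming a session-wide event sequence sorted by timestamp; type and helper names are illustrative only.

```java
// Sketch of the resolve-time estimation (step 5). It assumes the combined
// event sequence is sorted by timestamp and that events carry a file name;
// names and types are hypothetical.
import java.util.List;

public class ResolveTimeSketch {

    record Event(String file, long timestamp) { }

    /**
     * Time spent on a pairing (first, second): normally second - first, but if
     * the student switched to another file at some intervening event h, only
     * the time up to h is counted.
     */
    static long estimateResolveMillis(Event first, Event second, List<Event> session) {
        for (Event h : session) {
            boolean between = h.timestamp() > first.timestamp()
                           && h.timestamp() < second.timestamp();
            if (between && !h.file().equals(first.file())) {
                return h.timestamp() - first.timestamp();
            }
        }
        return second.timestamp() - first.timestamp();
    }
}
```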
B. Quantifying Ordinary Programming Behavior

1) Identifying Appropriate Predictors. Before developing a mechanism to meaningfully quantify a student's behavior, we first had to determine which aspects could indicate that they were struggling to produce syntactically valid code. Prior research by [13] suggests that behavior exhibited by weaker students includes producing compilation pairings where both events result in compilation failures, have the same generalized message, and have the same error location. [15] found significant correlations between performance and the types of compilation pairings where both events resulted in compilation failure, and marginal correlations for pairings with the same message. Using our dataset we performed similar studies by correlating the average number of specific types of pairings which a student produced during a session with performance. Significant correlations were found for the average number of pairings in which both event types were compilation failures \((r(45)=-.43, p<.01)\), for the average number of pairings where the generalized error message was the same \((r(45)=-.47, p<.01)\), and for the average number of pairings where the generalized error messages were different \((r(45)=-.39, p<.01)\). A correlation was also found between the average number of pairings with the same error location and performance \((r(45)=-.26, p<.01)\). Our findings are consistent with [13] and [15], indicating that stronger programmers are associated with making fewer repeated errors, and will usually succeed in resolving an error in the next compilation. A predictor which previous research [2-9][13-16] has not explored is the amount of time which a student takes to resolve an error. Research by [17] showed that the resolve times of certain types of errors vary based upon student ability; however, they did not use this variable to predict performance.

We also hypothesized that, in addition to having a higher frequency of errors, weaker students would take longer to resolve errors than stronger students. After first removing outliers using the \(2MAD\) rule [18], we found a strong significant correlation between a student's mean resolve time and performance \((r(45)=-.53, p<.01)\), which would seem to confirm our hypothesis. Because of this, we have incorporated resolve time as a predictor in our scoring model. However, different types of error can be more difficult for a student to resolve than others. After grouping the resolve times into 7 distinct classes of error (syntax, computation, identifiers, scope, exceptions, inheritance, abstraction) [11], a non-parametric Kruskal-Wallis test [18] confirmed that resolve times were significantly different between classes of error \((\chi^2(6)=1512.88, p<.01)\).

2) Scoring Programming Behavior. Based upon these findings, instead of considering the amount of time that a student takes to resolve any error, we consider the time they take to resolve a generalized type of error (Sec. III A(4)), in comparison to a distribution of the resolve times of their peers. As these distributions are generally positively skewed, we use the robust \(2MAD\) approach [18] to remove outliers, and apply a penalty based upon where a student's resolve time lies in the distribution. If their resolve time is more than one deviation below the mean, then they have resolved the error much faster than their peers, so we apply a low penalty. If their resolve time is more than one deviation above the mean, then they have resolved the error much more slowly than their peers, so we apply a higher penalty. Otherwise, we apply a mid-range penalty. The main advantage of scoring students in this manner is that we can implicitly take into account the relative difficulty of different types of error. For instance, suppose a student resolved a GUI error in 30 seconds. Compared to their peers, this may be a good time, and the student would incur a low penalty. However, if they took 30 seconds to resolve a ';' expected error, then compared to their peers this may be a bad time, and the student would incur a higher penalty. After scoring all pairings using the scoring algorithm (Fig. 1), the scores of all pairings are normalized and averaged to produce a Watwin score.

3) Deriving Fair Penalties. The penalties assigned in the scoring algorithm (Fig. 1) were not chosen arbitrarily. We first experimented by weighting the penalties of each component based upon the strengths of their correlations with performance. But this produced a narrow range of Watwin scores, and we felt that a better spread of individuals was required. We therefore carried out a brute-force search of the space surrounding the parameters we had originally chosen. The regression models generated were ranked based upon their explanatory power, and penalties were then determined by repeated random sub-sampling of the strongest 100,000 results. Although not yielding the strongest possible explanatory model for our dataset, the derived parameters had the advantage of spreading the Watwin scores whilst simultaneously reducing the deviation between a student's session scores. Along with the cross-validation we performed (Sec. IV), this supports the generalizability of our approach to independent datasets.

Figure 1. Watwin Scoring Algorithm.
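The relative scoring idea of Sec. III-B(2) is illustrated by the following sketch: outliers are removed with a 2*MAD rule, and a penalty is chosen according to where the student's resolve time falls relative to the peer distribution for that class of error. The numeric penalty constants are placeholders; the actual values are those of the scoring algorithm in Fig. 1.

```java
// Hedged sketch of the relative scoring idea: a resolve time is penalized
// according to where it falls in the peer distribution for the same class of
// error. Outliers are first removed with a 2*MAD rule. The penalty constants
// are placeholders, not the values used in the paper.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class RelativePenaltySketch {

    static double median(List<Double> xs) {
        List<Double> s = xs.stream().sorted().collect(Collectors.toList());
        int n = s.size();
        return n % 2 == 1 ? s.get(n / 2) : (s.get(n / 2 - 1) + s.get(n / 2)) / 2.0;
    }

    /** Removes values further than 2 * MAD from the median. */
    static List<Double> removeOutliers(List<Double> xs) {
        double med = median(xs);
        double mad = median(xs.stream().map(x -> Math.abs(x - med))
                              .collect(Collectors.toList()));
        return xs.stream().filter(x -> Math.abs(x - med) <= 2 * mad)
                 .collect(Collectors.toList());
    }

    static int penalty(double resolveTime, List<Double> peerTimes) {
        List<Double> cleaned = removeOutliers(peerTimes);
        double mean = cleaned.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double sd = Math.sqrt(cleaned.stream()
                .mapToDouble(x -> (x - mean) * (x - mean)).average().orElse(0));
        if (resolveTime < mean - sd) return 1;   // much faster than peers: low penalty
        if (resolveTime > mean + sd) return 8;   // much slower than peers: high penalty
        return 4;                                // otherwise: mid-range penalty
    }

    public static void main(String[] args) {
        List<Double> peers = Arrays.asList(10.0, 12.0, 15.0, 14.0, 11.0, 90.0);
        System.out.println(penalty(30.0, peers)); // high penalty (90.0 is dropped as an outlier)
    }
}
```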
IV. RESULTS AND EVALUATION

To evaluate the effectiveness of our algorithm as a predictor of a student's programming performance, we performed a linear regression, using a student's Watwin score as the independent variable and their overall coursework mark as the dependent variable. We also considered the ability of Watwin as a classifier of student performance, based upon the undergraduate degree boundaries set at our university (first: \(\geq\) 70%, second: 50-69%, third: 40-49%, fail: <40%). An inspection of the scatter graph showed that a linear relationship existed between a student's Watwin scores and performance, and that there were no significant outliers present. Residual independence was confirmed by the Durbin-Watson statistic (2.11), and the normality of the residual distribution was confirmed by an inspection of a histogram and P-P plot. We found that a linear regression based upon a student's Watwin score could significantly predict performance, \(F(1, 43) = 31.77, p < .01\), explaining 42.49% of the variance in coursework marks (a strong effect [19]). The final RMSE of the model was low at 6.91%, and the final accuracy of the predictive classifier was 75%. Further validation of our model using leave-one-out cross-validation yielded a mean \(R^2\) of .4204 (\(SD=0.13\)), an RMSE of 7.09% (\(SD=1.12\)), and a classification accuracy of 75% (\(SD=1.30\)), indicating a good level of consistency with the full model.

However, it is important to consider how our algorithm performs, in terms of accuracy and explanatory power, over the duration of a course. Interestingly, previous work [2-9][13-16] used all available data to drive their predictive models. But predicting a student's failure at the end of a course leaves little time for an instructor intervention. Therefore, for each session in both datasets, we computed a regression and the classification accuracy using only the data which had been logged up to or during that session. We found that after 4 sessions, accuracy had risen into the 60s range, and after 5 sessions accuracy leveled off and stayed in the 70s range consistently over the duration of the course. However, measures of accuracy are reliant upon the underlying classification used. A more interesting analysis is to compare how the explanatory power of the regression changes over time. As can be seen from Fig. 2, by the end of the first term (week 9), a substantial percentage of the variance in coursework marks could be explained by our algorithm (30%), which rose to over 40% by the end of the second term. The average explanatory power of the algorithm was high, explaining 30.05% (\(SD=15.97\)) of the variance in performance. This confirms that our approach is data-driven, and performs less well when data is scarce.

| Data Sample Point | Watwin \(R^2\) | Watwin RMSE (%) | Watwin Acc. (%) | Jadud \(R^2\) | Jadud RMSE (%) | Jadud Acc. (%) |
|---|---|---|---|---|---|---|
| End of Course | .4249 | 6.91 | 75.56 | .1922 | 8.19 | 60.00 |
| Average | .3005 | 7.60 | 68.83 | .1407 | 8.44 | 55.82 |

Figure 2. Explanatory Power of Watwin and Jadud During The Course
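For completeness, the sketch below shows the kind of computation behind the reported regression statistics: an ordinary least-squares fit of coursework mark on Watwin score, with \(R^2\) and RMSE. It is illustrative only, and the input arrays are made-up toy values, not data from our sample.

```java
// Illustrative sketch (not the authors' code) of the evaluation step: fit a
// simple linear regression of coursework mark on Watwin score and report the
// explained variance (R^2) and RMSE. Input arrays are made-up toy values.
public class RegressionSketch {

    public static void main(String[] args) {
        double[] score = {0.15, 0.30, 0.45, 0.60, 0.75};   // Watwin scores (toy data)
        double[] mark  = {82.0, 70.0, 61.0, 52.0, 45.0};   // coursework marks (toy data)
        int n = score.length;

        double mx = mean(score), my = mean(mark);
        double sxy = 0, sxx = 0;
        for (int i = 0; i < n; i++) {
            sxy += (score[i] - mx) * (mark[i] - my);
            sxx += (score[i] - mx) * (score[i] - mx);
        }
        double slope = sxy / sxx;
        double intercept = my - slope * mx;

        double ssRes = 0, ssTot = 0;
        for (int i = 0; i < n; i++) {
            double predicted = intercept + slope * score[i];
            ssRes += (mark[i] - predicted) * (mark[i] - predicted);
            ssTot += (mark[i] - my) * (mark[i] - my);
        }
        double r2 = 1 - ssRes / ssTot;
        double rmse = Math.sqrt(ssRes / n);
        System.out.printf("mark = %.1f + %.1f * score, R^2 = %.3f, RMSE = %.2f%n",
                          intercept, slope, r2, rmse);
    }

    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }
}
```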
V. COMPARISON TO JADUD'S ERROR QUOTIENT

A. Addressing the Methodological Weaknesses

The major methodological flaw of Jadud's Error Quotient [13] concerns the method used to construct a set of pairings. In Jadud's work, a set of consecutive compilation pairings is created by using events in the order that they occurred during a session. As previously discussed, this approach is flawed, as it assumes either that students only work on a single source file, or that they work on multiple files in a linear manner. However, we have found that students do not work in this way, and switching between files is common. Using our dataset we built 45,001 compilation pairings using Jadud's method. We found that 13,490 pairings (29.98%) were based upon compilation events from two different files. This has serious implications for the validity of the approach. For instance, when examining pairings having event types in the form \{fail, success\}, we found that 2,138 (24.13%) were based upon events from two different files. Almost 25% of these cases indicated that a student had resolved an error, whereas in reality they had simply compiled a different file. We addressed this shortcoming by constructing pairings on a per-filename basis, allowing us to more accurately profile student behavior based upon the evolution of code across distinct files. In addition, because Jadud's compilation pairings are constructed on a per-session basis, it is possible for source code similarity to be calculated using the source of two distinct files, meaning that extra compilation pairings will be included in the filtered set. No measures are taken to check for superficial changes, so superficial changes made to source code can be incorrectly flagged as semantic changes.

The flaws of the preparation and filtering methods have implications for the validity of the scoring algorithm used. In Jadud's approach, pairings having event types in the form \{fail, success\} will score 0. But it is possible that a large percentage of these pairings are invalid (30% in our dataset). As a student's error quotient is averaged using the sum of every pair from a session, having a large number of invalid 0-scoring pairings can lower a student's EQ and inaccurately reflect their performance. Finally, there are the fundamental differences between the Watwin and Jadud approaches to consider. Whilst we found that a student's mean error resolve time strongly correlated with performance (\(r(45) = -.53\)), Jadud's approach does not incorporate any scoring of behavior based upon this dimension. It also fails to take the type of error into account, and scores all errors equally. Very recent research [17] and this paper have both shown that students find some types of error more difficult to resolve than others. Our uniqueness is to take these factors into account by relatively penalizing students based upon the amount of time they took to resolve an error, in comparison to a distribution of normal behavior defined by their peers.

### B. Evaluation of Performance

We applied Jadud's algorithm to our datasets. Consistent with previous findings [13-17], we found Jadud's EQ to be a weak predictor of performance: a student's error quotient could explain less than half as much of the variance in performance as their Watwin scores (Table 1). As can be seen from Fig. 2, whilst the explanatory power of the EQ improves over time, it eventually levels off and remains a consistently weak predictor, explaining only between 15% and 20% of the variance in performance over the final weeks of the course. This is also confirmed by the low standard deviations of the average \(R^2\) values of the EQ (Table 1).
In contrast, the explanatory power of the Watwin scores consistently increases over the duration of the course, and the score is a strong early predictor, explaining almost 30% of the variance in performance after 5-6 sessions of data have been collected. To explore the effect of the previously outlined methodological weaknesses of Jadud's algorithm, we ran Jadud's algorithm using pairings built with the Watwin algorithm. We found an increase in the explanatory power of Jadud's model (\(R^2 = .26\) (+.07)), suggesting that whilst an appropriate preparation technique can improve explanatory power, it alone is not enough to match the performance of our scoring approach, in which students are relatively penalized based upon their resolve times and programming behavior.

### VI. Conclusion and Future Work

In this paper we presented Watwin, a dynamic algorithm designed to predict student performance in a programming course. Unlike prior work [2-9], which mainly used indirect criteria to predict performance, our approach is based upon analyzing directly logged, quantitative data describing aspects of a student's ordinary programming behavior. This allows the prediction of performance to evolve over time, reflecting changes in the student's learning progress, without the need to use multiple tests that often yield inconsistent results. The originality of our algorithm is to incorporate a method where a student is relatively penalized based upon the amount of time they took to resolve an error, in comparison to a distribution of normal behavior defined by the resolve times of their peers. We addressed the methodological weaknesses of the closest related approach [13-14], and an evaluation has shown that our approach is a good predictor of performance, even early in a course. Future work will aim to further validate our approach using data gathered from an independent sample of students, to identify more characteristics of programming behavior that are indicative of weaker students through the use of multivariate statistical [20] and data mining techniques [21], and to apply our algorithm within an expert system to select and supply appropriate compiler feedback to students [11].

### References
{"Source-Url": "http://dro.dur.ac.uk/19225/1/19225.pdf?DDD10+d74ks0+dcs0lw+d700tmt=", "len_cl100k_base": 5550, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 19729, "total-output-tokens": 7160, "length": "2e12", "weborganizer": {"__label__adult": 0.0008664131164550781, "__label__art_design": 0.0011892318725585938, "__label__crime_law": 0.000942230224609375, "__label__education_jobs": 0.1951904296875, "__label__entertainment": 0.0002262592315673828, "__label__fashion_beauty": 0.0005044937133789062, "__label__finance_business": 0.0010776519775390625, "__label__food_dining": 0.0012664794921875, "__label__games": 0.0013341903686523438, "__label__hardware": 0.001972198486328125, "__label__health": 0.0016412734985351562, "__label__history": 0.0009760856628417968, "__label__home_hobbies": 0.0005183219909667969, "__label__industrial": 0.0015316009521484375, "__label__literature": 0.0014705657958984375, "__label__politics": 0.0008797645568847656, "__label__religion": 0.00110626220703125, "__label__science_tech": 0.0723876953125, "__label__social_life": 0.0006632804870605469, "__label__software": 0.00824737548828125, "__label__software_dev": 0.70263671875, "__label__sports_fitness": 0.000942707061767578, "__label__transportation": 0.0018053054809570312, "__label__travel": 0.0005373954772949219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29694, 0.03313]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29694, 0.52564]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29694, 0.92808]], "google_gemma-3-12b-it_contains_pii": [[0, 0, null], [0, 5539, false], [5539, 12017, null], [12017, 17331, null], [17331, 22898, null], [22898, 29694, null]], "google_gemma-3-12b-it_is_public_document": [[0, 0, null], [0, 5539, true], [5539, 12017, null], [12017, 17331, null], [17331, 22898, null], [22898, 29694, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29694, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29694, null]], "pdf_page_numbers": [[0, 0, 1], [0, 5539, 2], [5539, 12017, 3], [12017, 17331, 4], [17331, 22898, 5], [22898, 29694, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29694, 0.06173]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
ec7cbc49ea84a19239da29776b02fb6c3491d323
[REMOVED]
{"Source-Url": "http://svn.aksw.org/papers/2012/EKAW_SlideWiki/camera-ready/public.pdf", "len_cl100k_base": 6585, "olmocr-version": "0.1.53", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 33092, "total-output-tokens": 8121, "length": "2e12", "weborganizer": {"__label__adult": 0.0003693103790283203, "__label__art_design": 0.0016956329345703125, "__label__crime_law": 0.0006113052368164062, "__label__education_jobs": 0.0287322998046875, "__label__entertainment": 0.0002980232238769531, "__label__fashion_beauty": 0.0002846717834472656, "__label__finance_business": 0.0144500732421875, "__label__food_dining": 0.00051116943359375, "__label__games": 0.0008034706115722656, "__label__hardware": 0.0008907318115234375, "__label__health": 0.0005078315734863281, "__label__history": 0.0005784034729003906, "__label__home_hobbies": 0.00025081634521484375, "__label__industrial": 0.0006666183471679688, "__label__literature": 0.0006022453308105469, "__label__politics": 0.0005779266357421875, "__label__religion": 0.0004811286926269531, "__label__science_tech": 0.051239013671875, "__label__social_life": 0.0005249977111816406, "__label__software": 0.22119140625, "__label__software_dev": 0.67333984375, "__label__sports_fitness": 0.0002827644348144531, "__label__transportation": 0.0005331039428710938, "__label__travel": 0.0003981590270996094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38343, 0.02447]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38343, 0.29104]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38343, 0.8942]], "google_gemma-3-12b-it_contains_pii": [[0, 2616, false], [2616, 5815, null], [5815, 8945, null], [8945, 12183, null], [12183, 13587, null], [13587, 16336, null], [16336, 19176, null], [19176, 20855, null], [20855, 22028, null], [22028, 23943, null], [23943, 27139, null], [27139, 30069, null], [30069, 32808, null], [32808, 35442, null], [35442, 38343, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2616, true], [2616, 5815, null], [5815, 8945, null], [8945, 12183, null], [12183, 13587, null], [13587, 16336, null], [16336, 19176, null], [19176, 20855, null], [20855, 22028, null], [22028, 23943, null], [23943, 27139, null], [27139, 30069, null], [30069, 32808, null], [32808, 35442, null], [35442, 38343, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38343, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38343, null]], "pdf_page_numbers": [[0, 2616, 1], [2616, 5815, 2], [5815, 8945, 3], [8945, 12183, 4], [12183, 13587, 5], [13587, 16336, 6], [16336, 19176, 7], [19176, 20855, 8], [20855, 22028, 9], [22028, 23943, 10], [23943, 27139, 11], [27139, 30069, 
12], [30069, 32808, 13], [32808, 35442, 14], [35442, 38343, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38343, 0.1129]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
1320e3f9220e70f0a935d2097f15941cd637d59d
Case Study: Significant Schedule Delays in a Complex NDI-Based System

David Carney
June 1998

About this Series

Government policies on the acquisition of software-intensive systems have recently undergone a significant shift in emphasis toward the use of existing commercial products. Some Requests for Proposals (RFPs) now include a mandate concerning the amount of COTS (commercial off-the-shelf) products that must be included. This interest in COTS products is based on a number of factors, not least of which is the spiraling cost of software. Given the current state of shrinking budgets and growing need, it is obvious that appropriate use of commercially available products is one of the remedies that might enable the government to acquire needed capabilities in a cost-effective manner. In systems where the use of existing commercial components is both possible and feasible, it is no longer acceptable for the government to specify, build, and maintain a large array of comparable proprietary products. However, like any solution to any problem, there are drawbacks and benefits: significant tradeoffs exist when embracing a commercial basis for the government's software systems. Thus, the policies that favor COTS use must be implemented with an understanding of the complex set of impacts that stem from use of commercial products. Those implementing COTS products must also recognize the associated issues—system distribution, interface standards, legacy system reengineering, and so forth—with which a COTS-based approach must be integrated and balanced.

In response to this need, a set of monographs is being prepared that addresses the use of COTS software in government systems. Each monograph will focus on a particular topic, for example: the types of systems that will most benefit from a COTS approach; guidelines about the hard tradeoffs made when incorporating COTS products into systems; recommended processes and procedures for integrating multiple commercial products; upgrade strategies for multiple vendors' systems; recommendations about when not to use a commercial approach. Since these issues have an impact on a broad community in DoD and other government agencies, and range from high-level policy questions to detailed technical questions, we have chosen this modular approach; an individual monograph can be brief and focused, yet still provide sufficient detail to be valuable.

About this Monograph

This monograph describes a program that is currently building a large system for the DoD. This system makes extensive use of pre-existing components: re-engineered legacy systems, components that are government-furnished equipment (GFE), and both government- and commercial-off-the-shelf (GOTS and COTS) software. Thus, although the system is not precisely a "COTS-based system," the issues faced in building this system are parallel enough to those in COTS-based systems to make it a useful case study. The expected audience for this monograph is a general audience, and the major issues tend to be more programmatic and managerial rather than purely technical. The goal of the monograph is to focus on the complexities (which do include both technical and programmatic issues) that can hinder or dilute the expected benefits of using commercial components in complex government systems.

1 Introduction

The focus of this monograph series is on issues specific to commercial software. In this monograph, however, we make a slight change in focus.
We describe a program tasked with building a major air defense system for the DoD that makes extensive use of pre-existing components: re-engineered legacy systems, components that are government-furnished equipment (GFE), and both government- and commercial-off-the-shelf (GOTS and COTS) software. The commonly-used term for this diversion of components is “non-developmental items” (NDI).\(^1\) While the system examined in this monograph is not precisely the type addressed by the other papers in this series, the issues faced in building this system are parallel enough to those in COTS-based systems to make it a useful case study. In particular, the aim of this monograph is to examine, through this program, the types of factors that can influence acquisitions that use heterogeneous pre-existing components, whether COTS or otherwise. The project in question suffered a large schedule slip early in its lifetime. One of the goals of this study, therefore, is to examine those factors related to use of NDI that may have been contributors. However, the use of preexisting components is only one of several contributing factors. The project was also characterized by other features that reflect the government’s ongoing efforts to modernize its acquisition strategy through new and innovative procurement methods. Some of these features, as described and analyzed herein, figured heavily in the schedule slip. Others were less significant and are noted merely for background information. Common to all of these innovative features, however, is their newness to government and contractor personnel alike. This newness is significant, because while many such innovative concepts are demonstrably beneficial toward improving the acquisition process, their inverse aspect—lack of familiarity—was also a major cause of the project’s schedule delay. A prime focus of this monograph is on the novel factors, use of NDI being one of them, that characterize this project. The author of this monograph was part of a team that interviewed members of this project in preparation for this case study, and at the request of the project’s Executive Officer. Where it is necessary to refer to the project by name, “Project X” will be used. Note, however, that a reference to “Project X” is typically to the entire project: contractor, subcontractors, and government personnel alike. --- \(^1\) The Federal Acquisition Regulations (FAR) make a much more precise distinction between “COTS” and “NDI” than we intend by this statement. The remainder of this document is as follows. In Section 2, the project is briefly described. Section 3 presents the current status of the project. Section 4 is an analysis of the project. Section 5 contains a summary of lessons and recommendations. 2 General Description of the Project This section will describe project X from four vantage points: the expected functionality of the system, the make-up of the various organizations involved in the project, the project’s use of innovative processes and methods, and the sources of the pre-existing components that are expected to be used in the system. 2.1 Description of the System The system is a complex command-and-control system that performs several real-time air defense functions. It replaces and modernizes an existing fielded system. The high-level capabilities include surveillance, tracking, data communications, and weapons control; a set of applications will make use of these high-level capabilities to provide end-user functionality. 
There are also data link interfaces to numerous external hardware and software systems. Project X improves on the existing system in several ways. - The replacement system will duplicate the capability of the existing system, but will also include additional functionality. - The replacement system is intended to be open and extensible. - The replacement system will be written in an object-oriented language, replacing the obsolete language of the existing system. Expansion of system capability is one obvious benefit of this project. Another key benefit is the expected reduction in maintenance costs currently incurred due to the obsolete programming language. This project is also consistent with the general desire throughout DoD to use modern programming technology and to speed up the acquisition process. The system will be fielded first with an initial capability in the 1998-99 timeframe (eighteen months after project start), and the full capability will be fielded in 2000-01. The initial capability will be introduced through four builds, and there will be three more builds for the full capability. 2.2 Organizational Make-Up of the Project Project X has a single prime contractor, with several subcontractors located throughout North America. The government management of the program is shared by two distinct military organizations. In addition, a large number of personnel from a major government support contractor actively contribute to the management of the project. The project makes extensive use of integrated process teams (IPTs). Each IPT has a team leader drawn from the contractor (or one of the subcontractors) and also has a government focal point. --- By “current status,” we mean the current status as of the author’s interviews with the project staff; these took place toward the end of 1997. Most of the IPTs are technical, with an overall IPT for project management. The technical IPTs are partitioned primarily according to the major functional areas of the system (e.g., surveillance, data communication). In addition, one IPT focuses on the end-user applications, and one focuses on the new functional capability of the replacement system. 2.3 Innovative Processes and Methods in the Project Both by the contractor’s initial proposal and by subsequent government direction, the project is making use of several innovative elements; three are especially important. First, the contractor and government are jointly participating in IPTs, as described above. There is an express intention for the government to avoid a “heavy-handed” approach in this project, and the IPTs are a vehicle to achieve this. Second, the contractor is expressly making use of a spiral development process, as opposed to the traditional “waterfall” model. The expectation is that by streamlining the acquisition process (e.g., through reduction of documentation, using a “requirements tradespace”), costs will be reduced and time of development reduced. Third, and perhaps most significant, a very large portion of the system is expected to be drawn from other sources: these pre-existing components are intended to provide a major amount of functionality to the fielded system. 2.4 Sources of Pre-Existing Components There are several sources of these pre-existing components. The most critical one is an internal research and development effort by the contractor. This effort is producing a set of components that are expected to provide much of the end-user application software. 
They comprise roughly 88K lines of code, which will be about 35% of the overall system. Software provided as “government-furnished equipment” (GFE) is another major source of pre-existing components. One large item of GFE for project X comes from a separate defense project, which is currently working on a different system, but one whose functionality contains much of the enhanced functionality needed in project X. By government direction, this other project is providing its output for use by the project X contractor. In addition, project X will make use of the DoD’s Defense Information Infrastructure/Common Operation Environment (DII/COE), which is provided as GFE by the Defense Information Systems Agency (DISA). DII/COE is also a major element in the system being built by the other defense system. Yet another potential source of pre-existing components includes both government off-the-shelf (GOTS) software and various commercial off-the-shelf (COTS) components, several of which are being considered for possible inclusion in the system. These include infrastructure components (e.g., system administration software, network middleware) and a large number of components for data communications. A simple depiction of the system and its incorporation of pre-existing components is shown in Figure 1. 3 Current Status of the Project Based on a series of interviews with both contractor and government project members, the current status of the project can be described in five key areas: - the contractor’s internal R&D software program - the other GFE software (both from the existing DoD project and DII/COE) - the contractor’s software development and integration processes - the structure and workings of the IPTs - risks to the current project schedule The first four of these items are described in Sections 3.1 through 3.4. Note that these descriptions are intended to be primarily factual. Afterwards, in Section 4, we present an assessment of this information, together with analysis of other significant information and the assessment of risks to the current schedule. 3.1 Contractor’s Internal Research and Development Program The internal R&D program has been in existence for approximately two years. The program is essentially a reengineering one, since the goal is to abstract functionality from an existing air defense system; there is no significant addition of functionality. This R&D project is organizationally separate from project X, and is managed by a different manager, though both of these managers report to the same vice-president. The contractors’ original plan for this effort was to create a functional kernel useful for multiple air defense applications, and then to market this commercially to several potential users. To that end, the code is being translated (to Ada95), and an extensive repackaging is being carried out to bring the code into conformance with object-oriented principles and also into conformance with DII/COE. In the original plan, use of an automated translation tool was expected to facilitate the translation. Several things have modified the original R&D plan. First, the contractor judged that the translation tool did not provide acceptable results; the automatic translation has therefore been abandoned and the code is being reengineered by hand. Second, the expected customer base has not materialized, and project X is currently the only major consumer of this software. 
Third, the R&D effort fell behind its original schedule (i.e., the schedule that was assumed when project X was begun). This was partially due to the loss of automatic translation, but is at least equally due to staffing shortfalls. There are two major results of these modified circumstances. First, the reengineering effort is now refocused at producing what is essentially a library of reusable components. These still provide a generalized set of air defense capabilities, most of which are useful to project X. However, these capabilities are not integrated in any substantial way, but will exist as a set of loosely coupled Ada packages. Second, the R&D schedule has been brought into closer harmony with project X. Scheduled builds of the R&D system now focus their functional makeup on the needs of project X, and those functions that are not needed (by project X) have been deferred to the last part of the revised R&D schedule. In addition, the team performing the R&D project have agreed to assist the project X team in their integration activities. 3.2 Other GFE Software Another DoD project is expected to contribute its output as GFE to project X. This will be used in two ways. First, it will provide the enhanced functionality for project X’s system, and second, it will provide several infrastructure capabilities. This other system is being implemented by a different large defense contractor, which is acting in this instance as a subcontractor to project X. At the time of the interviews for this study, there had been little cooperation between these two programs. Expected delivery of components to project X was delayed by as much as six months, and there was no substantive co-location of personnel between the two. Some of the needed components (i.e., those that are part of project X’s infrastructure) have dependencies and schedule impacts on the rest of the project. Other components (i.e., those that provide the enhanced functionality) are less critical, and are architecturally more modular; they can be added to project X whenever they are received. To date there has been apparently little investigation by the contractor about the precise technical approach for integration of these components (which are generally “black boxes”) into the system. In addition, the DoD’s DII/COE is a component of project X, and is also a component of the other defense project. Both projects are therefore dependent on the ongoing DISA schedule for DII/COE releases. At the time of the interviews, project X had encountered severe delays in receiving all of the current DII/COE components from DISA. (By contrast, the other project, which also relies on DII/COE, had apparently gained a “fast-track” path to receive the DII/COE components in a more timely fashion.) 3.3 Software Development and Integration Process In addition to the extensive reuse of NDI, the development approach of project X makes use of other innovative development processes. One of these is the intentional minimization of many artifacts of an “old” acquisition process. Excess documentation is to be avoided, (e.g., the written proposals were asked to be limited), and where practical, use of oral reports and demonstrations is preferred. 
Project X also makes use of the “spiral” development approach, as opposed to such traditional approaches as “waterfall.” Since there is often some confusion about what the “spiral” model really is, it is useful to cite Barry Boehm, who did much of the early work on this model: Each cycle of the spiral [identifies] the alternative means of implementation...and the constraints imposed...The next step is to evaluate the alternatives...This may involve prototyping, simulation...or combinations of these...Once the risks are evaluated,...a plan for the next level of prototyping [is made]...and the development of a more detailed prototype. [Boehm 88] The project has apparently been successful in minimizing excess documentation; the interview team saw few indications that the contractor was being delayed by a requirement to produce large (and implicitly unnecessary) documents. However, in its use of a “spiral” development process, we observed some inconsistencies between intention and reality. In particular, there was little evidence of a willingness to leave some details undefined until later in the development process. Instead, an extensive requirements negotiation activity, one consuming much more time than planned (and one that would be typical in a waterfall process), had been conducted, and there was a perceived need on both sides to finalize all of the specific details, by defining all of the requirements in advance of any work on actually building the system. As of the interviews, most of the requirements were considered to be fully defined. (There was, however, some inconsistency in determining precisely how many requirements were still outstanding.) 3.4 Structure and Workings of the IPTs There are eight integrated product teams in project X: - program management - system engineering - installation and support - surveillance - data communications - services and integration (of the enhanced functionality) - displays - applications The number of personnel varies from IPT to IPT, but most of the teams have about two dozen members. The IPTs are each clearly divided into two subsets, which are quite distinct: one is government, the other contractor, and they reside in widely separated places. The government subteam is further divided into personnel from the different military organizations that are sponsoring the project. There are periodic meetings of the teams and the project as a whole. While these meetings avoid such nomenclature as “critical design review,” they are nonetheless the occasions when the contractor describes the current status of various project and technical issues to the government members, and when the government makes requests of the contractor that certain steps be taken or priorities reordered. 4 Analysis: An Assessment of the Project The individual assessments of the four key areas described above are presented below. However, many of these findings are interrelated, and we will therefore provide an additional section that synthesizes these findings in terms of the overall program. 4.1 Contractor’s Internal Research and Development Program We concluded that the R&D project is now proceeding reasonably well. While there had undoubtedly been problems in the past, the program appears to have improved its management, and appears to be meeting its current schedule. Interviews with the key software designers on the project indicated that the technical capability to produce the intended product is in place. Two areas of concern still exist. 
Staffing is a genuine problem, and though the contractor described a plan to hire more staff, recruitment of adequately experienced personnel will very likely be difficult. Another remaining issue is that in recasting the R&D effort from producing a unified subsystem to producing a set of loosely coupled packages, the difficulty and risk of integrating these packages has now been transferred to project X. One mitigation for this risk is that the contractor plans to have the R&D personnel assist with the integration effort, and presuming that this occurs, this risk should be minimized considerably. 4.2 Other GFE Software The dependence on GFE from the other defense program (see Section 3.2) has two potentially adverse impacts on project X. The first is on the schedule: since the other program’s schedule is not fully conformant with that of project X, its own project requirements will clearly take precedence over those of project X. This was evident in the six-month delay that had already occurred at the time of the interviews. This dependence has a critical and a non-critical side. In the case of the enhanced functionality that it provides to project X, the dependence is non-critical, since the components are expected to be relatively modular, and are not critical pieces on which others will depend. However, there is also some critical infrastructure capability, and for this, any schedule delay from the other contractor will almost certainly result in a schedule slip. A second issue concerns the presence of DII/COE in both systems, and the differences in delivery paths for both. (As noted above in Section 3.2, project X has had extreme difficulty in getting full delivery of DII/COE, while the other contractor has a “fast track” path for delivery.) From one perspective, it might be thought that the existence of two separate paths for DII/COE—one directly from DISA, the other as a subset of the components from the other contractor—suggests a benefit for project X (i.e., if the components are delayed through one path, they might be more quickly received through the other). However, it was indicated that there would be at least some “alteration” in the DII/COE components by the other contractor (but with no specificity of the extent of this “alteration”). This implies that the two separate paths might also imply two separate sets of components, with the attendant issues of versioning, configuration management, and so forth, all of which are potential sources of delay, and worse, potential sources of system defects. 4.3 Software Development and Integration Process The “spiral” model is not truly being used on this project. The most obvious manifestation of this is in the area of requirements. As described previously, if the project were truly following a spiral approach, at least one iteration of building the system would probably precede any attempt to finalize the requirements. Instead, project X appears to be relying on a traditional approach that demands full completion of requirements definition before implementation begins. This assessment is further corroborated by the overall project plan. There are now seven scheduled builds of the system. However, no applications software is scheduled to be included until the fourth build, and the major portion of the application capability is deferred until the final build. In a true spiral approach, it would be advantageous to incorporate end-user functionality as early as possible. 
There is another aspect of project X’s development process that was of concern. Whether it uses spiral, waterfall, or any other approach, any project that relies on pre-existing components must make allowances for the constraints they impose. For instance, project X is expecting to use CORBA,\(^3\) a complex COTS product, as part of the system infrastructure. It is therefore reasonable that expertise in using this mechanism be available. The project plans, however, do not appear to factor in such elements as purchasing commercial CORBA implementations, training, or experimentation. We felt that this inexperience has the potential to result in additional schedule delays.

4.4 Structure and Workings of the IPTs

The IPTs are not working well, and are the cause of a serious deficiency in the program. The IPTs are not truly integrated teams at all. That there is a “government-only” side and a “contractor-only” side is evidence of this: discussions with individuals from both sides sometimes produced markedly different views of the current status. There are other indicators of this separation: as noted above, there are still traditional project reviews (however they are named) in which the contractor describes to the government the current status of various technical areas. But if the teams really were integrated, such reviews would be moot, since all members of the team (i.e., the government members included) would be conversant in the ongoing work of the team.

This results at least partially from the lack of any real degree of co-location. This is not merely between the government and the contractor, but between the contractor, several subcontractors (particularly including the other large defense contractor), and the two different military sponsoring agencies. This wide distribution of personnel among numerous sites causes a serious lack of coherence. While there are laudable attempts to overcome this drawback (e.g., many individuals spend one- or two-week sessions at other sites), this is insufficient, and we concluded that this issue is a major cause of the current schedule slip.

---

\(^3\) The Common Object Request Broker Architecture, which is most simply (but also simplistically) described as an object-oriented messaging capability.

One glaring corroboration of the potential negative impact of distributed development emerged accidentally during a discussion with the contractor about the internal R&D project. The project X personnel described their difficulty in getting necessary technical information from the R&D personnel because they were in a different building on the contractor’s facility. But now that the R&D personnel had been relocated to project X’s building — a move of fifty yards from one building to another — the project X team was able to gain much more detailed technical information when it was needed. Given this (not unexpected) fact, it is hardly surprising that an aggregate distance of several thousand miles should prove to be at least as great a barrier to effective technical interchange between contractor, subcontractor, and government personnel.

4.5 Synthesis: How These Factors Interact

The previous sections have described individually the findings and assessments in four key areas; some of these factors clearly contributed to project X’s schedule slip.
However, the interview team also found that these factors were not independent, and that there was a deeper (and perhaps more important) cause of delay: many new and unfamiliar methods, procedures, and approaches were being introduced simultaneously in this program, and these all interacted. This produced unfortunate results in both the management and the technical areas.

Management

There is a lack of familiarity on all sides both about IPTs and about a spiral development approach. Thus, on one hand, the contractor, aiming to appease the government’s interest in innovation, has defined a set of nominally independent teams. But there is, apparently, no clear understanding of what this approach implies. For instance, if we replace the traditional project hierarchy by integrated teams, who then takes on the role of chief architect? Who is the person really in charge? Who knows the entire system? There was no one we could find who answered this description.

On the government side, the government participants in project X are very mindful of the charge to “avoid heavy-handedness,” and are therefore keeping hands off — but in the wrong way, by not really being truly integrated into the IPTs, and by not participating in critical decisions that need to be made on a daily basis. (It is ironic that in the one area where there really does need to be a “hands-off” approach from government, namely, letting requirements be loose and undefined until late in the program, there is far too much hands-on.) The result of all this is that, to all appearances, both sides had a remarkable posture of inactivity: the contractor was waiting for direction from the government, and the government was (mostly) keeping its “hands off.”

The use of pre-existing components has a different impact on the program management style. Cost and schedule are the general drivers for pursuing this approach, but there are drawbacks in each case that offset the potential benefits, and that must be factored into project planning. Thus, saving time in development by using NDI (e.g., from the other defense contractor) is offset by the fact that the other contractor’s delays become your delays. As another example, the savings that result from buying rather than building a complex component (e.g., CORBA) may be offset by the potentially large cost of gaining sufficient expertise in using that component.

Technical

Most of the technical problems faced by the project X contractor also result from an unforeseen aspect of using NDI, COTS, and pre-existing components in general. If such an approach is used on a project, there are specific technical implications that one must face. For instance, one might avoid the time and expense of building a complex component (i.e., by getting it as NDI), but then one must have the technical expertise to integrate a component whose internal workings are unknown, whose design assumptions are possibly at odds with the current system, and that may well exhibit undocumented behaviors. Thus, the need for expertise has not been removed, but is simply transferred to a different skill domain. In fact, there are many entirely new skills implied by using pre-existing components: evaluating components based on only partial knowledge; creating adequate testing procedures for “black-box” components; debugging systems with components whose source code is unavailable. A different issue is relevant for the contractor and the government side.
There is a great danger that unfamiliarity with the benefits and risks of an NDI approach will be translated into unrealistic expectations. It is simply wishful thinking to believe that a complex system like project X can be essentially and easily constructed from a set of heterogeneous pieces that were all created independently. To assume this is overly simplistic, and denies the widespread experience of the engineering community.

4.6 Risks to the Current Project Schedule

At the time of the interviews, a revised project schedule had been developed by the contractor that made a seven-month adjustment to the original schedule. We examined this schedule and concluded that there was a strong potential that this revised schedule would slip further. Some of the risks that could lead to this were essentially programmatic: lack of experience with IPTs and the spiral approach, and insufficient engagement of the government personnel. Others were essentially technical: the difficulty of integrating heterogeneous components; introduction of new technology (e.g., CORBA); and the applicability and suitability of the pre-existing components to the functional requirements of the system. The final conclusion was that the revised schedule had a large element of risk.

5 Summary: Lessons and Recommendations

As a corollary to this case study, we made some high-level recommendations to mitigate these risks to the schedule. The major recommendations were programmatic: there appeared to be a need to establish urgency at every level throughout the program, and there was a compelling need for both government and contractor to clarify responsibility and authority for decisions.

But aside from mitigations of individual problems and risks, the interview team found that most of the problems uncovered during the interviews were essentially symptoms. The underlying issue is that many of these innovative methods and styles, especially when used simultaneously, can bring about competing goals and constraints, and these must be faced and prioritized. If the schedule has priority (as is evidenced by the aggressive, 18-month schedule for project X to arrive at initial capability), then there must be a willingness to relax some requirements, to evolve to full requirements, and someone must make a command decision that staff will be transferred or relocated as necessary. If performance of the system has priority (as evidenced by the extensive requirements activity), then it would be entirely reasonable to initially establish the full understanding of the system. In fact, in such a case it would probably be much more reasonable to use a waterfall, rather than a spiral, development process. And finally, if cost has priority (as evidenced by the desire to use as much pre-existing material as possible), then there must be a willingness to accept the implications of this approach on the project schedule, on system performance, on the contractor’s management style, and on the government’s participation in the project.

References

[Boehm 88] Boehm, B. W. “A Spiral Model of Software Development and Enhancement.” IEEE Computer 21, 5 (May 1988): 61-72.

Feedback

Comments or suggestions about these monographs are welcome. We want this series to be responsive to the real needs of government personnel. To that end, comments concerning inclusion of other topics, the focus of the papers, or any other issues are of great value in continuing this series of monographs. Comments should be sent to:

Editor
SEI Monographs on COTS
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213
cots@sei.cmu.edu
{"Source-Url": "https://resources.sei.cmu.edu/asset_files/WhitePaper/1998_019_001_29696.pdf", "len_cl100k_base": 6628, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 25904, "total-output-tokens": 7211, "length": "2e12", "weborganizer": {"__label__adult": 0.00023615360260009768, "__label__art_design": 0.00022208690643310547, "__label__crime_law": 0.0003554821014404297, "__label__education_jobs": 0.0012989044189453125, "__label__entertainment": 4.100799560546875e-05, "__label__fashion_beauty": 0.00010597705841064452, "__label__finance_business": 0.0009088516235351562, "__label__food_dining": 0.00020956993103027344, "__label__games": 0.0004153251647949219, "__label__hardware": 0.0010118484497070312, "__label__health": 0.0002739429473876953, "__label__history": 0.00021457672119140625, "__label__home_hobbies": 6.496906280517578e-05, "__label__industrial": 0.0004606246948242187, "__label__literature": 0.00014257431030273438, "__label__politics": 0.0003120899200439453, "__label__religion": 0.00020003318786621096, "__label__science_tech": 0.0146484375, "__label__social_life": 6.115436553955078e-05, "__label__software": 0.00952911376953125, "__label__software_dev": 0.96826171875, "__label__sports_fitness": 0.0001583099365234375, "__label__transportation": 0.0005345344543457031, "__label__travel": 0.00013327598571777344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34685, 0.0146]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34685, 0.24823]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34685, 0.96451]], "google_gemma-3-12b-it_contains_pii": [[0, 95, false], [95, 3335, null], [3335, 6021, null], [6021, 8862, null], [8862, 11854, null], [11854, 13625, null], [13625, 17120, null], [17120, 19599, null], [19599, 23128, null], [23128, 26523, null], [26523, 30180, null], [30180, 33631, null], [33631, 34685, null]], "google_gemma-3-12b-it_is_public_document": [[0, 95, true], [95, 3335, null], [3335, 6021, null], [6021, 8862, null], [8862, 11854, null], [11854, 13625, null], [13625, 17120, null], [17120, 19599, null], [19599, 23128, null], [23128, 26523, null], [26523, 30180, null], [30180, 33631, null], [33631, 34685, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34685, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34685, null]], "pdf_page_numbers": [[0, 95, 1], [95, 3335, 2], [3335, 6021, 3], [6021, 8862, 4], [8862, 11854, 5], [11854, 13625, 6], [13625, 17120, 7], [17120, 19599, 8], [19599, 23128, 9], [23128, 26523, 10], [26523, 30180, 11], [30180, 33631, 12], [33631, 34685, 13]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34685, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
9ec169acd11d1b0f693f54f929a76abdfe01557e
Automating the process of choosing among highly correlated covariates for multivariable logistic regression

Michael C. Doherty, i3 Drug Safety, Waltham, MA
Xiaochun Zhang, i3 Drug Safety, Waltham, MA

Abstract

In observational studies, there can be significant differences between the characteristics of a treatment and a control group. To reduce the potential for confounding, controls are matched to members of the treatment group using propensity scores estimated by multivariable logistic regression analyses. Propensity score modeling can involve the inclusion of hundreds of covariates, including patient diagnoses, medical procedures, and medication exposures; highly correlated variables can complicate the multivariable logistic regression. This paper describes a statistical method to remove such variables automatically, with little input from the programmer. By utilizing the R statistic output from PROC CORR, followed by a MACRO to select which variables to keep and which ones to remove from the model, the programmer can save time in selecting the covariates to be used in the model statement in PROC REG.

Introduction

In observational studies, there can be substantial differences between the characteristics of a treatment and a control group. Propensity score matching is a multivariable technique that can achieve a high degree of balance between the comparison groups, producing groups that have very similar patterns of a large number of key variables, and thus, reducing the potential for confounding. Propensity score modeling can involve the inclusion of highly correlated variables which can complicate the multivariable logistic regression model. Instead of removing the highly correlated variables by hand, a statistical method has been developed to remove those variables automatically, with little input from the programmer.

Describe Example

In this example, the covariates under consideration are 0,1 flags indicating whether a subject had a particular diagnosis, procedure, or pharmacy dispensing during the baseline period. When two variables are highly correlated, it is often better to remove the covariate which occurs less frequently. The program described below selects which variable to retain in the regression model by choosing the factor with a larger absolute value of the R statistic and a higher prevalence in the study population.

Describe Method

The programmer’s task is twofold. First, identify the variables that are highly correlated and second, remove the offending covariates using an iterative procedure. To conceptualize the process, the table below shows the highly correlated covariates in descending order of their R statistic.

<table>
<thead>
<tr> <th>Covariate 1</th> <th>Covariate 2</th> <th>R Statistic</th> </tr>
</thead>
<tbody>
<tr> <td>VarA</td> <td>VarB</td> <td>0.967</td> </tr>
<tr> <td>VarC</td> <td>VarD</td> <td>0.945</td> </tr>
<tr> <td>VarB</td> <td>VarE</td> <td>0.931</td> </tr>
<tr> <td>VarA</td> <td>VarF</td> <td>0.903</td> </tr>
<tr> <td>...</td> <td>...</td> <td>...</td> </tr>
<tr> <td>VarZ</td> <td>VarB</td> <td>0.715</td> </tr>
</tbody>
</table>

In this example, let’s assume we wish to keep VarA as a covariate in the model. Since VarA and VarB are so highly correlated, we would like to remove VarB from consideration. We would also like to remove VarF, since it is also highly correlated with VarA. Note how VarB is highly correlated to VarE and VarZ. Since VarB is being removed from consideration, we do not necessarily wish to remove either VarE or VarZ, unless they are also highly correlated with VarA. After removing all variables that are highly correlated with VarA, we will then move on to VarC and find any variables that are highly correlated with it and remove them. The selection macro will loop through the list until it has worked its way through the entire list of variables and has removed the offending highly correlated variables.
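To make the two inputs to this selection rule concrete before automating it, they can be inspected directly in base SAS. The sketch below is illustrative only: the dataset name outcomes and the flags VarA-VarF are the hypothetical names from the table above, not variables produced by the macro.

```sas
/* Illustrative inspection of the two inputs used by the selection rule: */
/*   1) pairwise R statistics, 2) prevalence of each 0/1 flag.           */
proc corr data=outcomes noprint out=pair_r;
   var VarA VarB VarC VarD VarE VarF;
run;

proc freq data=outcomes;
   tables VarA VarB;   /* the more prevalent member of a correlated pair is retained */
run;
```

The macro described next simply automates this inspection across hundreds of flags and applies the keep/remove decision to every pair whose R statistic exceeds the chosen cut-off.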
The program creates a list of variables in order from the highest to lowest R value. However, we also need to choose which variable should be in the left hand column (Covariate 1) and which variable should be in the right hand column (Covariate 2). Since these covariates are indicators, the selection macro chooses those variables that occur more often in our sample to be in the left hand column, and thus, are more likely to be retained. For instance, in the example above, VarA is kept while VarB is removed because VarA occurs more often.

Now that we know what we want to do, we can begin setting up our program to do it. Our first step is to create some global macro variables. In this example, we set up a macro variable for the dataset containing the covariates (dt), the lower limit of the R statistic we are interested in (HighCorr), and the variables we wish to exclude from consideration (exvar), including any continuous variables (contv).

```sas
%let dt=outcomes;
%let HighCorr=0.7;
%let exvar=indv_id cohort i;
%let contv=scnddiabdxdt scnddysldxdt diabrxdt dyslrxdt diaboutdt dysloutdt;
```

Since we are assessing hundreds of covariates, we use PROC CONTENTS to create our variables list. As a precaution, we select only those variables where=(type = 1), i.e., numeric. Create a dataset (varlabel) with the variables and their labels for later use using PROC SORT.

```sas
proc contents data=in.&dt(drop=&exvar &contv) noprint
   out=dtvname(keep=name label nobs type where=(type=1));
proc sort data=dtvname(rename=(name=vname)) out=varlabel(keep=vname label);
   by vname;
run;
```

Next create macro variables for the total number of variables (totvar) and the total number of observations (totobs).

```sas
data _null_;
   set dtvname end=last;
   call symput('var'||left(put(_n_,4.)), trim(left(name)));
   if last then do;
      call symput('totvar', left(put(_n_,4.)));
      call symput('totobs', left(put(nobs,12.)));
   end;
run;
```

Now create a macro that will create the list of variables (varlst) and a list of string variables (Mlst) v1 … vN (where N is the total number of variables under consideration) that correspond to the variable names.

```sas
%macro varlst;
   %do i=1 %to &totvar;
      &&var&i
   %end;
%mend varlst;

%macro MLst;
   %do i=1 %to &totvar;
      v&i="&&var&i"
   %end;
%mend;
```

Use PROC CORR to obtain the R statistic for each pairing. The output should keep only the R statistic (i.e., drop the N, MEAN and STD observations from the output dataset).

```sas
proc corr noprint data=finaldt
   out=temp(where=(_type_ not in('MEAN','STD','N')));
   var %varlst;
run;
```

Now create a table of pairings and sort by the R statistic in descending order. Remove any pairs whose R value is below our lower limit (highcorr), as well as any pairing with an R value of one (e.g., the R statistic of VarA with itself equals one). Only three variables are kept in the output dataset: the variable ‘_name_’ is renamed to CorrVar1; CorrVar2 is set to the name of the variable that is highly correlated with CorrVar1; and CorrVal is the value of the R statistic for the pair.
```sas
data CorrTemp(keep=CorrVar1 CorrVar2 CorrVal);
   set temp(drop=_type_ rename=(_name_=CorrVar1));
   %mlst;
   array ChkCorr(&totvar) %varlst;
   array VN(&totvar) $ v1-v&totvar;
   do i=1 to dim(chkcorr);
      if (chkcorr(i) >= abs(&highcorr)) and (chkcorr(i) ne 1) then do;
         CorrVar2=vn(i);
         CorrVal=chkcorr(i);
         output;
      end;
   end;
run;

proc sort data=corrtemp;
   by descending corrval;
run;
```

At this point, we have duplicates in our dataset. The dataset CorrTemp looks like the following:

<table>
<thead>
<tr> <th>CorrVar1</th> <th>CorrVar2</th> <th>CorrVal</th> </tr>
</thead>
<tbody>
<tr> <td>VarA</td> <td>VarB</td> <td>R1</td> </tr>
<tr> <td>VarB</td> <td>VarA</td> <td>R1</td> </tr>
<tr> <td>VarC</td> <td>VarD</td> <td>R2</td> </tr>
<tr> <td>VarD</td> <td>VarC</td> <td>R2</td> </tr>
<tr> <td>...</td> <td>...</td> <td>...</td> </tr>
</tbody>
</table>

Note how we have the correlation between VarA and VarB in observation one and the correlation between VarB and VarA in observation two. The same holds for VarC and VarD in observations three and four. Remove those duplicates using the lag function. Note that pairs will have increasing odd values (1, 3, 5, etc.).

```sas
data ChkDup(keep=CorrVar1 CorrVar2 CorrVal pairs);
   set corrtemp;
   Name1=lag(corrvar1);
   Name2=lag(corrvar2);
   pairs=_n_;
   if name1=corrvar2 and name2=corrvar1 then delete;
run;
```

Create a frequency count for each indicator flag for use in the selection criteria. Transpose the resulting dataset for ease of merging.

```sas
proc summary data=finaldt;
   var %varlst;
   output out=sumdt(drop=_freq_ _type_) sum= ;
run;

proc transpose data=sumdt out=transdt;
run;
```

Split up the pairs and merge the counts onto the two resulting datasets (dt1 and dt2). We will use the ‘pairs’ variable to match our correlated pairings later on. Set the two datasets (dt1 and dt2) together to create our working dataset (one).

```sas
proc sql;
   create table dt1 as
   select a.corrvar1 as vname, a.corrval, b.col1 as Count, a.pairs
   from chkdup a, transdt b
   where upcase(a.corrvar1)=upcase(b._name_);
quit;

proc sql;
   create table dt2 as
   select a.corrvar2 as vname, a.corrval, b.col1 as Count, a.pairs
   from chkdup a, transdt b
   where upcase(a.corrvar2)=upcase(b._name_);
quit;

data one;
   set dt1 dt2;
run;
```

The macro ‘chkrcd’ selects which variables to keep and which ones to remove from consideration in the modeling process. Any highly correlated pairs are ordered by their frequency of occurrence, and the covariate which occurs more often is selected to be retained while the other is set to be deleted.
%macro chkrcd; %let i=1; %let N=1; proc datasets; delete basedt deletedt; run; %do %while (&N > 0); * Sort in descending order by R statistic, pairs and frequency; proc sort data=one; by descending corrval pairs descending descending count; run; * Set PreNM to previous name and PrePair to previous pair using lag function; data one; set one; PreNM=lag(vname); PrePair=lag(pairs); run; * During actual runs, you probably want to comment out print statements; proc print data=one; title "Dataset at Loop &i"; run; * CREATE TWO DATASETS: KEEP (kdt&i) AND DELETE(ddt&i); data kdt&i(keep=rcd rename=(rcd=vname)) ddt&i(keep=rcd rename=(rcd=vname)); set one; length str krcd drcd $200 kbase dbase $30; retain krcd drcd kbase dbase; * 1ST RECORD IS ALWAYS KEPT; * kvar will contain the list of variables to keep; if _n_=1 then do; kbase=vname; call symput('kvar', left(trim(upcase(vname)))); krcd=symget('kvar'); rcd=vname; output kdt&i; end; else if _n_ = 2 then do; * 2ND RECORD IS ALWAYS DELETED; * dvar will contain the list of variables to delete; dbase=vname; call symput('dvar', left(trim(upcase(vname)))); drcd=symget('dvar'); rcd=vname; output ddt&i; end; else do; * For _n_ ge 3, we need to make sure we are working on the lines where pair = prepair. Then we search for the variable we are keeping (krcd). If vname matches up with krcd, then set rcd to prenm. Or if prenm matches with krcd then set rcd to vname; if (indexw(krcd,upcase(vname)) > 0 or indexw(krcd,upcase(prenm)) > 0) and pairs=prepair then do; * Check to make sure the variable we are about to put into the deleted variable list should not be kept; if indexw(krcd,upcase(prenm)) > 0 and vname ne kbase then rcd=vname; if indexw(krcd,upcase(vname)) > 0 and prenm ne kbase then rcd=prenm; * ADD NEW DELETING VARIABLE INTO MACRO VARIABLE LIST; str=symget('dvar')||''||left(trim(upcase(rcd))); call symput('dvar', left(trim(str))); * REMOVE THIS VARIABLE FROM KEEPING VARIABLE LIST; str=symget('kvar'); str=tranwrd(str,upcase(rcd),''); call symput('kvar', left(trim(str))); drcd=symget('dvar'); * Output to deleted dataset (ddt&i) if rcd ne kbase; if rcd ne kbase then do; output ddt&i; end; end; * Look for drcd (deleted variable from observation 2); else if (indexw(drcd,upcase(prenm)) > 0 or indexw(drcd,upcase(vname)) > 0) and pairs=prepair then do; if indexw(drcd,upcase(prenm)) > 0 and indexw(krcd,upcase(vname))=0 then rcd=vname; else if indexw(drcd,upcase(vname)) > 0 and indexw(krcd,upcase(prenm))=0 then rcd=prenm; str=symget('dvar'); * ADD NEW KEEPING VARIABLE TO LIST IF THIS VARIABLE IS NOT IN THE DELETING LIST; if indexw(str,upcase(rcd))=0 then do; str=symget('kvar')||''||left(trim(upcase(rcd))); call symput('kvar', left(trim(str))); krcd=symget('kvar'); if rcd ne dbase then do; output kdt&i; * Sort dataset one by vname and make sure there are no duplicates or blanks in keeper dataset (kdt&i); proc sort data=one; by vname; proc sort nodupkey data=kdt&i; where vname ne ''; by vname; run; * You may want to comment out print statements after debugging; proc print data=kdt&i; title "Keep Records from Loop &i"; run; * Make sure there are no duplicates or blanks in deleted dataset (ddt&i); proc sort nodupkey data=ddt&i; where vname ne ''; by vname; run; * You may want to comment out print statements after debugging; proc print data=ddt&i; title "Delete Records from Loop &i"; run; * UPDATE THE CHECKING FILE, REMOVE THE RECORDS IN KEEP/DELETE FILES; * CREATE A SELECTED VARIABLE FILE; data cdt one; merge kdt&i(in=in1) ddt&i(in=in2) one(in=in3); by vname; * Put into 
cdt if variable is a keeper, but not in deleted dataset.; if in1=1 and in2=0 then output cdt; * If no determination has been made (i.e. not in keeper or deleted dataset, output to dataset one; else if in1=0 and in2=0 and in3=1 then output one; run; * Again, make sure there are no duplicates in dataset to keep; proc sort data=cdt(keep=vname) nodupkey; by vname; run; * Append list of variables to keep onto dataset called basedt; proc append base=basedt data=cdt; run; * Append list of variables to delete onto dataset called deletedt; proc append base=deletedt data = ddt; run; * Make sure dataset one is not empty. If it is, then stop loop; data _null_; call symput('N', left(put(num,8.))); if not 0 then set one nobs=num; stop; run; * increment counter; %let i=%eval(&i+1); %end; %mend; %chkrcd; The ‘basedt’ dataset contains our list of variables to keep. Make sure to remove any duplicates and reattach the labels to complete the process. proc sort nodupkey data=basedt; by vname; run; data sel_var_vs; merge basedt(in=in1) varlabel; by vname; if in1; proc print data=sel_var_vs; title "***** High (&highcorr) Correlation Variables Listing *****"; run; The dataset ‘deletedt’ contains the list of variables we are removing. Remove any duplicates and reattach labels to complete the process. proc sort nodupkey data=deletedt; by vname; run; data del_var_vs; merge deletedt(in=in1) varlabel; by vname; if in1; proc print data=del_var_vs; title "***** High (&highcorr) Correlation Variables Deleted *****"; run; Conclusion Propensity score modeling can use hundreds of covariates; however, not all of them are necessary. By eliminating the highly correlated variables from the logistic regression model, the process can run more efficiently. Selecting which variable to include between the highly correlated pairs is done by listing the correlated covariates by descending order of their R statistic and then choosing the variable which occurs more frequently. Often the programmer will want to examine the pairs that are retained and excluded, and thus, all covariates are listed in the output. The macro presented can be used to automate the process of removing the highly correlated covariates from the dataset before running the model and may increase efficiency.
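As a closing illustration of where the retained list ends up, the variables kept in basedt are the natural candidates for the propensity score model itself. This step is not part of the macro above, and the names used are the hypothetical ones from earlier (cohort as the treatment flag, VarA and VarC as two retained covariates); substitute the actual contents of basedt in practice.

```sas
/* Hypothetical downstream use of the pruned covariate list:        */
/* fit the propensity score model with only the retained 0/1 flags. */
proc logistic data=finaldt descending;
   model cohort = VarA VarC;            /* replace with the variables kept in basedt */
   output out=ps_scored pred=pscore;    /* estimated propensity score for each subject */
run;
```

Whatever modeling procedure is ultimately used, only the de-duplicated covariate list needs to be passed to its MODEL statement.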
{"Source-Url": "https://www.lexjansen.com/pharmasug/2008/tt/TT07.pdf", "len_cl100k_base": 4357, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 17387, "total-output-tokens": 4738, "length": "2e12", "weborganizer": {"__label__adult": 0.00045561790466308594, "__label__art_design": 0.0004451274871826172, "__label__crime_law": 0.0005898475646972656, "__label__education_jobs": 0.004222869873046875, "__label__entertainment": 0.00010848045349121094, "__label__fashion_beauty": 0.0002536773681640625, "__label__finance_business": 0.0008935928344726562, "__label__food_dining": 0.0006732940673828125, "__label__games": 0.001132965087890625, "__label__hardware": 0.0015821456909179688, "__label__health": 0.005733489990234375, "__label__history": 0.00033545494079589844, "__label__home_hobbies": 0.0002932548522949219, "__label__industrial": 0.0012731552124023438, "__label__literature": 0.00027823448181152344, "__label__politics": 0.00044417381286621094, "__label__religion": 0.0005769729614257812, "__label__science_tech": 0.31591796875, "__label__social_life": 0.00022864341735839844, "__label__software": 0.086669921875, "__label__software_dev": 0.576171875, "__label__sports_fitness": 0.0007500648498535156, "__label__transportation": 0.0005793571472167969, "__label__travel": 0.0003447532653808594}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 15541, 0.01199]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 15541, 0.46363]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 15541, 0.80617]], "google_gemma-3-12b-it_contains_pii": [[0, 2689, false], [2689, 5280, null], [5280, 7142, null], [7142, 8798, null], [8798, 10316, null], [10316, 12269, null], [12269, 13861, null], [13861, 15541, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2689, true], [2689, 5280, null], [5280, 7142, null], [7142, 8798, null], [8798, 10316, null], [10316, 12269, null], [12269, 13861, null], [13861, 15541, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 15541, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 15541, null]], "pdf_page_numbers": [[0, 2689, 1], [2689, 5280, 2], [5280, 7142, 3], [7142, 8798, 4], [8798, 10316, 5], [10316, 12269, 6], [12269, 13861, 7], [13861, 15541, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 15541, 0.05338]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
d91b73b509c210b1418d46e25aacb2210f1c5980
Introduction This release note provides important information about the STM32 PMSM FOC SDK motor control software design kit composed by PMSM FOC FW library and ST MC Workbench (reference: STSW-STM32100). This release note is updated periodically in order to keep you abreast of evolutions of the package and any problems or limitations found. Check the ST microcontroller support website at [http://www.st.com](http://www.st.com) to ensure that this is the latest version of this release note. Customer support For more information or help concerning STM32 PMSM FOC SDK, please contact the nearest sales office. For a complete list of ST offices and distributors, please refer to [http://www.st.com](http://www.st.com). Software updates You can download software updates and all the latest documentation from the ST microcontroller support site at [http://www.st.com](http://www.st.com). Microcontrollers supported - STM32F030C6/C8/K6/R8 - STM32F100, STM32F103 - STM32F2 Series - STM32F302xB/C, STM32F303xB/C - STM32F4 Series. Contents 1 Description ............................................................... 3 2 ST Motor Control Workbench ........................................ 4 2.1 What's new .......................................................... 4 2.2 Program features .................................................. 4 2.3 Supported ST MC Platform ........................................ 4 2.4 Supported STM32 microcontrollers ............................... 4 2.5 Release information .............................................. 5 2.5.1 About release 4.0.0 ........................................ 5 2.5.2 About major previous releases .............................. 5 Release 3.0.4 ................................................. 5 Release 3.0.2 ................................................. 5 Release 3.0.1 ................................................. 5 Release 3.0.0 ................................................. 6 2.6 Host PC system requirements .................................... 6 3 STM32 PMSM FOC FW library ......................................... 7 3.1 New features ...................................................... 7 Version v4.0 ...................................................... 7 Version v3.4 ...................................................... 7 Version v3.3 ...................................................... 7 3.2 Known problems and limitations ................................ 7 3.2.1 Know problems/limitations in v3.4 fixed in v4.0 ........ 7 3.2.2 Known limitations in v4.0 ................................ 8 3.2.3 Known problems/limitations in v3.3 fixed in v3.4 ........ 8 3.2.4 Known limitations in v3.4 ................................ 9 3.2.5 Known problems/limitations in v3.2 fixed in v3.3 ........ 9 3.2.6 Known limitations in v3.3 ................................ 9 4 Revision history ...................................................... 10 1 Description ST’s STM32 offers the performance of the industry-standard ARM™ Cortex™-M core at the service of vector (or field-oriented) control (FOC) algorithms, widely used in high-performance drives. The STM32 PMSM FOC SDK (STSW-STM32100), which includes the PMSM FOC FW library and ST MC Workbench, allows the user to evaluate the STM32 performance in applications driving single or dual Field Oriented Control of 3-phase Permanent Magnet motors (PMSM, BLDC). ST MC Workbench is a PC software that reduces the design effort and time in the STM32 PMSM FOC firmware library configuration. 
The user, through a graphical user interface (GUI), generates all parameter header files which configure the library according to the application needs, and can monitor and change some variables of the algorithm in real time.

Figure 1. Work process and monitor of ST MC Workbench

2 ST Motor Control Workbench

2.1 What's new

- Added high frequency injection sensorless (HFI).
- Added start-up ramp plot.
- Added AC Input info form.

2.2 Program features

- Configuration of all the parameters required by ‘STM32 PMSM FOC firmware library’ supported
- Generation of all the header files (.h) required by ‘STM32 PMSM FOC firmware library’ supported
- Support for single and dual motor control
- Online communication with motor control application

2.3 Supported ST MC Platform

- STM32 PMSM FOC SDK v4.0
- STM32 PMSM FOC SDK v3.4 (support for STM32F03x, STM32F050x, STM32F051x, STM32F3xx and previous microcontrollers)
- STM32 PMSM FOC SDK v3.3 (support for STM32F05xx)
- STM32 PMSM FOC SDK v3.2 (support for STM32F2xx and STM32F4xx)
- STM32 PMSM FOC SDK v3.0.1 (for online communication)
- STM32 PMSM FOC SDK v3.0

2.4 Supported STM32 microcontrollers

- STM32F030C6/C8/K6/R8
- STM32F100 Value line
- STM32F103
- STM32F2 Series
- STM32F302xB/xC, STM32F303xB/xC
- STM32F4 Series

2.5 Release information

2.5.1 About release 4.0.0

- Drive Management
  - Added HFI sensorless management
  - Improved Start-up parameters (start-up ramp plotting).
  - Added AC Input selection
- Power Stage
  - Improved Driving Signals Polarity: added the possibility to force the same values for all U, V, W drivers simultaneously.
- Program features:
  - Added Compact/Extend mode: showing/hiding advanced features.
- Minor bug fixes

2.5.2 About major previous releases

Release 3.0.4
- Added unit measure for Torque&Flux
- Cut-off frequency
- Changed behavior of bus voltage sensing inverting input
- Improved conversion from STM32F05xx of WB 2.1 to STM32F051x of WB 3.0
- Removed read-only for OPAMP inverting pins if shared resource is enabled and OPAMP Gain is External in dual motor configuration
- Removed read-only for COMP output pins
- Improved start-up time
- Minor bug fixes

Release 3.0.2
- Minor bug fixes

Release 3.0.1
- Added Alternate Function information for comparators
- Pin management in change target library
- External Protection and No protection selection available also for non-STM32F3x
- Added managing of the ADC sampling for phase current feedback (clock frequency, minimum, divider)
- Allowing internal/external or vice-versa gain type in shared resource configuration
- Minor bug fixes

Release 3.0.0
- Support for STM32F3x for single and dual
- Extended support for STM32F0xx (STM32F030x, STM32F050x, STM32F051x)
- Example projects list
- Recent project list
- Export for logs
- Support for embedded OPAMP
- Support for embedded COMP
- PFC support
- Extended DAC functionality
- Added Amplification Network Gain form and export in HTML format
- Minor bug fixes

2.6 Host PC system requirements

- PC running on Microsoft Windows® operating systems
- Required space: ~30 MB
- Minimum screen resolution: 1024x768
- RS-232 serial communication port or, equivalently, a USB to RS-232 converter (required for online communication)

Note: The software requires Microsoft .NET Framework 2.0 SP2 or higher.
Note: System Administration Rights are required for Setup.
3 STM32 PMSM FOC FW library 3.1 New features Version v4.0 - Sensorless (High Frequency Injection HFI plus B-EMF State Observer, PLL rotor speed/angle computation from B-EMF, only for STM32F3x or STM32F4xx) - Support to ARM\textsuperscript{TM} Keil\textsuperscript{®} \mu\textsuperscript{Vision}\textsuperscript{®} and IAR Embedded Workbench\textsuperscript{®} IDEs - Simplify the MC SDK with a self-explaining approach - Ready to use application examples - Fast unidirectional serial communication - Simplified user LCD interface Version v3.4 - Support of the STM32F3x microcontroller families has been added. - Support of STM32F3’s enhanced set of peripherals including comparators, PGAs, DACs, high-speed ADCs and CCMRAM. - Support of the STM32F0x family enlarged, now comprising STM32F030x, STM32F050x, STM32F051x. - HardFault handler now used for error signaling and application safety. - The STM3210B-EVAL new LCD MB895/S (HX8347D) is now supported by LCD FW. - Support of L6230’s enable pins. - Default variables can be configured and displayed using the DAC functionality. - In dual motor control mode, the same bus voltage measurement can be used for both motor controls. Version v3.3 - Support of the STM32F05xx microcontroller families has been added. - Support of inrush current limiter has been added. - General purpose ramps generation class has been added. 3.2 Known problems and limitations 3.2.1 Known problems/limitations in v3.4 fixed in v4.0 - Current reading error for STM32F3x due wrong computation of sampling time in NS done in the parameter conversion. - Wrong encoder alignment is performed - when set alignment angle is different from 90 degrees or - when motor poles pairs are different from 2. - - If HALL sensor is used then an error (division by 0) occurs when the timer pre-scaler becomes 0. 3.2.2 Known limitations in v4.0 - STEVAL-IHM022V1 Dual Drive - Free RTOS - Hard fault driving both motors. - Sampling time is not taken in consideration to determine if sampling in the middle of period is possible (affects STM32F3). - ENCODER speed sensor function: one Input Capture pin of the selected timer must be grounded, according to the remapping in use and this rationale: IC4 in case of TIM2, IC4 in case of TIM3, IC3 in case of TIM4, IC4 in case of TIM5. - Hall / Encoder modules: GPIO configuration is not locked. - False spike in the DAC variable related to measured current during calibration phase. - LCD User Interface: when the “start both motor” button is pushed, a new value of target speed settled before is not taken into account. - The linker file (IAR) for MC project (STM32F2 and STM32F4 microcontrollers) doesn’t take in account the Flash and RAM memory reserved for the LCD Project. - DAC functionality: TIM3 remap not configurable when working with STM32F103 with Flash memory density lower than or equal to 128kBytes. - Serial Com User Interface: MCI_StartMotor and MCI_StopMotor, return value not checked. 3.2.3 Known problems/limitations in v3.3 fixed in v3.4 - An issue in the state observer (speed and position sensor) has been solved; the malfunctioning was introduced in v3.3, when optimizing execution speed on STM32F0. - In Timebase.c, counters declared as 8bit now corrected as 16bit variables. - RampMngr Class compliancy with MISRA rule. - InrushCurrentLimiter Class compliancy with MISRA rules fixed. - In 3shunt and ICS current reading class, function SwitchOnPWM now waits the timer update before activating driving signals. 
- In 3shunt current reading class, phase C calibration now done correctly using 1 ADC peripheral only. - The linker file (IAR) for MC project (F2 and F4) didn’t take into account the Flash and RAM memory reserved for LCD Project. - Added the support for the LCD marked MB895/S C-03 of STM3210B-EVAL and STEVAL-IHM022v1. - Added the support for the LCD marked MB895/P C-03 of STM3210E-EVAL. 3.2.4 Known limitations in v3.4 - ENCODER speed sensor function: one Input Capture pin of the selected timer must be grounded, according to the remapping in use and this rationale: IC4 in case of TIM2, IC4 in case of TIM3, IC3 in case of TIM4, IC4 in case of TIM5. - Hall / Encoder modules: GPIO configuration is not locked. - False spike in the DAC variable related to measured current during calibration phase. - LCD User Interface: when the “start both motor” button is pushed, a new value of target speed settled before is not taken into account. - The linker file (IAR) for MC project (STM32F2 and STM32F4 microcontrollers) doesn’t take in account the Flash and RAM memory reserved for the LCD Project. - DAC functionality: TIM3 remap not configurable when working with STM32F103MD. - Serial Com User Interface: MCI_StartMotor and MCI_StopMotor, return value not checked. 3.2.5 Known problems/limitations in v3.2 fixed in v3.3 - The issue that stuck the LCD firmware when the DAC function was not enabled has been fixed. - The generation of wrong PWM frequencies in the STM32F1xx configuration for TIM_CLOCK_DIVIDER not equal to one has been fixed. - The generation of wrong dead times in the STM32F1xx configuration for TIM_CLOCK_DIVIDER not equal to one has been fixed. - The integral term of PID objects with MAX integral term instead of zero has been fixed. - The "Initial Electrical Angle" settled via the STMC workbench did not have any effect in the previous version of the library. The issue has been fixed. - Cross checks and backward compatibilities with the workbench generated files has been fixed in the LCD project. - The current measurement for ICS sensors (STM32F103HD, STM32F2xx or STM32F4xx) using the wrong offset has been fixed. - The switch context algorithm of all dual motor current sensing classes (STM32F103HD, STM32F2xx or STM32F4xx) has been fixed. 3.2.6 Known limitations in v3.3 - ENCODER speed sensor function: one Input Capture pin of the selected timer must be grounded, according to the remapping in use and this rationale: IC4 in case of TIM2, IC4 in case of TIM3, IC3 in case of TIM4, IC4 in case of TIM5. 4 Revision history Table 1. Document revision history <table> <thead> <tr> <th>Date</th> <th>Revision</th> <th>Changes</th> </tr> </thead> <tbody> <tr> <td>14-Nov-2012</td> <td>1</td> <td>Initial release.</td> </tr> <tr> <td>18-Apr-2014</td> <td>2</td> <td>Added Version 3.4.</td> </tr> <tr> <td>4-Jun-2014</td> <td>3</td> <td>Updated <em>Introduction, Section 1: Description</em> and <em>Section 3.2.1</em>.</td> </tr> <tr> <td></td> <td></td> <td>Added ST MC Workbench release notes in <em>Section 2</em>.</td> </tr> <tr> <td></td> <td></td> <td>Added Version 4.0 of PMSM FOC FW library in <em>Section 3.2.1</em> and</td> </tr> <tr> <td></td> <td></td> <td><em>Section 3.2.2</em>.</td> </tr> </tbody> </table> Please Read Carefully: Information in this document is provided solely in connection with ST products. 
{"Source-Url": "http://www.st.com/st-web-ui/static/active/en/resource/technical/document/release_note/DM00068242.pdf", "len_cl100k_base": 4201, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 21287, "total-output-tokens": 4831, "length": "2e12", "weborganizer": {"__label__adult": 0.0008988380432128906, "__label__art_design": 0.0007371902465820312, "__label__crime_law": 0.0005245208740234375, "__label__education_jobs": 0.0002846717834472656, "__label__entertainment": 0.00015592575073242188, "__label__fashion_beauty": 0.0004210472106933594, "__label__finance_business": 0.0003952980041503906, "__label__food_dining": 0.0005316734313964844, "__label__games": 0.00173187255859375, "__label__hardware": 0.2059326171875, "__label__health": 0.000492095947265625, "__label__history": 0.0002620220184326172, "__label__home_hobbies": 0.000457763671875, "__label__industrial": 0.00394439697265625, "__label__literature": 0.00017654895782470703, "__label__politics": 0.0002682209014892578, "__label__religion": 0.0008230209350585938, "__label__science_tech": 0.039154052734375, "__label__social_life": 5.7578086853027344e-05, "__label__software": 0.0594482421875, "__label__software_dev": 0.6806640625, "__label__sports_fitness": 0.0008568763732910156, "__label__transportation": 0.0017404556274414062, "__label__travel": 0.0002579689025878906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17195, 0.06453]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17195, 0.043]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17195, 0.83231]], "google_gemma-3-12b-it_contains_pii": [[0, 1084, false], [1084, 3073, null], [3073, 3961, null], [3961, 5011, null], [5011, 6334, null], [6334, 7109, null], [7109, 8942, null], [8942, 11002, null], [11002, 13153, null], [13153, 14007, null], [14007, 17195, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1084, true], [1084, 3073, null], [3073, 3961, null], [3961, 5011, null], [5011, 6334, null], [6334, 7109, null], [7109, 8942, null], [8942, 11002, null], [11002, 13153, null], [13153, 14007, null], [14007, 17195, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17195, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17195, null]], "pdf_page_numbers": [[0, 1084, 1], [1084, 3073, 2], [3073, 3961, 3], [3961, 5011, 4], [5011, 6334, 5], [6334, 7109, 6], [7109, 8942, 7], [8942, 11002, 8], [11002, 13153, 9], [13153, 14007, 10], [14007, 17195, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17195, 0.03791]]}
olmocr_science_pdfs
2024-12-10
2024-12-10
671cf3a6eeac1491ac761741bcaa5e911f3da82b
[REMOVED]
{"len_cl100k_base": 7269, "olmocr-version": "0.1.53", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 23685, "total-output-tokens": 7779, "length": "2e12", "weborganizer": {"__label__adult": 0.00027680397033691406, "__label__art_design": 0.00028586387634277344, "__label__crime_law": 0.00027561187744140625, "__label__education_jobs": 0.0004496574401855469, "__label__entertainment": 3.933906555175781e-05, "__label__fashion_beauty": 0.0001112222671508789, "__label__finance_business": 0.00020802021026611328, "__label__food_dining": 0.00024700164794921875, "__label__games": 0.0003936290740966797, "__label__hardware": 0.0004982948303222656, "__label__health": 0.00025391578674316406, "__label__history": 0.000125885009765625, "__label__home_hobbies": 4.2557716369628906e-05, "__label__industrial": 0.0002033710479736328, "__label__literature": 0.0002008676528930664, "__label__politics": 0.00016891956329345703, "__label__religion": 0.000286102294921875, "__label__science_tech": 0.003801345825195313, "__label__social_life": 4.8100948333740234e-05, "__label__software": 0.005329132080078125, "__label__software_dev": 0.986328125, "__label__sports_fitness": 0.00016510486602783203, "__label__transportation": 0.00026488304138183594, "__label__travel": 0.00012195110321044922}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 39655, 0.03388]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 39655, 0.51978]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 39655, 0.95004]], "google_gemma-3-12b-it_contains_pii": [[0, 5031, false], [5031, 12030, null], [12030, 19066, null], [19066, 26073, null], [26073, 32718, null], [32718, 39511, null], [39511, 39655, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5031, true], [5031, 12030, null], [12030, 19066, null], [19066, 26073, null], [26073, 32718, null], [32718, 39511, null], [39511, 39655, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39655, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39655, null]], "pdf_page_numbers": [[0, 5031, 1], [5031, 12030, 2], [12030, 19066, 3], [19066, 26073, 4], [26073, 32718, 5], [32718, 39511, 6], [39511, 39655, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39655, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
1e2c817cd1cd3cf0871b2dfaa2bfab772769e198
Variant-Based Decidable Satisfiability in Initial Algebras with Predicates Raúl Gutiérrez\textsuperscript{1} José Meseguer\textsuperscript{2} \textsuperscript{1}DSIC, Universitat Politècnica de València, Spain \textsuperscript{2}University of Illinois at Urbana-Champaign, Illinois, USA \textbf{Namur (Belgium), October 11, 2017} Motivation 1. Some of the most recent advances in software verification are due to the systematic use of decision procedures in model checkers and theorem provers. 2. For a system specified by theory $T$, SMT solving can partially automate verification by using procedures for decidable subtheories $T_i$. 3. Limitation of SMT tools: lack of extensibility of decidable fragment. 4. Users can extend a specification’s decidable fragment if theory-generic decision procedures are added. 5. Variant-based satisfiability (VS): a decision procedure for initial algebras $T_{\Sigma/E\cup B}$ generic on theories $(\Sigma, E \cup B)$ under quite general conditions. 6. Limitation: current VS algorithm applies well to user-definable data structures, but cannot handle user-definable predicates. Extend variant-based satisfiability to initial algebras with user-definable predicates under fairly general conditions using two key ideas: 1. characterizing the cases when $p(u_1, \ldots, u_n) \neq tt$ by means of constrained patterns; and 2. eliminating all occurrences of disequalities of the form $p(u_1, \ldots, u_n) \neq tt$ in a quantifier-free (QF) formula by means of such patterns. Outline 1 Motivation 2 Variant Satisfiability 3 Predicates 4 OS-compactness 5 Negative Patterns 6 Inductive Satisfiability Decision Procedure 7 Implementation 8 Conclusions Example: Sets of Natural Numbers \((\Sigma, E \cup B)\) \[ \text{fmod ACU-NAT is} \] \[ \text{sort Natural .} \] \[ \text{op 0 : } \rightarrow \text{Natural [ctor] .} \] \[ \text{op 1 : } \rightarrow \text{Natural [ctor] .} \] \[ \text{op \_\_+_ : Natural Natural } \rightarrow \text{Natural [ctor assoc comm id: 0] .} \] \[\text{endfm}\] \[ \text{fmod ACU-NAT-FUN is} \] \[ \text{pr ACU-NAT .} \] \[ \text{op max : Natural Natural } \rightarrow \text{Natural [comm] .} \] \[ \text{op min : Natural Natural } \rightarrow \text{Natural [comm] .} \] \[ \text{op \_-_- : Natural Natural } \rightarrow \text{Natural .} \] \[\text{*** monus} \] \[ \text{vars N M : Natural .} \] \[ \text{eq max(N,N + M) = N + M [variant] .} \] \[ \text{eq min(N,N + M) = N [variant] .} \] \[ \text{eq N - (N + M) = 0 [variant].} \] \[ \text{eq (N + M) - N = M [variant] .} \] \[\text{endfm}\] Example: Sets of Natural Numbers \((\Sigma, E \cup B)\) ```latex fmod ACU-NAT-SET is pr ACU-NAT. sort NaturalSet. sort Pred. subsort Natural < NaturalSet. op mt : -> NaturalSet [ctor]. op _,_ : NaturalSet NaturalSet -> NaturalSet [ctor assoc comm]. op tt : -> Pred [ctor]. *** set containment op _=C_ : NaturalSet NaturalSet -> Pred [ctor]. vars NS NS' : NaturalSet. *** identity of set union eq NS , mt = NS [variant]. *** idempotency of set union eq NS , NS = NS [variant]. *** idempotency of set union eq NS , NS , NS' = NS , NS' [variant]. eq mt =C NS = tt [variant]. eq NS =C NS = tt [variant]. eq NS =C NS , NS' = tt [variant]. endfm ``` R. Gutiérrez & J. 
Meseguer (UPV & UIUC) Variants Given a decomposition $\mathcal{R} = (\Sigma, B, \bar{E})$ of a MS equational theory $(\Sigma, E)$ and a $\Sigma$-term $t$, a variant of $t$ is a pair $(u, \theta)$ such that: - $u =_B (t\theta)!_{\bar{E},B}$, - $\text{dom}(\theta) \subseteq \text{vars}(t)$, and - $\theta = \theta!_{\bar{E},B}$, that is, $\theta(x) = \theta(x)!_{\bar{E},B}$ for all variables $x$. $(u, \theta)$ is called a ground variant iff, furthermore, $u \in T_\Sigma$. Given variants $(u, \theta)$ and $(v, \gamma)$ of $t$, $(u, \theta)$ is called more general than $(v, \gamma)$, denoted $(u, \theta) \sqsupseteq_B (v, \gamma)$, iff there is a substitution $\rho$ such that: - $(\theta \rho)|_{\text{vars}(t)} =_B \gamma$, and - $u \rho =_B v$. Let $[t]_{\bar{E},B} = \{(u_i, \theta_i) \mid i \in I\}$ denote a complete set of variants of $t$, that is, a set of variants such that for any variant $(v, \gamma)$ of $t$ there is an $i \in I$, such that $(u_i, \theta_i) \sqsupseteq_B (v, \gamma)$. Example: Variants get variants in ACU-NAT-FUN: min(1, N:Natural + K:Natural) . Variant #1 Natural: min(1, N:Natural + K:Natural) Variant #2 Natural: 1 K:Natural --> 1 + K1:Natural Variant #3 Natural: 1 N:Natural --> 1 + N1:Natural Variant #4 Natural: 0 N:Natural --> 0 K:Natural --> 0 get variants in ACU-NAT-FUN: N:Natural - K:Natural . Variant #1 Natural: N:Natural - K:Natural Variant #2 Natural: 0 K:Natural --> K1:Natural + N:Natural Variant #3 Natural: N1:Natural N:Natural --> N1:Natural + K:Natural Variant #4 Natural: 0 N:Natural --> 0 K:Natural --> 0 Finite Variant Property - A decomposition $\mathcal{R} = (\Sigma, B, R)$ has the finite variant property (FVP) iff for each $\Sigma$-term $t$ there is a finite complete set of variants $\llbracket t \rrbracket_{R,B} = \{(u_1, \theta_1) \ldots (u_n, \theta_n)\}$. - If $B$ has a finitary $B$-unification algorithm, and $\mathcal{R} = (\Sigma, B, R)$ has FVP, $\llbracket t \rrbracket_{R,B}$ can be chosen to be the set of most general variants. Note FVP easy to check when it holds. Example: ACU-NAT-SET is FVP. Representing Predicates - A **predicate** is viewed as a function symbol $p : s_1 \ldots s_n \to \text{Pred}$, with \text{Pred} a new sort having constant \text{tt}. - An atomic formula $p(t_1, \ldots, t_n)$ is then expressed as the equation $p(t_1, \ldots, t_n) = \text{tt}$. Example: Predicates on Sets of Natural Numbers fmod ACU-NAT-SET-PRED is pr ACU-NAT-SET . *** strict order op _>_ : Natural Natural -> Pred [ctor] . *** sort predicates op natural : NaturalSet -> Pred [ctor] . op even : NaturalSet -> Pred [ctor] . op odd : NaturalSet -> Pred [ctor] . vars N M : Natural . eq N + M + 1 > N = tt [variant] . eq natural(N) = tt [variant] . eq even(N + N) = tt [variant] . eq odd(N + N + 1) = tt [variant] . endfm **Constructor Variants** **Question** What variants of t cover as instances modulo B all canonical forms of all ground instances of t? Let $R = (\Sigma, B, R)$ be an FVP decomposition of $(\Sigma, E)$ protecting a constructor decomposition $R_\Omega = (\Omega, B_\Omega, R_\Omega)$. Assume that: - $\Sigma = \Omega \cup \Delta$ with $\Omega \cap \Delta = \emptyset$; - $B$ has a finitary $B$-unification algorithm and $B = B_\Omega \cup B_\Delta$, with $B_\Omega$ $\Omega$-equations and if $u = v \in B_\Delta$, $u,v$ are non-variable $\Delta$-terms. Call $\llbracket t \rrbracket_{R,B}^\Omega = \{(v, \theta) \in \llbracket t \rrbracket_{R,B} : v \in T_\Omega(X)\}$ the set of constructor variants of t. 
**Answer** If $[u] \in C_{R_\Omega}$ is of the form $u \equiv_B (t\gamma)!_{R,B}$, then there is $(v, \theta) \in \llbracket t \rrbracket_{R,B}^\Omega$ and a normalized ground substitution $\tau$ such that $u \equiv_B v\tau$. R. Gutiérrez & J. Meseguer (UPV & UIUC) An equational OS-FO theory \((\Sigma, E)\) is called **OS-compact** iff: - for each sort \(s\) in \(\Sigma\) we can effectively determine whether \(s\) is finite or infinite in \(T_{\Sigma/E,s}\), and, if finite, can effectively compute a representative ground term \(\text{rep}([u]) \in [u]\) for each \([u] \in T_{\Sigma/E,s}\); - \(=_{E}\) is decidable and \(E\) has a finitary unification algorithm; and - any finite conjunction \(\bigwedge D\) of negated \(\Sigma\)-atoms whose variables all have infinite sorts and such that \(\bigwedge D\) is \(E\)-consistent is satisfiable in \(T_{\Sigma,E}\). Call an OS theory \((\Sigma, E)\) **OS-compact** iff OS-FO theory \((\Sigma, E)\) is OS-compact. **Theorem** If \((\Sigma, E)\) is an **OS-compact** theory, then satisfiability of QF \(\Sigma\)-formulas in \(T_{\Sigma,E}\) is decidable. Current Variant Satisfiability Theorem 1 If \((\Omega, B_\Omega)\) has \(B_\Omega\) only with \(ACCU\)-axioms, then \((\Omega, B_\Omega)\) is OS-compact. Theorem 2 (Variant Satisfiability) If \((\Sigma, E \cup B)\) if FVP and protects \((\Omega, B_\Omega)\) with \(B_\Omega \subseteq ACCU\), then QF satisfiability in \((\Sigma, E \cup B)\) is decidable. Limitation Question What happens with the user-definable predicates? - $p$ is a constructor operator of sort $\text{Pred}$ which is not a free constructor modulo the axioms $B_\Omega$. - The OS-compactness of a constructor decomposition $\mathcal{R}_\Omega = (\Omega, B_\Omega, R_\Omega)$ can be broken (or be a hard to prove task) when adding user-definable predicates. Solution We provide a decision procedure for validity and satisfiability of QF formulas in the initial algebra of an FVP theory $\mathcal{R}$ that may contain user-definable predicates and protects a constructor decomposition $\mathcal{R}$ that need not be OS-compact under reasonable assumptions. Example: Negative Patterns - Greater than: $N > N + M$ - Even: - $\text{even}(mt)$ - $\text{even}(N + N + 1)$ - $((N = C \ NS /= tt), (NS /= mt)) \implies \text{even}((N, NS))$ - Odd: - $\text{odd}(mt)$ - $\text{odd}(N + N)$ - $((N = C \ NS /= tt), (NS /= mt)) \implies \text{odd}((N, NS))$ - Natural: - $\text{natural}(mt)$ - $((N = C \ NS /= tt), (NS /= mt)) \implies \text{natural}((N, NS))$ Negative Patterns - Negative constrained patterns are of the form: \[ \bigwedge_{1 \leq l \leq n_j} w^j_l \neq w'^j_l \Rightarrow p(v^j_1, \ldots, v^j_n) \neq tt, \quad 1 \leq j \leq m_p \] with the \(v^j_i\), \(w^j_l\) and \(w'^j_l\) \(\Omega_c\)-terms with variables in \(Y_j = \text{vars}(p(v^j_1, \ldots, v^j_n))\). 
- These negative constrained patterns are interpreted as meaning that the following semantic equivalences are valid in \(C_R\) for each \(p \in \Omega_\Pi\), where \(\rho_j \in \{\rho \in [Y_j \rightarrow T_{\Omega_c}] \mid \rho = \rho!_{R,B}\}\), \(B = B_\Delta \uplus B_{\Omega_c}\), and \(R = R_\Delta \uplus R_{\Omega_c} \uplus R_\Pi\): \[ [p(v^j_1, \ldots, v^j_n)\rho_j] \in C_R \Leftrightarrow \bigwedge_{1 \leq l \leq n_j} (w^j_l \neq w'^j_l)\rho_j \] \[ [p(t_1, \ldots, t_n)] \in C_R \Leftrightarrow \exists j \exists \rho_j \ [p(t_1, \ldots, t_n)] = [p(v^j_1, \ldots, v^j_n)\rho_j] \land \bigwedge_{1 \leq l \leq n_j} (w^j_l \neq w'^j_l)\rho_j \] The inductive validity decision problem of whether $C_R \models \varphi$ is reduced to deciding whether $\neg \varphi$ is unsatisfiable in $C_R$. In this way, it is enough to decide the satisfiability of a conjunction of $\Sigma$-litersals of the form $\bigwedge G \land \bigwedge D$ (the QF $\Sigma$-formula in disjunctive normal form), where the $G$ are equations and the $D$ are disequations. Steps: 1. **Unification.** Satisfiability of the conjunction $\bigwedge G \land \bigwedge D$ is replaced by satisfiability for some conjunction in the set $\{ (\bigwedge D\alpha)_{R,B} \mid \alpha \in \text{VarUnif}_E(\bigwedge G) \}$. The Inductive Satisfiability Decision Procedure (2/2) 2 **Π-Elimination.** For each $\bigwedge D' = \bigwedge D_1 \land p(t_1, \ldots, t_n) \neq tt \land \bigwedge D_2$, we replace $\bigwedge D'$ by all not obviously unsatisfiable conjunctions of the form: $$\left( \bigwedge D_1 \land \bigwedge_{1 \leq l \leq n_j} w^{j_l} \neq \alpha^{j_l} \land \bigwedge D_2 \right) \theta \alpha$$ where $1 \leq j \leq m_p$, $W = \text{vars}(\bigwedge D')$, $(p(t'_1, \ldots, t'_n), \theta) \in \llbracket p(t_1, \ldots, t_n) \rrbracket_{R, B}^W$, and $\alpha$ is a disjoint $B_{\Omega_c}$-unifier of the equation $p(t'_1, \ldots, t'_n) = p(v^{j_1}_1, \ldots, v^{j_n}_n)$. 3 **Reduce Conjunctions of \(\Sigma\) Disequalities to Conjunctions of \(\Omega_c\) Disequalities.** For $\bigwedge D'$ a $\Delta \cup \Omega_c$-conjunction of disequalities, viewed as a ($\Delta \cup \Omega_c$)-term its constructor $\Omega_c^\wedge$-variants are of the form $(\bigwedge D'', \gamma)$, with $\bigwedge D''$ an $\Omega_c$-conjunction of disequalities. Then $\bigwedge D'$ is satisfiable in $C_R$ iff some $\bigwedge D'' \tau$ so obtained is $B_{\Omega_c}$-consistent for some $\Omega_c^\wedge$-variant $(\bigwedge D'', \gamma)$ of $\bigwedge D'$. We have implemented the variant satisfiability decision procedure in a new prototype tool. The implementation consists of 11 new Maude modules (from 17 in total), 2345 new lines of code, and uses the Maude’s META-LEVEL to carry out the steps of the procedure in a reflective way. We have also developed a Maude interface to ease the definition of properties and patterns as equations. The three steps of the variant satisfiability procedure are implemented using Maude’s META-LEVEL functions. Example: Odd and Even mod ACU-NAT-SET-PRED-CONJECTURES is pr ACU-NAT-SET-PRED-PATTERNS . *** odd(N) = tt \iff even(N) /= tt . op prop1 : Natural -> AtomMagma . op prop2 : Natural -> AtomMagma . eq prop1(N) = (odd(N) = tt) , (even(N) = tt) . eq prop2(N) = (even(N) /= tt) , (odd(N) /= tt) . endm Unification of prop1: No variant unifiers can be found. Unification of prop2: (even(N) /= tt) , (odd(N) /= tt) Predicate elimination of prop2: even(M + M) /= tt , odd(M + M) /= tt \Rightarrow tt /= tt , odd(M + M) /= tt Unsatisfiable! 
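An aside not on the original slides: one way to read why prop2 comes out unsatisfiable is as a case split over the two constructor shapes a canonical Natural can take, with each branch collapsing to the contradiction tt /= tt:

\[
\begin{aligned}
N \mapsto M + M:\quad & \mathrm{even}(M+M) = \mathrm{tt} \;\Rightarrow\; \mathrm{tt} \neq \mathrm{tt} \\
N \mapsto M + M + 1:\quad & \mathrm{odd}(M+M+1) = \mathrm{tt} \;\Rightarrow\; \mathrm{tt} \neq \mathrm{tt}
\end{aligned}
\]

Since every canonical Natural built from 0, 1 and _+_ modulo ACU falls into one of these two cases, no branch survives, which matches the Unsatisfiable! answer above.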
Example: Greater Than mod ACU-NAT-SET-PRED-CONJECTURES is pr ACU-NAT-SET-PRED-PATTERNS . *** N > M = tt \/ N = M \/ M > N = tt op prop : Natural Natural -> AtomMagma . eq prop(N,M) = (N > M /= tt) , (N /= M) , (M > N /= tt) . endm Unification of prop: (N > M /= tt) , (N /= M) , (M > N /= tt) Predicate elimination of prop: (N > N + 0 /= tt) , (N /= N + 0) , (N + 0 > N /= tt) => (N /= N) Unsatisfiable! Conclusions and future work - Satisfiability decision procedures can be either theory-specific or theory-generic. These two classes of procedures complement each other: theory specific ones are more efficient; but theory-generic ones are user-definable and can substantially increase the range of SMT solvers. - Our work has extended variant satisfiability to support initial algebras specified by FVP theories with user-definable predicates under fairly general conditions. Since such predicates are often needed in specifications, this substantially enlarges the scope of variant-based initial satisfiability algorithms. - The most obvious next step is to combine the original variant satisfiability algorithm with the present one. - Furthermore, our goal is to include this powerful decision procedure in our automatic inductive theorem prover $\nu$-ITP.
{"Source-Url": "https://www.sci.unich.it/lopstr17/slides/1709.05203-slides.pdf", "len_cl100k_base": 4807, "olmocr-version": "0.1.53", "pdf-total-pages": 23, "total-fallback-pages": 0, "total-input-tokens": 42016, "total-output-tokens": 5967, "length": "2e12", "weborganizer": {"__label__adult": 0.0004394054412841797, "__label__art_design": 0.00041294097900390625, "__label__crime_law": 0.0006155967712402344, "__label__education_jobs": 0.0007534027099609375, "__label__entertainment": 0.00010395050048828124, "__label__fashion_beauty": 0.00021541118621826172, "__label__finance_business": 0.0002963542938232422, "__label__food_dining": 0.0005507469177246094, "__label__games": 0.0008592605590820312, "__label__hardware": 0.0013065338134765625, "__label__health": 0.0008182525634765625, "__label__history": 0.0003287792205810547, "__label__home_hobbies": 0.00015807151794433594, "__label__industrial": 0.0008649826049804688, "__label__literature": 0.0004315376281738281, "__label__politics": 0.0004353523254394531, "__label__religion": 0.0007543563842773438, "__label__science_tech": 0.11376953125, "__label__social_life": 0.0001583099365234375, "__label__software": 0.00785064697265625, "__label__software_dev": 0.8671875, "__label__sports_fitness": 0.0004146099090576172, "__label__transportation": 0.0009541511535644532, "__label__travel": 0.00026154518127441406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 14590, 0.01513]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 14590, 0.55009]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 14590, 0.62994]], "google_gemma-3-12b-it_contains_pii": [[0, 335, false], [335, 1129, null], [1129, 1523, null], [1523, 1697, null], [1697, 2571, null], [2571, 3301, null], [3301, 4287, null], [4287, 4858, null], [4858, 5373, null], [5373, 5652, null], [5652, 6127, null], [6127, 7105, null], [7105, 7949, null], [7949, 8306, null], [8306, 8980, null], [8980, 9392, null], [9392, 10374, null], [10374, 11010, null], [11010, 12240, null], [12240, 12735, null], [12735, 13308, null], [13308, 13729, null], [13729, 14590, null]], "google_gemma-3-12b-it_is_public_document": [[0, 335, true], [335, 1129, null], [1129, 1523, null], [1523, 1697, null], [1697, 2571, null], [2571, 3301, null], [3301, 4287, null], [4287, 4858, null], [4858, 5373, null], [5373, 5652, null], [5652, 6127, null], [6127, 7105, null], [7105, 7949, null], [7949, 8306, null], [8306, 8980, null], [8980, 9392, null], [9392, 10374, null], [10374, 11010, null], [11010, 12240, null], [12240, 12735, null], [12735, 13308, null], [13308, 13729, null], [13729, 14590, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 14590, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 14590, null]], 
"google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 14590, null]], "pdf_page_numbers": [[0, 335, 1], [335, 1129, 2], [1129, 1523, 3], [1523, 1697, 4], [1697, 2571, 5], [2571, 3301, 6], [3301, 4287, 7], [4287, 4858, 8], [4858, 5373, 9], [5373, 5652, 10], [5652, 6127, 11], [6127, 7105, 12], [7105, 7949, 13], [7949, 8306, 14], [8306, 8980, 15], [8980, 9392, 16], [9392, 10374, 17], [10374, 11010, 18], [11010, 12240, 19], [12240, 12735, 20], [12735, 13308, 21], [13308, 13729, 22], [13729, 14590, 23]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 14590, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
05fddb0bf2e84fb0f8b2361a5136cbcbbe4bacf8
Developing a Hierarchical Multi-Label Classifier for Twitter Trending Topics Jinan Fiaidhi¹, Sabah Mohammed¹, Aminul Islam¹, Simon Fong² and Tai-hoon Kim³ ¹Department of Computer Science Lakehead University, Thunder Bay, Ontario P7B 5E1, Canada ²Faculty of Science and Technology, University of Macau, Macau, China ³Department of Computer Engineering, Glocal Campus, Konkuk University, Korea ¹{jfiaidhi, mohammed, maislam}@lakeheadu.ca, ²ccfong@umac.mo, ³taihoonn@kku.ac.kr Abstract In recent years, there has been rapid growth in discussion groups and micro blogging, in which an important characteristic of the entries is their trending topics on some generalized categories. Many researchers have attempted to classify trending topics by using only keywords, trending topics are rarely straightforward; they are normally expressed in a more subtle manner. It is well accepted that using high-dimensional multi-modal language features for tweets content representation and classifier training may achieve more sufficient characterization of the diverse properties of the tweets and further result in higher discrimination power of the classifiers. However, training the classifiers in a high-dimensional multi-modal feature space requires a large number of labeled training tweets, which will further result in the problem of curse of dimensionality. To tackle this problem, a hierarchical feature subset selection algorithm need to be used to enable more accurate tweets classification; where the processes for feature selection and classifier training are seamlessly integrated in a single framework. In this article, we used the LingPipe classifier to accurately classify the Twitter trending topics where it shows a substantial improvement over their state-of-the art trending topics-trained counterparts. Keywords: Trending topics; Trending Topics Classification; LingPipe API 1. Introduction Currently Twitter is heavily used as a source of communication. People are busy writing on Twitter about what's going around and within their personal space as well as on variety of shared issues. With the torrential streams of tweets, there's an emerging demand to sieve signals from noises and harvest useful information. Besides Twitter Search¹, there are many Twitter Analytics tools (e.g., TwitAnalyzer², MicroPlaza³, Twist⁴, TwitTruly⁵, TweetStats⁶, ¹ https://twitter.com/search ² http://www.twitalyzer.com/5/index.asp ³ https://twitter.com/microplaza ⁵ http://twitturly.com/ ⁶ http://www.tweetstats.com/ TwitterFriends\(^7\) to analyze Twitter streams. Each of these tools serves specific purpose. They crawl and sift through Twitter streams; also, aggregate, rank and slice-and-dice data to deliver some insights on Twitter activities and trends. There’s no single best analytic tool available but for some cases a combination of these tools may extract interesting insights from Twitter streams [1]. Similarly, the tools for analyzing popular topics or trending topics on Twitter (e.g., Datasift\(^8\), What the Trend\(^9\), Trendsmap\(^10\)) fail to accurately classifying tweets based on general categories [11]. Even Twitter allows users to observe only limited number of trending topics where these topics are restricted to the top ten popular terms of discussion at any given moment. Due to the volume of tweets, it is natural to consider techniques like named-entity recognition, information extraction, and text mining over tweets. 
Not surprisingly, the performance of “off the shelf” natural language processing tools, which were trained on news corpora, is weak on tweet corpora [2]. To address this challenge we need to develop a technique that can identify types of the entities tweets may contain. This paper presents a new methodology, which helps improve the accuracy of tweets trending topics classification. Unlike most prior works which focused on lexical features at the word level, the methodology presented here attempts to include more contextual information by focusing on the whole tweet level by using a classifier that takes into account the language model. This is a dynamic classifier that accepts training events of categorized character sequences based on a multivariate estimator for the category distribution and dynamic language models for the per-category character sequence estimators. Luckily, we do not need to describe and implement this type Classifier\(^11\) has been developed by a research group at Carnegie of dynamic classifier as the LingPipe’s LanguageModel (LM) Mellon University where they provided a suite of Java libraries for the linguistic analysis of human language. Using the LingPipe classifier, we managed to more accurately classify the Twitter trending topics where it shows a substantial improvement over their state-of-the-art trending topics-trained counterparts. 2. Related Research Twitter, a popular micro-blogging site that present opportunities for research in natural language processing (NLP) and machine learning. One of such opportunities is trending topics classification [3]. There are number of research papers addressed twitter classification, sentiment classification as well as on trending topics classification. James Bernhard’s [4] classifies the text to a predefined set of generic classes such as News, Events, Opinions, Deals, and Private Messages based on domain-specific features extracted from the user profile and text. Alec Go [5] introduced a method to automatically classify of Twitter messages either positive or negative. In this direction they used machine learning algorithms for classifying the sentiment of Twitter messages using distant supervision. Sheila Kinsella \(^7\) http://stats.brandtweet.com/ \(^8\) http://datasift.com/ \(^9\) http://whatthetrend.com/ \(^10\) http://trendsmap.com/ [6] included external hyperlinks and metadata to classify Social Media data. She showed that external metadata has better descriptive power for topic classification than the original posts and gives better classification results. Sankaranarayanan [7] have introduced a method that can be used to automatically obtain breaking news from the tweets. In particular, this article uses noisy data along with a naïve Bayesian classifier to improve the quality of the noisy data by throwing away a large portion of the tweets noise. Backer [8], however, introduced an approach to distinguish between real-world events from a family of non-events messages. He used an online clustering technique that groups together topically similar tweets, and computed features that can be used to train a classifier to distinguish between event and non-event clusters. Arkaitz Zubiaga [9] introduces a typology to categorize the triggers that leverage trending topics: news, current events, memes, and commemoratives by defining a set of straightforward language-independent features that rely on the social spread of the trends to discriminate among those types of trending topics. 
Thongsuk [10] proposed a framework for classification by using Twitter posts from three business types, i.e., airline, food and computer & technology. They used feature transformation and feature expansion to classify business type tweets on Twitter. K Lee [11] classifies Twitter Trending Topics into 18 general categories. They used Bag-of-Words approach for text classification and network-based classification. In twellow 12 they collected publicly available messages and scans users profiles from the Twitter.com in order to categorize users and identify those users responsible for those messages into the various categories. However, none of these research attempts try to classify Twitter trending topics based on personalized attributes. 3. Identifying and Categorizing Trending Topics In our preparation to categorize and classify Twitter trending topics we collected a reasonable tweets dataset using the Twitter streaming API 13, with the filter tweet stream providing the input data and the trends/location stream providing the list of terms identified by Twitter as trending topics. The filter streaming API is a limited stream returns public statuses that match one or more filter predicates. The United States (New York) and Canada (Toronto) was used as the location for evaluation. Google Geocoding API 14 has been used to get location wise Twitter data. The streaming data was collected automatically using the Twitter4j API 15. The streaming data was stored in a tabular CSV formatted file. Data has been collected with different time interval for the same city and topics. In this direction, we have collected different topics dataset for different city with different time interval. For Labeling we identified 12 general classes for topic classification. These classes are Politics, Education, Health, Marketing, Music, News & Media, Recreation & Sports, Computers & Technology, Pets, Food, Family, and other. Since twitter is our primary source we have used the twitter search API to search topics and manually label the topics. If the tweets are related to political issues then they will be classified as politics. If the topic is not related to any Category then --- 12 http://www.twellow.com 13 twitter.com 14 developers.google.com/maps/documentation/geocoding 15 Twitter4j.org the topic will be classified as other category. The distribution of collected data over the 12 classes is provided in Figure 1. ![Label of 917 Topics Across 12 Classes](image) **Figure 1. Label of 917 Topics Across 12 Classes** Having collected a reasonable dataset for generally identified tweets across the 12 general classes, we can then use this dataset to training effective text classifiers for classifying newly collected tweets. Indeed machine learning [12] can be used as a general inductive process to automatically builds a text classifier by learning the characteristics of a set of previously classified documents. These characteristics are then used to classify new documents. Different types of Text Classification tasks can be defined [13] between single-label and multi-label classification. In Single-label (also called multi-class) Text Classification, exactly one category must be assigned to a document. In Multi-label Text Classification, any number of categories may be assigned to a document. Binary categorization is a special case of single-label categorization, in which there is only one category and each document can be assigned to it or not. 
Many classification methods, such as Naïve Bayes, Support Vector Machine, are of the single-label type. The most popular approach for multi-label classification is binary approach [14] but this method has two main problems. First, it assumes independence of categories, which is not always true and second problem is that a big number of binary classifiers have to be learned, which may cause memory problems, and take a lot of time. Hierarchical classification [15-16] has advantages compared to flat classification it enables easy location of required categories which makes it easier to search among large number of categories and sub categories. It also reflects the intuition of relatedness of topics that are close to each other in the hierarchy. Two hierarchical classification methods big-bang and top-down level based approach. In the big-bang approach, a document is classified into a category in the category tree by a classifier in one single step. In the top-down level-based approach, one or more classifiers are constructed at each level of the category tree, and each classifier works as a flat classifier at that level [14]. Koller [16] divide the hierarchical classification task into a set of smaller classification tasks, each of which corresponds to some split in the classification hierarchy. In their result the size of the classifier allow to obtain significantly higher accuracy, a reduction due both to increased robustness and to our ability to use more accurate classifiers. Figure 2 shows our Trendy Topics Classification approach. We have implemented hierarchical multi-label classification algorithm using a flat multi-class classifier provided by LingPipe API16. LingPipe’s LanguageModel (LM) Classifier performs joint probability-based classification of character sequences into non-overlapping categories based on language models for each category and a multivariate distribution over categories. The LingPipe’s LM classifier is a language model classifier that accepts training events of categorized character sequences. Training is based on a multivariate estimator for the category distribution and dynamic language models for the per-category character sequence estimators. It calculates conditional and joint probabilities of each category for the classified object. A scoring classifier goes one step further and assigns a (floating point) score to each category. These may then be sorted to provide a ranking and a first-best result, with higher scores taken to be better matches and LingPipe classifier returns one best category as result of classification process. For multi-label classification, we apply an approach based on estimations of probabilities of an item to belong to some category. To determine the threshold for multi-label classification we use the cross-entropy scores provided by LingPipe classifier, as they are better suited for cross document comparison. A Naive Bayes classifier assumes that the presence of a particular feature of a class is unrelated to the presence of any other feature given the class variable17. A classifier is constructed from a set of categories and a tokenizer factory. For this purpose we have used Whitespace Tokenizer Factory. Naive Bayes applied to tokenized text results in a so-called "bag of words" model where the tokens (words) are assumed to be independent of one another18. This classifier has been implemented as NaiveBayesClassifier class in Lingpipe direct derivative of the DynamicLMClassifier as per LingPipe API doc. 
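To make the threshold-based multi-label step above concrete, here is a minimal sketch (not taken from the paper) of turning per-category scores into a set of labels. The cutoff value and the `multiLabel` helper name are hypothetical; the accessors (`bestCategory`, `size`, `category`, `score`) follow LingPipe's scored-classification interface, and the paper itself derives its threshold from cross-entropy scores rather than a fixed constant.

```java
import java.util.ArrayList;
import java.util.List;
import com.aliasi.classify.ScoredClassification;

public final class MultiLabelSketch {

    // Hypothetical fixed cutoff; the paper computes its threshold from cross-entropy scores.
    private static final double SCORE_THRESHOLD = -2.5;

    /** Returns the best category plus every further category whose score clears the cutoff. */
    static List<String> multiLabel(ScoredClassification scored) {
        List<String> labels = new ArrayList<>();
        labels.add(scored.bestCategory());            // rank 0: the single-label answer
        for (int rank = 1; rank < scored.size(); ++rank) {
            if (scored.score(rank) >= SCORE_THRESHOLD) {
                labels.add(scored.category(rank));    // additional labels for the same topic
            }
        }
        return labels;
    }
}
```

Because the classifier returns only one best category per call, this kind of post-processing over the ranked scores is what lets a single topic receive more than one of the 12 labels.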
The K Nearest Neighbor (Knn) Classifier uses the K Nearest Neighbor algorithm to classify Data. A KnnClassifier implements k-nearest-neighbor classification based on feature --- 16 http://alias-i.com/lingpipe/ 17 http://en.wikipedia.org/wiki/Naive_Bayes_classifier 18 http://alias-i.com/lingpipe/docs/api/index.html extraction and a vector proximity or distance. K-nearest-neighbor classification is a kind of memory-based learning in which every training instance is stored along with its category. In the training phase the algorithm stores feature vectors of the training examples along with their categories. The features are extracted with the "bag of words" model using Whitespace Tokenizer Factory. This classifier has been implemented using LingPipe KnnClassifier class. The TF-IDF Classifier is based on term frequency and inverse document frequency to classify data. LingPipe's TF-IDF classifier training phase is similar to that used for the Knn and Naive Bayes classifiers. The features are extracted with the "bag of words" model using Whitespace Tokenizer Factory. This classifier has been implemented using LingPipe TfidfClassifier class. The process of selecting the best of these classifiers can illustrated using the following code snippet: ```javascript var classifier = DynamicLMClassifier.createNGramProcess (CATEGORIES,Ngram_Size); for(int i=0; i<CATEGORIES.length; ++i) { var Dir = getCategoryDirectory(); var files = getListofFile(Dir); for (int j = 0; j < files.length; ++j) { var text = read(Dir,files[j]); text = applyWordNetSynonym(text); var classification= new Classification(CATEGORIES[i]); var classified= new Classified (text,classification); classifier.handle(classified); } } var compiledC = AbstractExternalizable.compile(classifier); var evaluator = new ClassifierEvaluator<> (compiledClassifier, CATEGORIES, storeCategories); for(int i = 0; i < CATEGORIES.length; ++i) { for(int k = 0; k < _listOFFiles.length; ++k){ var text = readFile(_listOFFiles[k]); var classification = new Classification(CATEGORIES[i]); var classified = new Classified (text,classification); evaluator.handle(classified); var jc = compiledClassifier.classify(text); String bestCategory = jc.bestCategory(); } var summery = evaluator.confusionMatrix().microAverage(); for(c=0; c<CATEGORIES.length; ++c) { var catSummery = evaluator.oneVersusAll(CATEGORIES[i]) } ``` First it initializes the classifier with category array and the n-gram size then the loop continues through the categories. The training data is organized into directories by category, and then the training files are read from the file using the LingPipe utility method. After that we applied the Word NetSynonym database to get synonym for each the tweet word wherever possible. The resulting data is used to train the classifier for the specified category. --- 20 http://wordnet.princeton.edu/wordnet Then it creates an evaluator from the classifier. Next for each category we have read all testing data and execute the provided LingPipe classifiers to get best category according to given training dataset. This will continue until the end of all categories. We repeat the same process for each category because each testing dataset can be assign to multiple categories. After classified the dataset into 12 different categories we then apply our T3C [17] method to personalize trending topics. This iterative process will return at the end the summery of testing result sets for each category. 4. 
Experiments and Results Our experimentation starts by collecting reasonable tweets samples on general topics like health, education, sports, economy, Family, Technology, Music and politics. First we collected random Tweets using Twitter Streaming API. For Labeling we build an Interface to label data into 12 different categories. We have labeled two different dataset to experiment our result. We have collected tweets using the Twitter Streaming API and label them into 12 different category and for second dataset we have apply T3C [17] to get trending topics and then labeled the topics. During labeling process tweets were preprocessed to remove URL’s, Unicode characters, usernames, and punctuation, html, etc. A stop word file containing common English stop words was used to filter out tweets from common words. For First experiment we have collected 100,000 Tweets. Language Model (LM) Classifier, Naive Bayes Classifier, K-Nearest Neighbor (Knn) Classifier, and TF-IDF Classifier were chosen for the experiment. Table 1 presents our results sets where we use the overall classifier accuracy for the classifier performance. Figure 3 shows the performance comparison graph. ### Table 1. Performance for Lingpipe Classification Experiment <table> <thead> <tr> <th>Category</th> <th>Language Model Classifier</th> <th>Naive Bayes Classifier</th> <th>K-Nearest Neighbor</th> <th>TF-IDF Classifier</th> </tr> </thead> <tbody> <tr> <td></td> <td>Accuracy</td> <td>Recall</td> <td>Precision</td> <td>Accuracy</td> </tr> <tr> <td>Politics</td> <td>0.8</td> <td>0.14</td> <td>0.08</td> <td>0.8</td> </tr> <tr> <td>Education</td> <td>0.88</td> <td>0.05</td> <td>0.08</td> <td>0.88</td> </tr> <tr> <td>Other</td> <td>0.86</td> <td>0.07</td> <td>0.08</td> <td>0.86</td> </tr> <tr> <td>Health</td> <td>0.83</td> <td>0.1</td> <td>0.08</td> <td>0.84</td> </tr> <tr> <td>Marketing</td> <td>0.88</td> <td>0.05</td> <td>0.08</td> <td>0.87</td> </tr> <tr> <td>Music</td> <td>0.84</td> <td>0.09</td> <td>0.08</td> <td>0.84</td> </tr> <tr> <td>News &amp; Media</td> <td>0.82</td> <td>0.11</td> <td>0.08</td> <td>0.81</td> </tr> <tr> <td>Recreation &amp; Sports</td> <td>0.83</td> <td>0.11</td> <td>0.08</td> <td>0.82</td> </tr> <tr> <td>Computers &amp; Technology</td> <td>0.86</td> <td>0.07</td> <td>0.08</td> <td>0.86</td> </tr> <tr> <td>Peta</td> <td>0.86</td> <td>0.07</td> <td>0.08</td> <td>0.86</td> </tr> <tr> <td>Food</td> <td>0.84</td> <td>0.09</td> <td>0.08</td> <td>0.84</td> </tr> <tr> <td>Family</td> <td>0.88</td> <td>0.04</td> <td>0.08</td> <td>0.88</td> </tr> </tbody> </table> 21 http://flash.lakeheadu.ca/~maislam/Mining-DataSet/TrainedData/ 22 http://flash.lakeheadu.ca/~maislam/Data/stopwords.txt 23 http://flash.lakeheadu.ca/~maislam/Mining-DataSet/TestingData/ Figure 3. Comparison Graph for Lingpipe 4 Classifier Results For the Language Model (LM) Classifier algorithm, the size of the n-gram needs to be set. N-gram is a sub-sequence of length n of the items given. The Language Model rule is to classify a newly given document based on prediction occurring n-grams. Figure 4. Performance Graph for n-gram Size LM Classifier This algorithm uses a character based n-gram to classify Tweets so an appropriate size should be the average length of a word. Figure 4 shows performance Graph for n-gram Size Language Model (LM) Classifier. Figure 5 show the overall performance accuracy Comparisons graph when we apply Lingpipe Classification algorithm on Twitter Trendy Topics dataset and General Tweets dataset. 
(a) Comparison Graph (b) Comparison Histogram Figure 5. Comparing Trendy Topics Categorization based on Two Different Trained Dataset 5. Conclusion In this article we used a high-dimensional multi-modal language features for tweets content representation and classifier training to accurately characterizing the diverse properties of the tweets and further result in higher discrimination power of the classifiers. However, training the classifiers in a high-dimensional multi-modal feature space requires a large number of labeled training tweets, which will further result in the problem of curse of dimensionality. To tackle this problem, a hierarchical feature subset selection algorithm need to be used to enable more accurate tweets classification; where the processes for feature selection and classifier training are seamlessly integrated in a single framework. For this purpose we have applied four supervised machine learning algorithms Language Model (LM) Classifier, Naive Bayes Classifier, K-Nearest Neighbor (Knn) Classifier, and TF-IDF Classifier. All the results of these experiments were published at our Lakehead University Flash server. We found that well trained machine learning algorithms can provides very good classifications to the Twitter Trending Topics. In terms of overall performance accuracy, all four algorithms can reach more than 75% of classification correctly. However, the Language Model (LM) Classifier in N-gram model performs better than the other three classification algorithm. Also our experiment show that a larger twitter training data set perform better in Trending Topic classifications over trending topics training dataset. Figure 6 shows the Multi-Label Twitter Trending Topics Classification diagram. For this purpose, we used the LingPipe classifier to classify the Twitter trending topics where it shows a substantial improvement over their state-of-the art trending topics-trained counterparts. Figure 7 shows GUI of our Twitter Trending Topics Classification --- 24 http://flash.lakeheadu.ca/~maislam/Mining-Dataset/TestSample/ Acknowledgements Dr. J. Fiaidhi would like to acknowledge the support of NSERC for the research conducted in this article. References Authors **Jinan Fiaidhi** is a Professor of Computer Science and Graduate Coordinators at Lakehead University of Canada. Professional Engineer of Ontario and Adjunct research Professor with University of Western Ontario. Research is on Collaborative Learning, Calm Computing and Machine Learning. **Aminul Islam** received his BSc degree in computer science and engineering from Darul Ihsan University, Dhaka, Bangladesh in 2006. Currently he is a Master’s student in computer science at Lakehead University, Thunder Bay, Canada. **Sabah Mohammed** is a Professor of Computer Science at Lakehead University of Canada. Professional Engineer of Ontario and Adjunct research Professor with University of Western Ontario. Research is on Web Intelligence and Medical Informatics. **Simon Fong** is a Professor with the Department of Computer and Information Science at Macau University of China. Research is on Data Analytics, E-Commerce technology, Business Intelligence and Data-mining. **Tai hoon Kim** is a Professor of Computer Science, Konkuk University, Korea. Also with GVSA and UTAS, Australia. Vice President of SERSC. Research is on Computer Security.
{"Source-Url": "http://www.sersc.org/journals/IJUNESST/vol6_no3/1.pdf", "len_cl100k_base": 5179, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 31191, "total-output-tokens": 7504, "length": "2e12", "weborganizer": {"__label__adult": 0.000347137451171875, "__label__art_design": 0.0006647109985351562, "__label__crime_law": 0.0004382133483886719, "__label__education_jobs": 0.002872467041015625, "__label__entertainment": 0.0003659725189208984, "__label__fashion_beauty": 0.0002560615539550781, "__label__finance_business": 0.0004544258117675781, "__label__food_dining": 0.000461578369140625, "__label__games": 0.0008263587951660156, "__label__hardware": 0.0016183853149414062, "__label__health": 0.0008006095886230469, "__label__history": 0.0003974437713623047, "__label__home_hobbies": 0.00014030933380126953, "__label__industrial": 0.00041365623474121094, "__label__literature": 0.0008368492126464844, "__label__politics": 0.0005764961242675781, "__label__religion": 0.0004277229309082031, "__label__science_tech": 0.3544921875, "__label__social_life": 0.0004222393035888672, "__label__software": 0.11029052734375, "__label__software_dev": 0.52197265625, "__label__sports_fitness": 0.00031638145446777344, "__label__transportation": 0.0003743171691894531, "__label__travel": 0.00019800662994384768}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29410, 0.04283]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29410, 0.2238]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29410, 0.81994]], "google_gemma-3-12b-it_contains_pii": [[0, 2576, false], [2576, 6002, null], [6002, 9473, null], [9473, 12210, null], [12210, 14493, null], [14493, 17367, null], [17367, 21860, null], [21860, 22438, null], [22438, 22749, null], [22749, 24701, null], [24701, 27628, null], [27628, 29410, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2576, true], [2576, 6002, null], [6002, 9473, null], [9473, 12210, null], [12210, 14493, null], [14493, 17367, null], [17367, 21860, null], [21860, 22438, null], [22438, 22749, null], [22749, 24701, null], [24701, 27628, null], [27628, 29410, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29410, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29410, null]], "pdf_page_numbers": [[0, 2576, 1], [2576, 6002, 2], [6002, 9473, 3], [9473, 12210, 4], [12210, 14493, 5], [14493, 17367, 6], [17367, 21860, 7], [21860, 22438, 8], [22438, 22749, 9], [22749, 24701, 10], [24701, 27628, 11], [27628, 29410, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29410, 0.10563]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
70f31cfccc1aa8cd177ed555c4ffa32787523a7e
CSE 331 Software Design & Implementation Topic: ADTs + Rep. Invariants 💬 Discussion: What did you struggle with on HW2? Reminders • Great work on HW2! • We won’t have lecture on Monday 😞 Upcoming Deadlines • Prep. Quiz: HW3 due Tuesday (7/5) • HW3 due Thursday (7/7) Last Time... - Why Specifications? - JavaDoc - Comparing Specifications - weaker benefits implementer - stronger benefits client - Reasoning about Functions Today's Agenda - Abstract Data Types - ADTs in Java - Representation Invariants Function Calls Correctness Toolkit - Learned forward and backward reasoning for - assignment - if statement - while loop - One missing element: function calls - we needed specifications for that - now we have them Reasoning about Function Calls static int f(int a, int b) { ... } @requires P(a,b) -- some assertion about a & b @returns R(a,b,c) -- some assertion about a, b, & c (returned) Forward {{{ A }}} c = f(a, b); Reasoning about Function Calls ```c static int f(int a, int b) { ... } @requires P(a,b) -- some assertion about a & b @returns R(a,b,c) -- some assertion about a, b, & c (returned) Forward {{ A }} if A implies P(a,b) c = f(a, b); {{ A and R(a,b,c) }} ``` Reasoning about Function Calls \[ \text{static int } f(\text{int } a, \text{ int } b) \{ \ldots \} \] - \textbf{@requires} \ P(a, b) -- some assertion about \(a \) & \(b\) - \textbf{@returns} \ R(a, b, c) -- some assertion about \(a, b, \) & \(c\) (returned) **Backward** \[ c = f(a, b); \{ \text{ B and Q(a,b,c) } \} \] Reasoning about Function Calls ```c static int f(int a, int b) { ... } ``` - **@requires** P(a,b) -- some assertion about a & b - **@returns** R(a,b,c) -- some assertion about a, b, & c (returned) ### Backward ```c {{ B and P(a,b) }} c = f(a, b); {{ B and Q(a,b,c) }} ``` Reasoning about Function Calls ``` static int f(int a, int b) { ... } @requires P(a,b) -- some assertion about a & b @returns R(a,b,c) -- some assertion about a, b, & c (returned) ``` **Backward** ``` {{ B and P(a,b) }} c = f(a, b); if R(a,b,c) implies Q(a, b, c) {{ B and Q(a,b,c) }} ``` Reasoning about Function Calls ```c static int f(int a, int b) { ... } @requires P(a,b) -- some assertion about a & b @return R(a,b,c) -- some assertion about a, b, & c (returned) ``` Similar to assignment statements when the specification has @requires and @return – Gets a little trickier when we have @modifies or @effects Reasoning about Objects Previously looked at writing specifications for methods. The situation gets more complex with object-oriented code... This lecture: 1. What is an Abstract Data Type (ADT)? 2. How to write a specification for an ADT 3. Design methodology for ADTs 4. Reasoning about the implementation of an ADT Next lecture(s): • Documenting the implementation of an ADT Why we need Data Abstractions (ADTs) Manipulating and presenting data is pervasive - choosing how to organize that data is key design problem - inventing and describing algorithms is less common Often best to start your design by designing data... Bad programmers worry about the code. Good programmers worry about data structures and their relationships. -- Linus Torvalds Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious. 
Reasoning about Objects

Previously looked at writing specifications for methods. The situation gets more complex with object-oriented code...

This lecture:
1. What is an Abstract Data Type (ADT)?
2. How to write a specification for an ADT
3. Design methodology for ADTs
4. Reasoning about the implementation of an ADT

Next lecture(s):
• Documenting the implementation of an ADT

Why we need Data Abstractions (ADTs)

Manipulating and presenting data is pervasive
- choosing how to organize that data is the key design problem
- inventing and describing algorithms is less common

Often best to start your design by designing data...

Bad programmers worry about the code. Good programmers worry about data structures and their relationships.
-- Linus Torvalds

Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won’t usually need your flowcharts; they’ll be obvious.
-- Fred Brooks

Designing Around Data

Brooks says it is enough to decide what your data looks like
– (don’t even need to say how it is organized)
– can figure out the data structures & code from that

In fact, even that is possibly too detailed...
– leave room to change data structures over time
– all we really need to know is what operations we need to perform with the data
– the specs for those operations are the spec for the data

An abstract data type defines a class of abstract objects which is completely characterized by the operations available on those objects … When a programmer makes use of an abstract data object, he [sic] is concerned only with the behavior which that object exhibits but not with any details of how that behavior is achieved by means of an implementation…
-- Programming with Abstract Data Types, by Barbara Liskov and Stephen Zilles

Procedural and data abstractions

Procedural abstraction:
- abstract from implementation details of procedures (methods)
- the specification is the abstraction
- satisfy the specification with an implementation

Data abstraction:
- abstract from details of data representation
- a way of thinking about programs and design

Abstract Data Type (ADT)
- invented by Barbara Liskov in the 1970s
- one of the fundamental ideas of computer science
- reduces data abstraction to procedural abstraction

Why we need Data Abstractions (ADTs)

Hard to always choose the right data structures ahead of time:
- hard to know ahead of time what will be too slow
- programmers are “notoriously” bad at this (Liskov)

ADTs give us the freedom to change data structures later
- data structure details are hidden from the clients

Often best to start your design by designing data
- first, decide what **operations** will be permitted on the data (for clients)
- next, decide how the data will be **organized** (data structures) – see CSE 332 & CSE 344
- lastly, write the **code**

Is everything an ADT?
- Purpose of an ADT is to hide the representation details
- Some classes are not trying to hide their representation
  - Example: `Pair` with fields `first` and `second`
  - representation is very unlikely to change
  - reasonable to expose every field via a method
- Some classes do not have a representation
  - they are more “processes” than data
  - Example: `Math` with various mathematical methods
  - it may store data, but the client does not need to think about it

ADTs in Java

An ADT is a set of **operations**

ADT abstracts from the *organization* to the *meaning* of data
- details of data structures are hidden from the client
- clients see only the operations that are provided
- hide details of data structures such as

```java
class RightTriangle {
    float base, altitude;
}
```

```java
class RightTriangle {
    float hypot, angle;
}
```

Think of each object as a mathematical triangle
- usable via a set of operations: `create`, `getBase`, `getArea`, ...
- force clients to use operations to access data
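As a small illustration of forcing clients to go through operations, here is a hedged sketch of what a RightTriangle ADT might look like with the representation hidden; the exact field and method names are only one possible choice:

```java
/** An immutable right triangle, usable only through its operations. */
public class RightTriangle {
    // One possible hidden representation; it could be swapped for
    // hypot/angle without affecting clients.
    private final float base, altitude;

    private RightTriangle(float base, float altitude) {
        this.base = base;
        this.altitude = altitude;
    }

    /** Creator: returns a right triangle with the given legs. */
    public static RightTriangle create(float base, float altitude) {
        return new RightTriangle(base, altitude);
    }

    /** Observer: the length of the base. */
    public float getBase() { return base; }

    /** Observer: the area of the triangle. */
    public float getArea() { return base * altitude / 2; }
}
```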
Another Example

```java
class Point {
    public float x;
    public float y;
}
```

```java
class Point {
    public float r;
    public float theta;
}
```

Different representations of the same concept
- both classes implement the concept “2D point”

Goal of the Point ADT is to express the sameness:
- clients should think in terms of the concept “2D point”
- work with objects via operations, not the representation
- produces clients that can work with either representation

Abstract data type = objects + operations

We call this an “abstraction barrier”
- a good thing to have and not cross (a.k.a. violate)
- prevents clients from depending on implementation details

Benefits of ADTs

If clients are forced to respect data abstractions, ...
• Can change how data is stored (and data structures)
  – fix bugs
  – improve performance
• Can also change algorithms
• Can delay decisions on how the ADT is implemented

Concept of 2D point, as an ADT

```java
class Point {
    // A 2D point exists in the plane, ...
    public float x();
    public float y();
    public float r();
    public float theta();

    // ... can be created, ...
    public Point();  // new point at (0,0)
    public Point centroid(Set<Point> points);

    // ... can be moved, ...
    public void translate(float delta_x, float delta_y);
    public void scaleAndRotate(float delta_r, float delta_theta);
}
```

## Specifying an ADT

<table>
<thead>
<tr>
<th>Immutable</th>
<th>Mutable</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. overview</td>
<td>1. overview</td>
</tr>
<tr>
<td>2. abstract state</td>
<td>2. abstract state</td>
</tr>
<tr>
<td>3. creators</td>
<td>3. creators</td>
</tr>
<tr>
<td>4. observers</td>
<td>4. observers</td>
</tr>
<tr>
<td>5. producers</td>
<td>5. producers (rare)</td>
</tr>
<tr>
<td></td>
<td>6. mutators</td>
</tr>
</tbody>
</table>

- Creators: return new ADT values (e.g., Java constructors)
- Observers / Getters: return information about an ADT
- Producers: ADT operations that return new values of the ADT
- Mutators: modify a value of an ADT

- No information about the implementation details
  - the latter is called the “concrete representation”
- Note that **Point** has both field $x$ and method $x()$
  - appears since it is part of the “2D point” concept
  - we are still able to change representations

Specifying an ADT

• Need a way to write specifications for these procedures
  – need a vocabulary for talking about what the operations do (other than referencing the actual implementation)
• Use “math” (when possible), not actual fields, to describe the state
  – an abstract description of a state is called an abstract state
  – describes what the state “means”, not the implementation
  – gives clients an abstract way to think about the state
  – each operation is described in terms of “creating”, “observing”, “producing”, or “mutating” the abstract state
• For familiar ideas from math (point, triangle, number, set, etc.), we can use those concepts as our abstract state
  – otherwise, we need to invent a concept for them
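Before turning to the Poly example, here is a hedged sketch of the abstraction barrier discussed above in action: client code written only against the Point operations, which would keep working whether Point internally stores (x, y) or (r, theta). The `distance` helper is invented purely for illustration:

```java
// Client code that depends only on the Point operations, never on its fields.
static float distance(Point p, Point q) {
    float dx = p.x() - q.x();
    float dy = p.y() - q.y();
    return (float) Math.sqrt(dx * dx + dy * dy);
}
```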
Poly (immutable): overview

```java
/**
 * A Poly is an immutable polynomial with integer coefficients.
 * A typical Poly is c_0 + c_1 x + c_2 x^2 + ...
 */
class Poly {
```

Overview: provide high-level information about the type
- state if immutable (the default is not)
- define abstract states for use in operation specifications
  • easy here, but sometimes difficult — always vital!
- give an example (reuse it in operation definitions)

Poly: creators

```java
// effects: makes a new Poly = 0
public Poly()

// effects: makes a new Poly = cx^n
// throws: NegExponentException if n < 0
public Poly(int c, int n)
```

Creators - create a new object

Note: the Javadoc above omits many details...
- should be /** ... */ not // ...
- should be @spec.effects not effects

Poly: observers

```java
// returns: the degree of this polynomial,
//   i.e., the largest exponent with a non-zero coefficient.
//   Returns 0 if this = 0.
public int degree()

// returns: the coefficient of the term of this polynomial whose exponent is d
// throws: NegExponentException if d < 0
public int coeff(int d)
```

Observers - obtain information about objects of that type
• Specification uses the abstract state from the overview
• **Never** modifies the abstract state

Poly: producers

```java
// returns: this + q
public Poly add(Poly q)

// returns: this * q
public Poly mul(Poly q)

// returns: -this
public Poly negate()
```

Producers - create other objects of the same type
• Common in immutable types like `java.lang.String`
  - `String substring(int beginIndex, int endIndex)`
• No side effects - **never** modify the abstract state of existing objects

```java
Poly x = new Poly(4, 3);
Poly y = new Poly(5, 3);
Poly z = x.add(y);
System.out.println(z.coeff(3)); // prints 9
```

IntSet (mutable): overview and creators

```java
// Overview: An IntSet is a mutable, unbounded set of integers.
// A typical IntSet is { x_1, ..., x_n }.
class IntSet {

    // effects: makes a new IntSet = {}
    public IntSet()
```

IntSet: observers

```java
// returns: true if and only if x is in this set
public boolean contains(int x)

// returns: the cardinality of this set
public int size()

// returns: some element of this set
// throws: EmptyException when size()==0
public int choose()
```

IntSet: mutators

```java
// modifies: this
// effects: change this to this + {x}
public void add(int x)

// modifies: this
// effects: change this to this - {x}
public void remove(int x)
```

Mutators - modify the abstract state of the object
• Rarely modify anything (available to clients) other than this
  - list this in the modifies clause
• Typically have no return value
  - “do one thing and do it well”
  - (sometimes return the “old” value that was replaced)

Mutable ADTs may have producers too, but that is less common

Specifying an ADT

Different types of methods:
1. creators
2. observers
3. producers
4. mutators (if mutable)

Described in terms of how they change the **abstract state**
- an abstract description of what the object means
- difficult (unless the concept is already familiar) but vital
- specs have no information about the concrete representation
- leaves us free to change those in the future
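To tie the IntSet mutator specs above back to the abstract state, here is a short client sketch; it assumes only the operations listed above and nothing about the representation:

```java
IntSet s = new IntSet();               // abstract state: {}
s.add(3);                              // {3}
s.add(5);                              // {3, 5}
s.add(3);                              // still {3, 5} -- sets have no duplicates
System.out.println(s.size());          // prints 2
System.out.println(s.contains(5));     // prints true
s.remove(3);                           // {5}
System.out.println(s.contains(3));     // prints false
```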
Implementing a Data Abstraction (ADT)

To implement an ADT:
- select the representation of instances
- implement operations in terms of that representation

Choose a representation so that:
- it is possible to implement the required operations
- the most frequently used operations are efficient / simple / ...
- abstraction allows the rep to change later
  - almost always better to start simple

Then use reasoning to verify the operations are correct
- two intellectual tools are helpful for this...

Data abstraction outline
- ADT specification: the abstract states
- Abstraction function (AF): the relationship between the ADT specification and the implementation
- Representation invariant (RI): the relationship among the implementation fields (the fields in our Java class)

Connecting implementations to specs

For implementers / debuggers / maintainers of the implementation:

**Representation Invariant**: maps Object → boolean
- defines the set of valid concrete values
- must hold before and after any public method is called
- no object should **ever** violate the rep invariant
  - such an object has no useful meaning

**Abstraction Function**: maps Object → abstract state
- we’ll discuss this more next time!

Example: Circle

```java
/** Represents a mutable circle in the plane. For example,
 *  it can be a circle with center (0,0) and radius 1. */
public class Circle {
    // Rep invariant: center != null and rad > 0
    private Point center;
    private double rad;

    // Abstraction function:
    //   AF(this) = a circle with center at this.center
    //              and radius this.rad
    // ...
}
```

Example: Circle 2

```java
/** Represents a mutable circle in the plane. For example,
 *  it can be a circle with center (0,0) and radius 1. */
public class Circle {
    // Rep invariant: center != null and edge != null
    //                and !center.equals(edge)
    private Point center, edge;

    // Abstraction function:
    //   AF(this) = a circle with center at this.center
    //              and radius this.center.distanceTo(this.edge)
    // ...
}
```

Example: Polynomial

```java
/** An immutable polynomial with integer coefficients.
 *  Examples include 0, 2x, and x + 3x^2 + 5x. */
public class IntPoly {
    // Rep invariant: coeffs != null
    private final int[] coeffs;

    // Abstraction function:
    //   AF(this) = sum of this.coeffs[i] * x^i
    //              for i = 0 .. this.coeffs.length
    // ... coeff, degree, etc.
}
```

```java
/** An immutable polynomial with integer coefficients.
 *  Examples include 0, 2x, and x + 3x^2 + 5x. */
public class IntPoly {
    // Rep invariant: terms != null and
    //   no two terms have the same degree and
    //   terms is sorted in descending order by degree
    private final LinkedList<IntTerm> terms;

    // Abstraction function:
    //   AF(this) = sum of the monomials in this.terms
    // ... coeff, degree, etc.
}
```

Example: Container

```java
/** A container which can reach but not exceed a given capacity */
public class Container {
    // RI: 0 <= curr <= capacity
    private int curr;
    private int capacity;

    // requires: x > 0
    // modifies: this
    // effects: adds x to this if doing so does not exceed the capacity
    public void add(int x) {
        // {{ pre and RI }}
        // your code here
        // {{ post and RI }}
    }
}
```

Before next class...
1. Start on *Prep. Quiz: HW3* as early as possible!
   - Reminds you of integer base conversion
     - e.g., binary, decimal, hexadecimal
   - Reminds you how to submit your homework assignment
2. Enjoy the Monday holiday!
   - July 4th, U.S. Independence Day
   - No lecture
{"Source-Url": "https://courses.cs.washington.edu/courses/cse331/22su/lectures/lec05-adt-and-ri.pdf", "len_cl100k_base": 4384, "olmocr-version": "0.1.53", "pdf-total-pages": 52, "total-fallback-pages": 0, "total-input-tokens": 59051, "total-output-tokens": 6277, "length": "2e12", "weborganizer": {"__label__adult": 0.0007333755493164062, "__label__art_design": 0.0005578994750976562, "__label__crime_law": 0.0005674362182617188, "__label__education_jobs": 0.0124053955078125, "__label__entertainment": 0.00011098384857177734, "__label__fashion_beauty": 0.0002875328063964844, "__label__finance_business": 0.00026154518127441406, "__label__food_dining": 0.0006814002990722656, "__label__games": 0.0013093948364257812, "__label__hardware": 0.0008878707885742188, "__label__health": 0.0006542205810546875, "__label__history": 0.00036525726318359375, "__label__home_hobbies": 0.0001786947250366211, "__label__industrial": 0.0006198883056640625, "__label__literature": 0.0005488395690917969, "__label__politics": 0.0005588531494140625, "__label__religion": 0.0008606910705566406, "__label__science_tech": 0.004520416259765625, "__label__social_life": 0.0003211498260498047, "__label__software": 0.0030841827392578125, "__label__software_dev": 0.96826171875, "__label__sports_fitness": 0.0006933212280273438, "__label__transportation": 0.0010776519775390625, "__label__travel": 0.0003633499145507813}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 17065, 0.00776]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 17065, 0.54696]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 17065, 0.799]], "google_gemma-3-12b-it_contains_pii": [[0, 121, false], [121, 271, null], [271, 515, null], [515, 530, null], [530, 741, null], [741, 953, null], [953, 1213, null], [1213, 1538, null], [1538, 1814, null], [1814, 2109, null], [2109, 2441, null], [2441, 2465, null], [2465, 2821, null], [2821, 3075, null], [3075, 3389, null], [3389, 3821, null], [3821, 4253, null], [4253, 4742, null], [4742, 5217, null], [5217, 5651, null], [5651, 6146, null], [6146, 6159, null], [6159, 6356, null], [6356, 6785, null], [6785, 7249, null], [7249, 7444, null], [7444, 7689, null], [7689, 8203, null], [8203, 8853, null], [8853, 9486, null], [9486, 10220, null], [10220, 10652, null], [10652, 10973, null], [10973, 11351, null], [11351, 11537, null], [11537, 11737, null], [11737, 11984, null], [11984, 12100, null], [12100, 12285, null], [12285, 12539, null], [12539, 12783, null], [12783, 13136, null], [13136, 13526, null], [13526, 14027, null], [14027, 14294, null], [14294, 14738, null], [14738, 15122, null], [15122, 15545, null], [15545, 15915, null], [15915, 16338, null], [16338, 16750, null], [16750, 17065, null]], "google_gemma-3-12b-it_is_public_document": [[0, 121, true], [121, 271, null], [271, 515, null], [515, 530, null], [530, 741, null], [741, 953, null], [953, 1213, null], [1213, 1538, null], [1538, 1814, null], [1814, 2109, null], [2109, 2441, null], [2441, 2465, null], [2465, 2821, null], [2821, 3075, null], [3075, 3389, null], [3389, 3821, null], [3821, 4253, null], [4253, 4742, null], [4742, 5217, null], [5217, 5651, null], [5651, 6146, null], [6146, 6159, null], [6159, 6356, null], [6356, 6785, null], [6785, 7249, null], [7249, 7444, null], [7444, 7689, null], [7689, 8203, null], [8203, 8853, null], [8853, 9486, null], [9486, 10220, null], [10220, 10652, null], [10652, 10973, null], [10973, 11351, null], 
[11351, 11537, null], [11537, 11737, null], [11737, 11984, null], [11984, 12100, null], [12100, 12285, null], [12285, 12539, null], [12539, 12783, null], [12783, 13136, null], [13136, 13526, null], [13526, 14027, null], [14027, 14294, null], [14294, 14738, null], [14738, 15122, null], [15122, 15545, null], [15545, 15915, null], [15915, 16338, null], [16338, 16750, null], [16750, 17065, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, true], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 17065, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 17065, null]], "pdf_page_numbers": [[0, 121, 1], [121, 271, 2], [271, 515, 3], [515, 530, 4], [530, 741, 5], [741, 953, 6], [953, 1213, 7], [1213, 1538, 8], [1538, 1814, 9], [1814, 2109, 10], [2109, 2441, 11], [2441, 2465, 12], [2465, 2821, 13], [2821, 3075, 14], [3075, 3389, 15], [3389, 3821, 16], [3821, 4253, 17], [4253, 4742, 18], [4742, 5217, 19], [5217, 5651, 20], [5651, 6146, 21], [6146, 6159, 22], [6159, 6356, 23], [6356, 6785, 24], [6785, 7249, 25], [7249, 7444, 26], [7444, 7689, 27], [7689, 8203, 28], [8203, 8853, 29], [8853, 9486, 30], [9486, 10220, 31], [10220, 10652, 32], [10652, 10973, 33], [10973, 11351, 34], [11351, 11537, 35], [11537, 11737, 36], [11737, 11984, 37], [11984, 12100, 38], [12100, 12285, 39], [12285, 12539, 40], [12539, 12783, 41], [12783, 13136, 42], [13136, 13526, 43], [13526, 14027, 44], [14027, 14294, 45], [14294, 14738, 46], [14738, 15122, 47], [15122, 15545, 48], [15545, 15915, 49], [15915, 16338, 50], [16338, 16750, 51], [16750, 17065, 52]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 17065, 0.03478]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
b988190acd1086236800c0f93001091e4fb11cf7
European Public Sector Information Platform Topic Report No. 2015/11 Linked Data Validation and Quality Author: Jose Emilio Labra Gayo Published: November 2015 Table of Contents Table of Contents ........................................................................................................................................... 2 Keywords: .................................................................................................................................................. 3 Abstract/ Executive Summary: .................................................................................................................. 3 1 Introduction ................................................................................................................................................. 4 2 Linked Data ................................................................................................................................................. 6 3 Quality of Linked Data Portals .................................................................................................................. 9 4 RDF Validation .......................................................................................................................................... 11 5 Conclusions and recommendations ......................................................................................................... 14 References .................................................................................................................................................. 15 About the Author ....................................................................................................................................... 16 Copyright information ................................................................................................................................. 16 Keywords: Linked Data, Quality, Validation, RDF, SHACL, Shape Expressions Abstract/ Executive Summary: This report contains an overview of the linked data principles and the importance of linked data quality. It describes several initiatives with special emphasis on government and public sector data. Given the fact that RDF plays a central role in linked data, the report also includes a specific section on RDF validation, describing the new W3c proposal for data shapes description and validation. 1 Introduction Linked data was proposed in 2006 as a set of principles and best practices for data publishing on the Web (Heath, 2011). Since its conception, the number of linked data initiatives has been increasing and a large number of datasets have been added to the so called linked data cloud\(^1\). As can be seen in (Miller, 2010), some of those initiatives have been related to the government and public sector information domain. Following Schmachtenberg et al (2014), there were 199 datasets related to government, which represents an 18% of all the linked data cloud and an increase of 306% since 2011. However, the general adoption of linked open data has not yet arrived. The conclusions of a previous ePSI report (Dietrich, 2012) were: "Uptake by the public and private sector and real-world implementations, however clearly fall behind this academic enthusiasm for the technology. Although more and more private companies and Public Sector Bodies start embracing Linked Data Principles and Technologies it appears that real or anticipated barriers of adapting to these technologies remain as too big.". 
In practice, many prototypes and pioneer projects were proposed in academic settings and abandoned later on. Some reasons were already hinted at by (Dietrich, 2012):

- The technology appears to be complicated
- The initial investment is too big
- The expected benefits are too vague to convince stakeholders in both the private and public sector

Another possibility is that linked data projects still lack some common tools and methodologies that are available in more conventional settings to assess and validate data quality. The underlying technology, RDF, was not originally designed for safe information exchange or application integration through linked-data-based services. Some techniques that are popular in relational databases or XML, which make it possible to define data schemas and validate data against them, are not available in RDF, which leans towards the Open World Assumption, where any system can assert anything about any topic. Although that vision is very interesting for the Web of Data, it is necessary to develop techniques that allow heterogeneous data producers and consumers to coexist while they can also safely validate the data that they are producing and consuming.

---

¹ Linked data cloud: [http://lod-cloud.net/](http://lod-cloud.net/)

In this report, we survey the main approaches that have been proposed for linked data quality assessment, with special emphasis on RDF validation, which is a core part of this process. The report is structured as follows: in section 2 we review the linked data principles and give some examples of linked data initiatives related to eProcurement and public contracts. Section 3 contains some references and a justification of the importance of linked data quality. We cover the specific problem of linked data and RDF validation in section 4, and we finally present some conclusions in section 5.

2 Linked Data

The original linked data principles were proposed as:

- Use URIs as names for things.
- Use HTTP URIs so that people can look up those names.
- When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL).
- Include links to other URIs so that they can discover more things.

The first principle promotes the use of URIs (Uniform Resource Identifiers) or IRIs (Internationalized Resource Identifiers) to denote all the things that appear in the problem domain that a linked data application is modelling. It is important that those URIs are unambiguous, in the sense that they identify one specific thing and not another depending on some context. Also, if there is a declaration that two URIs represent the same thing, it is important to check that everything asserted about the first URI is consistently asserted for the second. These two properties can improve the trustworthiness of the linked data portal (Ulicny, 2015).

The second principle states that those URIs should be dereferenceable, which means that clients can look up the URI using the HTTP protocol and retrieve a description of the resource that is identified by the URI. An important quality metric for linked datasets is the percentage of dereferenceable URIs, as well as the quality of the descriptions retrieved. It is important to differentiate between URIs that represent web documents and URIs that represent concepts or real-world objects.

The third principle proposes to return useful information using web standards, and specifically mentions RDF and SPARQL.
The reason is that in the web of data, clients can be not only humans but also machines that have to process the contents automatically. It is important to provide useful information for both types of agents. In the case of humans that access the information using a web browser, the preferred data format is HTML. However, in the case of machines it is necessary to provide other formats, like RDF, that can be processed automatically and unambiguously.

The RDF graph data model, which is based on the use of URIs to represent properties, allows new data to be seamlessly aggregated and integrated into what has been called knowledge graphs or the web of data. An important aspect of RDF is the promotion of the Open World Assumption, which allows data to be aggregated without a fixed schema of allowed relations as in relational databases. New relations (represented by URIs) can be added at any time without changing anything. RDF graphs can be combined freely, since the use of URIs guarantees that connections are only made between the same entities.

---

² See Design Issues: Linked data: [http://www.w3.org/DesignIssues/LinkedData.html](http://www.w3.org/DesignIssues/LinkedData.html)

This freedom to use RDF to represent anything has a cost. Linked data producers may not describe properly the data they are publishing, and linked data consumers usually have difficulty knowing how to integrate that data in their applications. In practice, although the RDF toolset is growing, RDF has not yet been established as a popular technology for web developers and engineers. In the last decade, most data integration solutions opted for XML as the lingua franca for that purpose, and a lot of technologies emerged around XML for validation, transformation and exchange. Nowadays, JSON is gaining popularity in the web development community, which considers it easier to manipulate and process with existing tools and programming languages. The linked data principles do not depend on any particular data format as long as it is machine-processable. Furthermore, although the RDF format was originally based on XML, there are other RDF formats like Turtle, which is more human friendly, and JSON-LD³, which is based on JSON.

The fourth principle of linked data promotes the use of links from resources to other resources so linked data consumers can discover new information. Those links are essential for the linked data project, as they represent the glue between different datasets.

As can be seen, the linked data principles are quite intuitive and easy to understand, and there have been a lot of projects and initiatives that successfully embraced the linked data project⁴.

As a running example of linked data initiatives of special interest for Public Sector Information, we will consider the eProcurement and public contracts domain (Ordóñez et al, 2012; Álvarez et al, 2014). Some pioneer initiatives to represent procurement notices as linked data were the MOLDEAS project (Álvarez et al, 2012), a prototype developed in collaboration with the EuroAlert service⁵, and the LOTED (Linked Open Tenders Electronic Daily) project⁶, which collects tenders in the European Union coming from the Tenders Electronic Daily portal. In the context of the LOD2 European Project, a Public Contract Filing Application was proposed, aimed at both contract authorities issuing calls for tenders and bidders responding to those calls.
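Returning to the dereferenceability metric mentioned under the second principle, a minimal Java sketch of such a check might look as follows; it uses only standard JDK classes, and the example URI is purely illustrative:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class DereferenceCheck {
    /** Returns true if the URI dereferences to an RDF description (here: Turtle). */
    static boolean dereferencesToRdf(String uri) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(uri).openConnection();
            conn.setRequestMethod("GET");
            // Ask for an RDF serialization via content negotiation.
            conn.setRequestProperty("Accept", "text/turtle");
            conn.setInstanceFollowRedirects(true);
            int status = conn.getResponseCode();
            String type = conn.getContentType();
            return status == 200 && type != null && type.contains("text/turtle");
        } catch (Exception e) {
            return false; // unreachable or malformed URIs count as non-dereferenceable
        }
    }

    public static void main(String[] args) {
        // Illustrative URI only; any resource URI from a linked data portal could be used.
        System.out.println(dereferencesToRdf("http://example.org/resource/c23"));
    }
}
```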
---

⁵ Euroalert service: [http://euroalert.net/](http://euroalert.net/)

⁶ LOTED project: [http://loted.eu/](http://loted.eu/)

The use of linked data can help in the matchmaking process of finding similar contracts (Necaský, 2014). A good example of a linked data portal is the Italian public contracts service provided by Nexa⁷, which translates to Linked Open Data the XML data released by the Italian Public Sector bodies following the “anti-corruption” Act (law no. 190/2012). The project offers a linked dataset with a SPARQL endpoint and a browseable interface.

⁷ Nexa public contracts project: http://nexa.polito.it/public-contracts

3 Quality of Linked Data Portals

A very simple and clear attempt to assess open data quality was the 5-star model⁸ proposed by Tim Berners-Lee in 2010 as a way to encourage governments to adopt the linked data principles. The 5-star model classifies open data initiatives according to the type of open data that they publish:

- One star: available on the web (whatever format) but with an open licence, to be Open Data
- Two stars: available as machine-readable structured data (e.g., Excel instead of an image scan of a table)
- Three stars: as (2) plus a non-proprietary format (e.g., CSV instead of Excel)
- Four stars: all the above plus the use of open standards from W3C (RDF and SPARQL) to identify things, so that people can point at your data
- Five stars: all the above, plus: link your data to other people’s data to provide context

The previous classification was very useful to motivate the adoption of linked data in different projects. However, once a project adopts the linked data model, it does not go into further detail to assess the quality of its linked data. Assessing the quality of linked data portals must take into account different aspects, like maintainability, sustainability, usability, etc., that can be measured for data and web quality in general. As expressed in (Heath, 2011): "Linked Data might be outdated, imprecise, or simply wrong. Therefore, Linked Data applications should consider all RDF statements that they discover on the Web as claims by a specific source rather than as facts. Applications should contain a module to filter RDF spam and prefer data from sources that are known for good quality to data from others."

The LOD2 project proposed the Linked Data Stack⁹ as a set of tools to manage the life-cycle of Linked Data. The life cycle was divided into several stages, like authoring, interlinking, enrichment, etc. One of those stages is called Quality Analysis, and several tools are proposed for it, like RDFUnit for RDF validation and Sieve for quality assessment and fusion.

There is a growing interest in finding metrics and methodologies to assess the quality of linked data portals, which can be seen in the two international workshops organized about linked data quality¹⁰.

---

⁸ 5 stars model: http://5stardata.info

⁹ Linked Data Stack: http://stack.linkeddata.org/

Zaveri et al (2012) include a systematic survey of the literature related to linked data quality and identify a set of data quality dimensions that can be applied to assess the quality of linked data. The dimensions are classified in 4 groups, and each dimension is accompanied by several metrics. The dimensions are:

- **Accessibility**: availability, licensing, interlinking, security and performance.
- **Intrinsic**: syntactic validity, semantic accuracy, consistency, conciseness and completeness.
- **Contextual**: relevancy, trustworthiness, understandability and timeliness.
- **Representational dimensions**: representational conciseness, interoperability, interpretability and versatility.

Some recent initiatives have appeared to inspect and clean linked datasets. For example, Loupe¹¹ is a tool which can be used to inspect which vocabularies (classes and properties) are used, including statistics and frequent triple patterns, and LOD Laundromat¹² provides access to all Linked Open Data (LOD) in the world by crawling the LOD cloud and converting all its contents in a standards-compliant way, removing all data stains such as syntax errors, duplicates, and blank nodes.

---

¹⁰ Workshops on Linked Data Quality: http://ldq.semanticmultimedia.org/ and

¹¹ Loupe: http://loupe.linkeddata.es/loupe/

¹² LOD Laundromat: http://lodlaundromat.org/

4 RDF Validation

RDF is a central part of any linked data project to provide information that can automatically be processed by machines. It is based on simple statements of the form "subject – predicate – object", where the predicates are uniquely identified by an IRI. An RDF dataset consists of a set of statements that can describe some information on a given domain. There have been several notations for RDF, like Turtle, RDF/XML, N-Triples, etc. Using Turtle¹³, some information about a public contract could, for example, be described as:

```
:c23 rdfs:label "Maintenance service" ;
     time:year 2015 ;
     pc:agreedPrice 259870 ;
     pc:tender :e45 ;
     pc:tender :e47 .

:e45 rdf:type gr:BusinessEntity ;
     rdfs:label "Company ABC" .

:e47 rdf:type gr:BusinessEntity ;
     rdfs:label "Company XYZ" .
```

Figure 1. Example of RDF data represented in Turtle

Figure 1 represents in RDF a public contract :c23 with a property rdfs:label with value "Maintenance service" that has been awarded in 2015 at a price of 259870 and has two tenders: the entity :e45 and the entity :e47, which are both of type gr:BusinessEntity. The previous information can be represented using the graph in figure 2.

Figure 2. Example of RDF data that represents a public contract

¹³ Turtle notation is intended for human readability. It enables the replacement of full IRIs by qualified names preceded by an alias and a colon. The aliases employed in this example have been taken from http://prefix.cc.

One of the main advantages of RDF is that it is possible to automatically merge data from different RDF graphs, based on the universal use of URIs to represent entities. At the same time, the flexibility of the graph model enables different systems to easily reuse data from heterogeneous sources. Although the benefits of RDF for data representation and integration are undisputable, its adoption by everyday programmers and system architects, who care more about creating and accessing well-structured data in databases than about inference, has not yet taken off.

In 2013, an RDF validation workshop¹⁴ was organized by the W3C to gather the requirements of the different stakeholders. A conclusion of the workshop was that, although SPARQL could be used to validate RDF, there was a need for a more high-level and concise language. Shape Expressions (Prud'hommeaux et al, 2014) emerged as such a language. As an example, figure 3 contains a description of the previous RDF data using Shape Expressions¹⁵.

```
<PublicContract> {
  rdfs:label     xsd:string ,
  time:year      xsd:year ,
  pc:agreedPrice xsd:integer ,
  pc:tender      @<BusinessEntity> +
}

<BusinessEntity> {
  rdf:type   { gr:BusinessEntity } ,
  rdfs:label xsd:string
}
```

Figure 3. Simplified Public contracts schema represented in ShEx
The previous definition declares the shape of a `<PublicContract>` as having a property `rdfs:label` whose value must be of type `xsd:string`, two other properties `time:year` and `pc:agreedPrice` with values of type `xsd:year` and `xsd:integer`, and one or more properties `pc:tender` whose values must be nodes with shape `<BusinessEntity>`. Finally, a `<BusinessEntity>` has type `gr:BusinessEntity` and an `rdfs:label` of type `xsd:string`.

The Shape Expressions language has been designed as an intuitive and human-friendly high-level language for RDF validation. There are several implementations available¹⁶ and some online validators¹⁷. Shape Expressions can even be represented using data model diagrams as in figure 4.

¹⁴ RDF validation workshop: [https://www.w3.org/2012/12/rdf-val/](https://www.w3.org/2012/12/rdf-val/)

¹⁵ The example can be tested online using the RDFShape validator available at: [http://goo.gl/3pahF0](http://goo.gl/3pahF0)

¹⁶ More information about Shape Expressions and implementations is available at: [http://shex.io/](http://shex.io/)

¹⁷ RDFShape: online RDF validator available at: [http://rdfshape.herokuapp.com](http://rdfshape.herokuapp.com)

In 2014, the W3C chartered a working group called RDF Data Shapes to produce a language for defining structural constraints on RDF graphs. The language has been called SHACL, and in October 2015 a first public working draft was published¹⁸. Figure 5 contains a description of the simplified public contracts data model using SHACL. Notice that, as SHACL is based on RDF, the example uses Turtle notation. The Working Group is currently considering the use of a more human-friendly syntax for SHACL inspired by Shape Expressions.

```turtle
<PublicContract> a sh:Shape ;
  sh:property [
    sh:predicate rdfs:label ;
    sh:minCount 1 ; sh:maxCount 1 ;
    sh:dataType xsd:string ] ;
  sh:property [
    sh:predicate time:year ;
    sh:minCount 1 ; sh:maxCount 1 ;
    sh:dataType xsd:year ] ;
  sh:property [
    sh:predicate pc:agreedPrice ;
    sh:minCount 1 ; sh:maxCount 1 ;
    sh:dataType xsd:integer ] ;
  sh:property [
    sh:predicate pc:tender ;
    sh:minCount 1 ;
    sh:valueShape <BusinessEntity> ] .

<BusinessEntity> a sh:Shape ;
  sh:property [
    sh:predicate rdf:type ;
    sh:minCount 1 ; sh:maxCount 1 ;
    sh:hasValue gr:BusinessEntity ] ;
  sh:property [
    sh:predicate rdfs:label ;
    sh:minCount 1 ; sh:maxCount 1 ;
    sh:dataType xsd:string ] .
```

Figure 5. Simplified Public contracts schema represented in SHACL

---

¹⁸ SHACL First Public Working Draft: [http://www.w3.org/TR/shacl/](http://www.w3.org/TR/shacl/)

5 Conclusions and recommendations

Although the history of the linked data movement that emerged in 2007 has yet to be written and it is still too early to assess its global impact, several lessons can already be learnt: many linked data initiatives were developed and later abandoned, while others seem to have become established and keep their datasets active for academic and industrial reuse. The tools and techniques needed for linked data publishing are gradually maturing. However, there is still a lack of tools to measure and guarantee the quality of linked data solutions. In fact, the main piece of any linked data portal, RDF, still lacks a standard way to be described and validated.
The current work developed by the W3c Data Shapes Working group and the Shape Expressions community may help to improve RDF adoption in industrial scenarios where there is a real need to ensure the structure of RDF data, both to produce and to consume it. These initiatives can be seen as a sign of the increased maturity of RDF and the linked data project. References About the Author Dr. Jose Emilio Labra Gayo is an Associate Professor from the University of Oviedo, Spain. He has been the Dean of the School of Computer Science Engineering at the University of Oviedo from 2004 until 2012. He founded and is the main researcher of the WESO (Web Semantics Oviedo) research group. The group collaborates on practical applications of semantic web and linked open data and has been involved in several projects with industrial partners and public administrations. His research interests are Semantic Web technologies, Declarative Programming Languages and Web Engineering. He is also chair of the W3c Best practices on Multilingual Linked Open Data Community Group and is member of the W3c RDF Data Shapes Working Group. Copyright information © 2013 European PSI Platform – This document and all material therein has been compiled with great care. However, the author, editor and/or publisher and/or any party within the European PSI Platform or its predecessor projects the ePSiplus Network project or ePSINet consortium cannot be held liable in any way for the consequences of using the content of this document and/or any material referenced therein. This report has been published under the auspices of the European Public Sector information Platform. The report may be reproduced providing acknowledgement is made to the European Public Sector Information (PSI) Platform. The European Public Sector Information (PSI) Platform is funded under the European Commission eContentplus programme.
{"Source-Url": "http://labra.weso.es/pdf/2015_LinkedDataQualityEPSI.pdf", "len_cl100k_base": 4743, "olmocr-version": "0.1.50", "pdf-total-pages": 16, "total-fallback-pages": 0, "total-input-tokens": 34573, "total-output-tokens": 6429, "length": "2e12", "weborganizer": {"__label__adult": 0.0003654956817626953, "__label__art_design": 0.0011310577392578125, "__label__crime_law": 0.00152587890625, "__label__education_jobs": 0.0029773712158203125, "__label__entertainment": 0.00014126300811767578, "__label__fashion_beauty": 0.0002453327178955078, "__label__finance_business": 0.006916046142578125, "__label__food_dining": 0.0004520416259765625, "__label__games": 0.0005474090576171875, "__label__hardware": 0.0008478164672851562, "__label__health": 0.0006966590881347656, "__label__history": 0.0009207725524902344, "__label__home_hobbies": 0.0001569986343383789, "__label__industrial": 0.0010919570922851562, "__label__literature": 0.0006604194641113281, "__label__politics": 0.0019397735595703125, "__label__religion": 0.0005021095275878906, "__label__science_tech": 0.2052001953125, "__label__social_life": 0.0002419948577880859, "__label__software": 0.0855712890625, "__label__software_dev": 0.6865234375, "__label__sports_fitness": 0.0002015829086303711, "__label__transportation": 0.0010080337524414062, "__label__travel": 0.0003082752227783203}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25771, 0.02212]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25771, 0.45274]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25771, 0.87084]], "google_gemma-3-12b-it_contains_pii": [[0, 163, false], [163, 1896, null], [1896, 2401, null], [2401, 4828, null], [4828, 5430, null], [5430, 7920, null], [7920, 10880, null], [10880, 11395, null], [11395, 13696, null], [13696, 15061, null], [15061, 16584, null], [16584, 19154, null], [19154, 20746, null], [20746, 21857, null], [21857, 24242, null], [24242, 25771, null]], "google_gemma-3-12b-it_is_public_document": [[0, 163, true], [163, 1896, null], [1896, 2401, null], [2401, 4828, null], [4828, 5430, null], [5430, 7920, null], [7920, 10880, null], [10880, 11395, null], [11395, 13696, null], [13696, 15061, null], [15061, 16584, null], [16584, 19154, null], [19154, 20746, null], [20746, 21857, null], [21857, 24242, null], [24242, 25771, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25771, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25771, null]], "pdf_page_numbers": [[0, 163, 1], [163, 1896, 2], [1896, 2401, 3], [2401, 4828, 4], [4828, 5430, 5], [5430, 7920, 6], [7920, 10880, 7], [10880, 11395, 8], [11395, 13696, 9], [13696, 15061, 10], [15061, 16584, 11], [16584, 
19154, 12], [19154, 20746, 13], [20746, 21857, 14], [21857, 24242, 15], [24242, 25771, 16]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25771, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
ae591a755c1443350b6591ae7d50df2394955b9b
Using Aspect-Oriented Programming to Enforce Architecture Paulo Merson September 2007 TECHNICAL NOTE CMU/SEI-2007-TN-019 Software Architecture Technology Initiative Unlimited distribution subject to the copyright. # Table of Contents Abstract v 1 Introduction 1 2 Compile-Time Declarations 2 3 Enforcing the Architecture 3 3.1 Enforcing Architectural Constraints Using AOP 4 3.2 A Concrete Example 6 3.3 Enforcing Patterns 8 4 Conformance to Coding Policies 10 5 Conclusion 13 References 15 # List of Figures <table> <thead> <tr> <th>Figure</th> <th>Description</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>Figure 1:</td> <td>Modules in a Layered Architecture</td> <td>3</td> </tr> <tr> <td>Figure 2:</td> <td>Layered Design from Figure 1 Showing the Corresponding Java Packages</td> <td>5</td> </tr> <tr> <td>Figure 3:</td> <td>Runtime View of the Architecture of Duke’s Bank Application [Bodoff 2007]</td> <td>8</td> </tr> <tr> <td>Figure 4:</td> <td>Abstract Factory Design Pattern [Gamma 1995] (Adapted)</td> <td>9</td> </tr> </tbody> </table> Abstract Using aspect-oriented programming (AOP), software developers can define customized compile-time error or warning messages that are issued when the code contains join points that match specified pointcuts. These customized messages are generated by compile-time declarations, which are an extremely simple but powerful AOP mechanism. Declarations that look for nonvalid interactions between modules can be used for architecture enforcement. Coding policies, best practices, design patterns, and code-naming conventions can also be enforced. Compile-time declarations operate as an additional verification in the build process, but they do not affect the compiled application and can be turned on and off at any time. That feature makes this approach an automated and nondisruptive solution for architecture enforcement and a risk-free first step towards AOP adoption. 1 Introduction Aspect-oriented programming (AOP) is a programming paradigm that facilitates modularization of crosscutting concerns. The AOP term and concept originated at Xerox PARC in the 1990s [Kiczales 1997]. AOP is gathering momentum in the software engineering community. On the research front, researchers actively investigate issues in the broader discipline of aspect-oriented software development. Research topics include type systems for aspects, composition models and operators for aspects, architecture design, requirements engineering, and the modeling and visualization of aspects. On the practitioner front, tools, frameworks, and aspect libraries are evolving fast with respect to usability and reliability. An active community of developers is enjoying the benefits of AOP in projects that span various business segments and development platforms.¹ Practitioners discover new uses for aspects every day. The goal of this report is to show, through examples, how you can use AOP to ensure • conformance to architectural design • the proper use of design patterns and programming best practices • conformance to coding policies and naming conventions The audience for this report consists of architects and developers who are familiar with AOP concepts. All the examples use the AspectJ syntax² [Xerox 2003]. The report is structured as follows: Section 2 describes the static AOP compile-time declaration mechanism. Section 3 briefly introduces the architecture conformance challenge and then shows how compile-time declarations can be used to enforce architectural constraints. 
Section 4 provides various examples of coding policies and best practices that can be enforced with AOP. In addition, that section describes how AOP can enforce naming conventions. Section 5 provides some concluding remarks.

---

¹ You can find examples of applications of AOP in the industry track of the annual Aspect-Oriented Software Development (AOSD) Conference and in emails to the aspectj-users@eclipse.org mailing list. To access those emails, go to http://www.eclipse.org/aspectj/userlists.php.

² To implement and test the examples shown in this report in your Java project, follow these steps:
• Install AspectJ on your machine.
• Copy and paste all code snippets into a single public aspect (e.g., public aspect Enforcement {...}). Then, save the file—for example, as Enforcement.aj.
• Change the aspect code to target the packages of your project where applicable. (The examples in this report use com.foo.proj.)
• Compile the Java code and the aspect together using the AspectJ compiler.

2 Compile-Time Declarations

AOP mechanisms can use dynamic or static crosscutting. With dynamic crosscutting, at compile time or load time, aspect code is added to the target units through weaving at specified join points. Logging is a typical example of a crosscutting concern that can be implemented using dynamic crosscutting—calls to log methods are inserted through weaving at the beginning of methods whose execution should be logged. Dynamic crosscutting adds or modifies the executable code and hence the behavior of a program. In this report, we won’t use dynamic crosscutting.

Static crosscutting modifies the static structure of the types in the application and their compile-time behavior [Laddad 2003]. It can be used, for example, to

- add a method `void init(ServletConfig config)` with standard initialization code to all classes that implement the `javax.servlet.Servlet` interface in a given project. This mechanism is usually referred to as intertype member declaration [AspectJ 2003, Gradecki 2003] or member introduction [Laddad 2003].
- make all classes whose name ends in the letters “PK” (for “primary key”) implement the `java.io.Serializable` interface. This static crosscutting mechanism is called type-hierarchy modification [Laddad 2003].
- treat the checked exception `java.io.IOException` as an unchecked exception on all calls to `java.io.FileInputStream.close()`. This mechanism is exception softening [Gradecki 2003, Laddad 2003].

The other application of static crosscutting is the introduction of compile-time errors or warnings when join points that match the specified pointcut are found. This mechanism is generally called compile-time declaration or custom compilation messages, and it is the AOP mechanism used in this report for architecture enforcement.

As an example, suppose you are using JUnit³ for automated unit testing and a policy states that all test case classes should have the prefix “Test.” The code snippet below, using AspectJ syntax, causes the compiler to issue a warning if it finds any class under package `com.foo.proj` that does not follow that rule:

```
declare warning : staticinitialization(junit.framework.TestCase+)
                  && !staticinitialization(com.foo.proj..Test*)
                  : "JUnit test cases should start with 'Test'";
```
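Following the same pattern, here is a second, purely illustrative declaration (not taken from the report itself; it reuses the com.foo.proj package from the examples above) that flags direct console output in application code:

```
declare warning : (call(* java.io.PrintStream.println(..))
                   || call(* java.io.PrintStream.print(..)))
                  && within(com.foo.proj..*)
                  : "Use the project's logging facility instead of System.out";
```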
Declaring compile-time errors and warnings this way is less intrusive, because the target code is not modified in any form. No new code is woven as in dynamic crosscutting, and no type is altered as in intertype member declaration or hierarchy modification. This fact brings a special value to compile-time declarations. If they are added to a project, they can be turned on and off, and the compiled code remains the same.

---

³ For more information about JUnit, go to www.junit.org.

3 Enforcing the Architecture

The diagram in Figure 1 shows the top-level decomposition of an application into four layers. The architecture follows the basic design principle of separation of concerns. The User Interface layer has modules that render the screens and handle presentation logic and dialog flow. The implementation of this layer will vary substantially depending on the technology used (e.g., Web-based user interface [UI], Web 2.0, Windows application, Eclipse-based UI). The Core Logic layer contains the modules that implement the business logic of the system and that stay less dependent on the technology. Modules in the Data Access layer implement the logic to access the relational database, including object-relational mapping and classes that contain SQL statements. This layer allows the Core Logic layer to be independent of table schemas and the peculiarities of particular types of databases. Finally, the JDBC layer is the standard Java Database Connectivity (JDBC) application program interface (API).⁴ It consists of off-the-shelf libraries that can be used uniformly to access different relational databases, such as Oracle or Microsoft SQL Server.

**Figure 1: Modules in a Layered Architecture**

The dependency between layers is labeled as “can use.” This is the typical relation in layered designs and represents the fact that a module in the upper layer is allowed to use any of the public facilities provided by the lower layer [Clements 2003]. The “can use” relation is flexible—it doesn’t identify dependencies between specific modules that live inside each layer. In subsequent refinements of the architecture, these dependencies become explicit. Nonetheless, the top-level architectural design in Figure 1 imposes important restrictions: a module inside the User Interface layer is not allowed to use a module in the Data Access or JDBC layers, a module in Data Access can’t use a module in Core Logic, and so on.

---

⁴ For more information, go to http://java.sun.com/jdbc.

The layered architecture was created by the architect to satisfy modifiability, portability, and testability requirements. If the code introduces layer bridging that is not conformant to the architecture, these goals may be compromised. During implementation and maintenance, programmers sometimes introduce dependencies in the code that don’t follow the original architectural design. Enforcing that the code continues to conform to the architectural design is a major challenge, and, in fact, failing to do so causes many common software problems [Brown 1998]. There are at least five approaches that help to enforce the architecture or at least check for conformance between architecture and code:

- **code inspections**: Code reviews have a very positive impact on software quality and are more efficient than testing with respect to detecting defects [Humphrey 1995]. However, this is a manual process. Extensive code reviews for checking if the code follows the architecture take time and require the reviewer to have a solid understanding of the architecture, which is not always the case.
- **architecture reconstruction**: This consists of obtaining architectural representations by extracting information from implementation artifacts (e.g., source code, deployment descriptors) or traces of the system execution [Kazman 2002]. Reconstructed architectural views can then be compared with the original intended design to identify mismatches. Recovering the architecture to verify conformance with the original design is costly, but architecture reconstruction has other benefits, such as producing detailed and up-to-date architecture depictions. - **model driven architecture (MDA)**: If the MDA process (as described by Kleppe, Warmer, and Bast [Kleppe 2003]) is followed, code is generated by an MDA tool based on designs typically expressed in UML. Even if the code is later modified directly, the tool usually allows reversing it back to design without losing the modifications. Therefore, in theory, architecture conformance is easy to achieve, because code and design can be kept in synch by the MDA tool. - **enforcement tools**: Tools that help enforce that the implementation follows the architecture design are already available. Examples include Lattix, Sotograph, and Structure101. - The other alternative, which will be described next, is the use of AOP. ### 3.1 ENFORCING ARCHITECTURAL CONSTRAINTS USING AOP AOP lets us specify locations in the source code called *join points*. Some examples of join points are the invocation of a method or constructor; the declaration of a class, method, or constructor; and access to a member variable of a class. Wildcard patterns can be used to express a set of join points in the target code. For example, `call(* com.foo.proj..*.set*(String))` represents all calls to methods that --- 5 For more information on Lattix, go to www.lattix.com. 6 For more information on Sotograph, go to www.software-tomography.com. 7 For more information on Structure101, go to www.headwaysoftware.com. • return any data type • reside in any class that is part of package com.foo.proj or any subpackage • start with “set” (e.g., setName) • take a String object as an argument There are also constructs that delimit a lexical scope in the code. For example, within(com.foo.proj.ui..*Dialog) represents the code in all classes that • reside in package com.foo.proj.ui or any subpackage • end with Dialog (e.g., PlaceOrderDialog) These AOP mechanisms can be used to check whether there are relations in the code not prescribed by the architectural design. Going back to the example in Figure 1, the layers will eventually be implemented in Java as a set of Java packages. Figure 2 shows the same layered design with the actual names of the Java packages implementing the layers. Knowing the design restrictions imposed by the original layered design in Figure 1 and knowing how the layers map to Java packages in the code base (Figure 2), it is possible to create compile-time declarations to enforce the layered design. For example, the following aspect checks at compile time that business logic modules in the Core Logic layer do not make explicit calls to UI modules: ```java public aspect Enforcement { public pointcut inCore() : within(com.foo.proj.core..*); } ``` public pointcut callToUi() : call(* com.foo.proj.ui..*+.*(..)) || call(com.foo.proj.ui..new(..)); declare warning : inCore() && callToUi() : "Core logic layer can’t have calls to UI layer"; } In this aspect, there are two pointcuts: inCore and callToUi. A pointcut is simply a named construct that describes a set of join points. 
Pointcuts can be referred to in compile-time declarations and other AOP constructs. The first pointcut (inCore) defines a scope in the code base that consists of all the code inside package com.foo.proj.core or any subpackage. Pointcut callToUi has two parts. The first part refers to calls to any methods in the com.foo.proj.ui package or subpackages. The second part refers to calls to any constructors (keyword new) in the same set of packages. The compile-time declaration is the statement that starts with declare. It determines that, if there is a call to a class in the UI layer anywhere in core logic packages, the compiler will show a warning on that call. To be more strict with the enforcement rules, we can use declare error instead of declare warning and generate a compile error.

Similar pointcut definitions and declare statements can be added to verify that only the dependencies depicted in Figure 2 are present in the code. Then, every time the application is built, the compiler will issue warnings if there are disallowed calls.

In addition to the architectural design, component technologies have constraints that must be satisfied by the components. These contractual obligations ensure that independently developed components can interact in predictable ways and can be deployed into standard runtime environments [Bachmann 2000]. Take, for example, the Enterprise JavaBeans (EJB) component technology. The specifications [Sun 2001] determine that a stateless session bean class must define a single ejbCreate() method that takes no arguments. Such a rule is usually enforced by a deployment tool that is part of the Java 2 Platform, Enterprise Edition (J2EE) application server suite. Other rules and restrictions are usually stated in the specifications but are not enforced by the compiler or deployment tool. For example, an EJB must not make graphical user interface (GUI) calls, must not read or write to files in the file system, must not manage threads, and must not make calls to native code. Most of these restrictions can be checked using AOP [Laddad 2003]. The following declaration can help to prevent the use of native code in EJB classes:

```java
public pointcut inEJB() : within(javax.ejb.EnterpriseBean+);

public pointcut callNative() : call(* System.loadLibrary(..))
                            || call(* System.load(..))
                            || call(* Runtime.loadLibrary(..))
                            || call(* Runtime.load(..))
                            || call(native * *.*(..));

declare error : inEJB() && callNative() : "EJBs cannot load native code";
```

3.2 A CONCRETE EXAMPLE

The J2EE 1.3 Tutorial published by Sun Microsystems [Bodoff 2007] includes an example of a multitier application called Duke’s Bank. Figure 3, a graphical representation of the Runtime view of that application’s architecture, was adapted from that tutorial. At runtime, the Web client and the application client call the session beans, the session beans invoke the entity beans, and the entity beans access the database tables on the back end. Restricting all database access to entity beans has some benefits. Portability and modifiability are improved, because changes related to porting to a new database or altering the structure of the database tables are confined to the entity beans.
Assuming that constraint was the intent of the architect, we can create a compile-time declaration to check that all database calls occur within the entity beans:

```java
public pointcut inEntityBean() : within(javax.ejb.EntityBean+);

public pointcut callToJdbc() : call(* java.sql..+.*(..))
                            || call(java.sql..new(..))
                            || call(* javax.sql..+.*(..))
                            || call(javax.sql..new(..));

declare warning : !inEntityBean() && callToJdbc() :
    "Only entity beans should access the database";
```

The `inEntityBean` pointcut delimits the scope of all entity beans. The wildcard pattern `EntityBean+` refers to any class that implements the `EntityBean` interface. This way, we get all entity beans in the code that will be compiled. Database calls would use the JDBC API and are caught by `callToJdbc`. The compile-time declaration gives a warning if there is a JDBC call that is not inside an entity bean.

Surprisingly, the compile-time declaration above, applied to the tutorial source code, reveals a discrepancy between the code and the design in Figure 3. In the implementation, the session beans also access the database directly. Perhaps these “undesigned” calls were created because the developer opted to avoid entity beans by using the JDBC for Reading pattern [Marinescu 2002] to improve performance for some operations. In any case, the declaration reveals an inconsistency between the architectural design and the code.

--- 8. In this example and those that follow, the surrounding `public aspect` declaration is removed to save space. 9. Character ‘+’ following an identifier may also denote “any subclasses” if that identifier is a class name.

3.3 ENFORCING PATTERNS

Some design patterns can also be enforced using AOP compile-time declarations. Figure 4 shows a UML class diagram that exemplifies the application of the Abstract Factory design pattern [Gamma 1995]. Class SomeScreen represents a screen of an application that should be portable across the Java Swing and SWT\(^{10}\) user interface frameworks. SomeScreen uses the WidgetFactory abstract class to create instances of the widgets (window, scrollbars, buttons, etc.) that will be displayed to the user. The factory creates the concrete widgets using either the Swing or SWT framework, based on a selection made at initialization or build time. SomeScreen and similar classes that instantiate widgets should use the abstract factory, which is what we would like to enforce. If these client classes directly instantiate concrete widget classes or call concrete factories, portability will be impaired. The code snippet below enforces the pattern:

```java
public pointcut inFactory() : within(com.foo.proj.ui.*WidgetFactory);

public pointcut callBypassingFactory() : call(com.foo.proj.ui.Window+.new(..))
                                       || call(com.foo.proj.ui.ScrollBar+.new(..));

public pointcut callToConcreteFactory() :
    call(!abstract * com.foo.proj.ui.WidgetFactory+.*(..));

declare warning : callBypassingFactory() && !inFactory() :
    "Use factory to instantiate this class.";

declare warning : callToConcreteFactory() :
    "Use abstract factory instead of concrete factory.";
```

--- 10 For more information on SWT, go to www.eclipse.org/swt.

Similarly, other patterns that restrict the interactions allowed between elements can be enforced using compile-time declarations. Examples include Mediator [Gamma 1995], Session Façade [Marinescu 2002, Alur 2003], and Data Access Object [Alur 2003].
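As one sketch of how such a pattern rule might look, the following aspect flags direct entity bean access from web-tier code, in the spirit of the Session Façade pattern. The package name com.foo.proj.web is an assumption, since the report does not name a web-tier package; unlike the earlier snippets, the surrounding aspect declaration is shown here for completeness.

```java
public aspect SessionFacadeEnforcement {

    // Code located in the (assumed) web-tier package.
    public pointcut inWebTier() : within(com.foo.proj.web..*);

    // Calls to any method of a class implementing the EntityBean interface.
    public pointcut callToEntityBean() : call(* javax.ejb.EntityBean+.*(..));

    declare warning : inWebTier() && callToEntityBean() :
        "Web tier should go through session facades, not entity beans";
}
```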
Figure 4: Abstract Factory Design Pattern [Gamma 1995] (Adapted)

4 Conformance to Coding Policies

Numerous implementation policies and best practices can be enforced using compile-time declarations. For example, it is a common convention in Java to add the suffix “Exception” to all subclasses of Exception. Here’s how it can be checked using AOP for all subclasses of Exception under package com.foo.proj:

```java
public pointcut misnamedException() : execution(Exception+.new(..))
                                   && execution(com.foo.proj..new(..))
                                   && !execution(com.foo.proj..*Exception.new(..));

declare warning : misnamedException() :
    "Subclasses of Exception should terminate in 'Exception'";
```

The difference between execution and call is subtle. The keyword execution represents join points at the body of the specified constructor or method. The keyword call represents join points wherever the specified method is called. The compile-time declaration above uses execution so that it issues a warning on any constructor of an Exception subclass with an illegal name. If it used call, the warning would appear on the calls to the constructor and hence would not be seen if the class was not being used yet.

Still with respect to exceptions, in some projects, it is recommended that all exceptions be created with an error message or a Throwable object as an argument. The following declaration alerts if any type of exception is created without arguments:

```java
public pointcut noArgsException() : call(Exception+.new());

declare warning : noArgsException() :
    "Shouldn't create exception without cause or message.";
```

It is likely that, in a GUI application, exception stack traces are directed to a log file or handled in some way by the UI layer. Thus, we want to avoid calls to printStackTrace() in the code. Likewise, print statements using the default output streams (System.out, System.err) are not desirable in production code. In practice, we may permit such calls inside main() methods that are created in some classes just for tests. Here is the compile-time declaration to check for violations of these conventions:

```java
public pointcut callToPrint() : call(* java..Throwable.printStackTrace(..))
                             || call(* System.out.print*(..))
                             || call(* System.err.print*(..));

public pointcut inMainMethod() : withincode(public static void main(String[]));

declare warning : callToPrint() && !inMainMethod() :
    "Print statements should not be in production code.";
```

Another common policy is to access member variables only through get and set methods to improve information hiding. A simple declaration identifies this kind of violation [Laddad 2003]:

```java
public pointcut accessPublicVars() : get(public !final * *) || set(public !final * *);

declare warning : accessPublicVars() :
    "Consider get/set methods instead of public member variable.";
```

Enforcement declarations can also be used with Java 5.0 metadata annotations. If you are using Apache Beehive\(^\text{11}\) to implement Web Services, you add the annotation “@WebMethod” to the methods in your Java class that will be exposed as Web Services. Here’s an example:

```java
@WebMethod
public double getQuote(@WebParam String symbol) {
    double quote = 0.00;
    // obtain quote...
    return quote;
}
```

The documentation of the @WebMethod annotation indicates that annotated methods must be public.
This rule can be enforced using AOP: ```java public pointcut nonPublicWebMethod() : execution(@WebMethod !public * *.*(..)); declare error : nonPublicWebMethod() : "Methods with @WebMethod annotation must be public"; ``` AOP can also help to enforce naming conventions. For example, the usual convention for the name of member variables in Java is to start with a lowercase letter, unless the variable is a constant. The following AOP compile-time declaration ensures that the code does not contain a non-final member variable that starts with a capital letter: ```java public pointcut varStartingWithUpperCase() : get(!final * com.foo.proj..*+.A*) || set(!final * com.foo.proj..*+.A*) || get(!final * com.foo.proj..*+.B*) || set(!final * com.foo.proj..*+.B*) || ... get(!final * com.foo.proj..*+.Z*) || set(!final * com.foo.proj..*+.Z*); declare warning : varStartingWithUpperCase() : "Non-final variables should not begin with capital letter."; ``` \(^{11}\) For more information on Apache Beehive, go to http://beehive.apache.org/. The declaration of a member variable is not an exposed join point in AspectJ. For that reason, the pointcut does not target the declaration; instead, it points to any statements where the variable is accessed for read (“get(signature)”) or write (“set(signature)”). Many other policies, rules, or best practices can be enforced with compile-time declarations. Here are some examples: - If you don’t want a specific method or class to be used anymore, but you can’t remove it because it is used in legacy code, you can declare an error when it is used outside the scope of the legacy code. The compile-time declaration is more effective than using the “@deprecated” Javadoc tag, which is just a reminder in the code documentation for developers to avoid the tagged element. - Components that execute in a multithreaded environment (e.g., Servlets) should not store thread-specific state in instance variables. Otherwise, data from one thread can overwrite data from another [Gradecki 2003]. - Sometimes we use a pool of instances to avoid the time-consuming instantiation of objects. Database connections, images, Java Naming and Directory Interface (JNDI) contexts, and EJB home objects are examples of objects that are usually in a pool. Compile-time declarations can enforce that client classes get instances from the pool, instead of creating instances directly. - Some projects have specific naming rules that can be enforced with compile-time declarations. For example, classes that follow the Data Access Object pattern usually have the suffix “Dao.” JUnit test cases usually have suffix or prefix “Test.” - When a concrete class implements an interface, the usual intent is that the outside world will use the contract specified by that interface to interact with objects of that class. However, the class may offer other public operations beyond what is specified by the interface—a common situation when the class implements more than one interface. Compile-time declarations can enforce that a class is only accessed through the interface(s) it implements, so that trace-ability of contracts doesn’t get lost. 5 Conclusion Over the past 20 years, software engineers became aware that software architecture is critical to success in software projects. Techniques, languages, and patterns were developed to help us create, document, and evaluate architectural designs. 
Today, architects may have good confidence in the quality of the architectural designs they produce, but there is little confidence that the code created by developers will actually follow the design. When the code deviates from the design, quality attributes such as modifiability and performance can suffer. Architecture enforcement is a major challenge. When no automated solution is available, some organizations resort to manual code inspections to verify code conformance to the architecture. However, code inspections are prone to human error and don’t scale well to large systems and distributed teams. Some commercial tools already promise continuous architecture enforcement. Another approach that solves part of the architecture conformance problem is MDA. Code is generated from UML models and will necessarily follow the design expressed in UML. However, MDA has some barriers to overcome before it becomes mainstream, such as the tendency of software engineers to have syntactic and semantic discipline at the code level and not at the architecture level. At least for Java-based systems, AOP provides a relatively simple automated solution for architecture enforcement. One can create AOP compile-time declarations that will search the entire code base and flag invalid interactions. In any situation, the first step to be able to enforce the architecture over the lifetime of the system is to have a good architecture representation. If the documentation is incomplete, unclear, or out-of-date, it is hard to apply any architecture conformance technique. More importantly, it is hard for developers to faithfully obey the dictates of the architecture. The use of AOP for enforcement of coding policies and architecture is a low-hanging fruit that has been explored for a few years and suggested in books, papers, and presentations. Even an open source library with a few examples has been created. This report presented a sample of the variety of applications of compile-time declarations. The code snippets show how compile-time declarations are simple and powerful. The use of compile-time declaration of errors and warnings is the perfect first step to AOP adoption, because they don’t alter the binaries produced during compilation. Therefore, compile-time declarations can be turned on and off at any time, because the original code remains completely independent of the AOP code. 12 Go to http://patterntesting.sourceforge.net/. References URLs are valid as of the publication date of this document. [Alur 2003] [Bachmann 2000] www.sei.cmu.edu/publications/documents/00.reports/00tr008.html. [Bodoff 2007] [Brown 1998] [Clements 2003] [Gamma 1995] [Gradecki 2003] [Humphrey 1995] [Kazman 2002] www.sei.cmu.edu/publications/documents/02.reports/02tr034.html. [Kiczales 1997] [Kleppe 2003] [Laddad 2003] [Marinescu 2002] [Sun 2001] [Xerox 2003] Using Aspect-Oriented Programming to Enforce Architecture Paulo Merson Software Engineering Institute Carnegie Mellon University Pittsburgh, PA 15213 HQ ESC/XPK 5 Eglin Street Hanscom AFB, MA 01731-2116 Using aspect-oriented programming (AOP), software developers can define customized compile-time error or warning messages that are issued when the code contains join points that match specified pointcuts. These customized messages are generated by compile-time declarations, which are an extremely simple but powerful AOP mechanism. Declarations that look for nonvalid interactions between modules can be used for architecture enforcement. 
Coding policies, best practices, design patterns, and code-naming conventions can also be enforced. Compile-time declarations operate as an additional verification in the build process, but they do not affect the compiled application and can be turned on and off at any time. That feature makes this approach an automated and nondisruptive solution for architecture enforcement and a risk-free first step towards AOP adoption.
{"Source-Url": "http://www.dtic.mil/get-tr-doc/pdf?AD=ADA479786", "len_cl100k_base": 6675, "olmocr-version": "0.1.53", "pdf-total-pages": 25, "total-fallback-pages": 0, "total-input-tokens": 45333, "total-output-tokens": 8543, "length": "2e12", "weborganizer": {"__label__adult": 0.0003719329833984375, "__label__art_design": 0.00031304359436035156, "__label__crime_law": 0.00033283233642578125, "__label__education_jobs": 0.0004405975341796875, "__label__entertainment": 3.8504600524902344e-05, "__label__fashion_beauty": 0.0001424551010131836, "__label__finance_business": 0.0001531839370727539, "__label__food_dining": 0.0003142356872558594, "__label__games": 0.0003464221954345703, "__label__hardware": 0.0004925727844238281, "__label__health": 0.0002675056457519531, "__label__history": 0.0001589059829711914, "__label__home_hobbies": 6.252527236938477e-05, "__label__industrial": 0.0002760887145996094, "__label__literature": 0.0001691579818725586, "__label__politics": 0.00024378299713134768, "__label__religion": 0.0004656314849853515, "__label__science_tech": 0.0016794204711914062, "__label__social_life": 6.377696990966797e-05, "__label__software": 0.0028743743896484375, "__label__software_dev": 0.98974609375, "__label__sports_fitness": 0.00028705596923828125, "__label__transportation": 0.0003974437713623047, "__label__travel": 0.00019884109497070312}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 33750, 0.03492]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 33750, 0.67571]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 33750, 0.83367]], "google_gemma-3-12b-it_contains_pii": [[0, 218, false], [218, 218, null], [218, 511, null], [511, 511, null], [511, 1072, null], [1072, 1072, null], [1072, 1949, null], [1949, 1949, null], [1949, 4566, null], [4566, 7349, null], [7349, 9140, null], [9140, 12442, null], [12442, 13715, null], [13715, 16758, null], [16758, 18924, null], [18924, 20225, null], [20225, 20796, null], [20796, 23233, null], [23233, 25232, null], [25232, 27358, null], [27358, 30072, null], [30072, 30072, null], [30072, 31665, null], [31665, 32677, null], [32677, 33750, null]], "google_gemma-3-12b-it_is_public_document": [[0, 218, true], [218, 218, null], [218, 511, null], [511, 511, null], [511, 1072, null], [1072, 1072, null], [1072, 1949, null], [1949, 1949, null], [1949, 4566, null], [4566, 7349, null], [7349, 9140, null], [9140, 12442, null], [12442, 13715, null], [13715, 16758, null], [16758, 18924, null], [18924, 20225, null], [20225, 20796, null], [20796, 23233, null], [23233, 25232, null], [25232, 27358, null], [27358, 30072, null], [30072, 30072, null], [30072, 31665, null], [31665, 32677, null], [32677, 33750, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 33750, null]], 
"google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 33750, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 33750, null]], "pdf_page_numbers": [[0, 218, 1], [218, 218, 2], [218, 511, 3], [511, 511, 4], [511, 1072, 5], [1072, 1072, 6], [1072, 1949, 7], [1949, 1949, 8], [1949, 4566, 9], [4566, 7349, 10], [7349, 9140, 11], [9140, 12442, 12], [12442, 13715, 13], [13715, 16758, 14], [16758, 18924, 15], [18924, 20225, 16], [20225, 20796, 17], [20796, 23233, 18], [23233, 25232, 19], [25232, 27358, 20], [27358, 30072, 21], [30072, 30072, 22], [30072, 31665, 23], [31665, 32677, 24], [32677, 33750, 25]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 33750, 0.0223]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
5d4da0b6b06a01bf83989e4de45b539e6b59c904
All in one graphical tool for the management of DIET and GridRPC middleware Eddy Caron, Frédéric Desprez, David Loureiro To cite this version: Eddy Caron, Frédéric Desprez, David Loureiro. All in one graphical tool for the management of DIET and GridRPC middleware. [Research Report] LIP RR-2008-24, Laboratoire de l’informatique du parallélisme. 2008, 2+14p. hal-02102784 HAL Id: hal-02102784 https://hal-lara.archives-ouvertes.fr/hal-02102784 Submitted on 17 Apr 2019 All in one Graphical Tool for the management of DIET a GridRPC Middleware Eddy Caron , Frédéric Desprez , David Loureiro July 1, 2008 Research Report N° 2008-24 All in one Graphical Tool for the management of DIET a GridRPC Middleware Eddy Caron , Frédéric Desprez , David Loureiro July 1, 2008 Abstract Grid Middleware are the link between large scale (and distributed) platforms and applications. Managing such a software system and the Grid environment itself can be a hard task when no dedicated (and integrated) tool exist. Some can be used through nice graphical interfaces, but they are usually dedicated to one or some limited tasks. They do not fulfill all the needs of a Grid end-user who wants to deploy Grid applications easily and rapidly. The aim of this paper is to present the case study of an all-in-one software system, designed for the management of a Grid Middleware and gathering user-friendly graphical interfaces answering to the various needs of end-users. Moreover the software system eases the use of the Grid by avoiding the scripting layer under a nice GUI enabling the user a faster and more efficient use of the Grid environment. By this means we demonstrate how the DIET Dashboard fulfills all the needs of a unified tool for Grid management. This paper gives a comparison with existing and well-known tools dedicated to some specific tasks such as Grid resources management, Grid monitoring, or Middleware management. Keywords: Grid Middleware, Grid management, Grid monitoring, Deployment, Workflow management Résumé Les intergiciels de grille sont le lien entre les ressources des plates-formes à large échelle (et distribuées) et les applications. Gérer un tel système et l’environnement de grille en lui-même est une tâche compliquée lorsqu’aucun outil dédié est mis à disposition. Des outils avec des interfaces graphiques ergonomiques ont été conçus mais ils sont le plus souvent dédiés à une ou quelques tâches précises, ce qui limite la portée de tel outil. L’ensemble des besoins d’un utilisateur d’un environnement grille ne sont pas couverts pour offrir un déploiement des applications portées sur la grille de façon simple et efficace. L’objectif de ce rapport est de présenter une étude de cas d’un logiciel tout-en-un conçu pour la gestion d’un intergiciel de grille comprenant des interfaces graphiques dédiées aux utilisateurs. De plus ce logiciel facilite l’utilisation de la grille en rendant transparente la couche de scripts sous une interface apportant à l’utilisateur un usage plus efficace et rapide de l’environnement. Nous décrivons de quelle façon le DIETDashboard remplit les conditions d’un outil unifié. Ce rapport offre également une comparaison avec des outils existants et reconnus dédiés à certaines tâches spécifiques telles que la gestion des ressources, la surveillance de la plate-forme ou la gestion de l’intergiciel. 
Mots-clés: Intergiciel de grille, Gestion de grille, Monitoring de grille, Déploiement, Gestion de workflow 1 Introduction Large problems ranging from huge numerical simulations to large scale data processing can now be solved through the Internet using Grid Middleware software systems. Several approaches exist for porting applications to Grid platforms. Examples include classical message-passing, batch processing, web portals, and GridRPC systems. This last approach implements a Grid version of the classical Remote Procedure Call (RPC) model. A more sophisticated extension of this includes high level scheduling mechanisms and data management. Thus clients spread over the Internet submit computation requests to a scheduler that locates one or more servers available on the Grid using some performance measure. The aim of the DIET 1 (Distributed Interactive Engineering Toolbox) project is to develop a set of tools to build, deploy, and execute computational server daemons. It focuses on the development of scalable Middleware with initial efforts concentrated on distributing the scheduling problem across multiple agents. DIET consists of a set of elements that can be used together to build applications using the GridRPC paradigm. This Middleware is able to find an appropriate server according to the information given in the client’s request (e.g. problem to be solved, size of the data involved), the performance of the target platform (e.g. server load, available memory, communication performance) and the local availability of data stored during previous computations. The scheduler is distributed using several collaborating hierarchies connected either statically or dynamically (in a peer-to-peer fashion). Data management is provided to allow persistent data to stay within the system for future re-use. This feature avoids unnecessary communications when dependencies exist between different requests. In a Grid environment, we need several complex tools for the management of resources, Grid Middlewares, and client/server applications. Most Grid software systems use command-line interfaces without any Graphical User Interface (GUI). For the creation of a tool dedicated to the management of Grid Middleware and Grid environments, different functions are mandatory. We can consider three main graphical interfaces for such framework: one for resource management, one for Grid monitoring, and one for the management of the Grid Middleware. DIET Dashboard 2 answers to the need of an unified set of tools providing the user with a complete, modular, portable, and powerful way to manage Grid resources of the applications that run on it. The goal of this paper is to show the various aspects to be taken into account for the design of a graphical tool for Grid Middleware management and how it can ease the interaction with a Grid by avoiding the scripting layer. Thus we designed a tool to make the Grid as user-friendly as possible, in order to simplify its use. Many GUI tools dedicated to Grid management exist but they are all targeting one or two tasks. The aim of the DIET Dashboard is to provide an all-in-one and flexible software that gathers these tools in an efficient manner. We give a comparison with existing tools dedicated to some specific tasks such as Grid resources management, Grid monitoring, or Middleware management. By this way we demonstrate how the DIET Dashboard fulfilled all the needs of an unified tool making it easy to manage a Grid Middleware on Grid platforms. 
The rest of the paper is organized as follows. In Section 2, we briefly review existing works on graphical tools for the Grid. Sections 3 and 4 describes the architectures of DIET and DIET Dashboard. Section 4.1 presents the features related to the Grid resources management of DIET Dashboard. Section 4.2 presents the features of DIET Dashboard related to Grid monitoring. Section 4.3 describes how it can manage the DIET Grid Middleware. To illustrate the use the DIET Dashboard, we present an experiment in Section 5. Finally, Section 6 concludes the paper. 2 Related Work In this paper we focus on graphical tools designed for Grid environments. Here we will give a description of the three main families of tools dedicated to Grid Middleware software systems and --- 1 http://graal.ens-lyon.fr/DIET 2 http://graal.ens-lyon.fr/DIET/dietdashboard.html Grid environments. The first family concerns graphical tools for cluster resource management. They provide a Graphical User Interface (GUI) to check all information from batch schedulers. For example, QMON [16], the GUI designed for N1 Grid Engine from SUN, can examine the properties of any queue on the Grid (running, disabled, suspended, etc.). A second graphical menu provides a job submission interface with all the options available. A third interface monitors the jobs status (running, suspended, deleted, pending, etc.). To illustrate the second family, we can consider Ganglia [12], the graphical tool designed for Grid monitoring. Based on a protocol using announces, this tool monitors a cluster or a set of clusters using XML, XDR and RRDtool to represent, retrieve and display the data. For each node Ganglia provides instantaneous information and history about the load, memory, I/O, etc. through a web interface. The third family concerns tools designed for Grid Middleware software systems. Many tools exist for the visual specification and execution of scientific workflows as Kepler [1], Taverna [14], SGSDesigner [10], ScyFlow [13], or GridNexus [4]. For example, GridNexus is a graphical system for the creation and the execution of scientific workflows in a Grid environment. The user can assemble complex processes involving data retrieval, analysis and visualization by building a directed acyclic graph in a visual environment. Future works talk about the use of GridNexus to help creating and deploying new Grid services in addition to scripting existing services. This project plans to develop a generic module to provide interactive feedback while executing a workflow. Graphical tools mentioned here are all designed with a specific aim. DIET Dashboard combines workflow management, resources reservation, resources mapping, automatic configuration, visualization, and deployment tools in one integrated graphical application. 3 DIET Architecture The DIET component architecture is structured hierarchically for an improved scalability. Such an architecture is flexible and can be adapted to diverse environments including arbitrary heterogeneous computing platforms. The DIET toolkit [7] is implemented in CORBA and thus benefits from the many standardized, stable services provided by freely-available and high performance CORBA implementations. CORBA systems provide a remote method invocation facility with a high level of transparency. This transparency should not affect the performance substantially, as the communication layers in most CORBA implementations are highly optimized [8]. 
These factors motivate their decision to use CORBA as the communication and remote invocation fabric in DIET. The DIET framework comprises several components. A Client is an application that uses the DIET infrastructure to solve problems using an RPC approach. Clients access DIET through various interfaces: web portals or programs using C, C++, or Java APIs. A SeD, or server daemon, acts as the service provider, exporting a functionality through a standardized computational service interface. A single SeD can offer any number of computational services (depending on the capacity of the machine). A SeD can also serve as the interface and execution mechanism for either a stand-alone interactive machine or a parallel supercomputer (or cluster) using an interface with a batch scheduler. The third component of the DIET architecture, agents, facilitate the service location and invocation interactions between clients and SeDs. Collectively, a hierarchy of agents provides higher-level services such as scheduling and data management. These services are made scalable by distributing them across a hierarchy of agents composed of a single Master Agent (MA) and several Local Agents (LA). Figure 1 shows an example of a DIET hierarchy. 4 DIET Dashboard When the goal is to monitor a Grid, or deploy a Grid Middleware on it, several tasks are involved. • Managing the resources of a Grid: allocating resources, deploying nodes with several operating systems, etc. • Monitoring the Grid: getting the status of the clusters (number of available nodes in each state, number and main properties of each job, Gantt chart of the jobs history), the status of the jobs (number, status, owner, walltime, scheduled start, Ganglia information of the nodes) running on the platform, etc. • Managing the Grid Middleware software system within a Grid environment: designing hierarchies (manually or automatically by matching resources on patterns), deploying them directly or through workflows of applications, etc. The DIET Dashboard provides tools trying to answer these needs with an environment dedicated to the DIET GridRPC Middleware. It consists of a set of graphical tools that can be used separately or together. These tools can be divided in three categories: 1. Workflow tools: including workflow designer and workflow log service. 2. DIET tools: including tools to design and deploy DIET applications. 3. Grid tools (aka GRUDU 3): these tools are used to manage, monitor and access user Grid resources. 4.1 Grid Resources Management When deploying an application over a Grid a user should be able to allocate resources for computation tasks by specifying the number of nodes needed, the duration of the jobs (also called walltime), the date when each job will start, their priority, etc. But they should have the possibility to choose between the default environment of the node and a user-defined one if the parallel implementation or even the default operating system provided (for example) does not fit the application needs. This management should be easy to realize in order to improve the Grid usage. The following sections present how the Grid resources management was designed in the DIET Dashboard and an existing software dedicated to Sun Grid Engine called QMON. 3http://graal.ens-lyon.fr/GRUDU 4.1.1 DIET Dashboard functionalities The Grid resources management is realized inside GRUDU, the Grid resources module of DIET Dashboard. GRUDU can be easily configured to use different batch schedulers, or different Grids. 
GRUDU can be used inside DIET Dashboard, but also in a standalone mode for users that just want to monitor, manage, or realize reservations on the Grid. Grid’5000 project aims at building a highly reconfigurable, controlable and monitorable experimental Grid platform gathering 9 sites geographically distributed in France featuring a total of 5000 processors. The main purpose of this platform is to serve as an experimental testbed for research in Grid Computing. To allocate resources on Grid’5000, the resource tool offers a user-friendly interface allowing the selection of the number of nodes needed at each site, and the definition of the date, walltime of reservation and the queue where the job will be started. The user can select a job type (for example, deploy if you plan to change the operating system) for the reservation itself and launch a script on the reserved resources (see Figure 2). Concerning the clusters, the OAR batch scheduler uses properties for the reservations (for example, to select nodes with Myrinet interfaces) and the allocation tool provides an interface for the definition of these properties. To manage resources, the user can deploy images on nodes with the operating system needed for the computations. The resources tool also provides a GUI for the deployment of images over Grid’5000 clusters through Kadeploy. (The deployment through Kadeploy allows the user to have its own operating system that he/she can tune and configure as he/she wants.) The nodes and the images (if the user plans to deploy on different clusters, one image per cluster) needed for the experiment (see Figure 3). 4.1.2 Comparison with QMON QMON is the GUI to the N1 Sun Grid Engine (SGE). It provides an interface for the job submission and the resources management of a Grid and the SGE batch scheduler. --- 4 https://www.grid5000.fr 5 http://oar.imag.fr/ 6 http://kadeploy.imag.fr/ QMON allows the user to submit either simple or parallel jobs on queues\(^7\) that are run in a passive and non-interactive mode. The users can then monitor the jobs and the Grid status. But QMON does not provide an access to the computation nodes for interactive work, and a specific system can not be deployed to get a user-defined system for the duration of the reservation. Moreover, to use different queues, the user must use a parallel job with a defined parallel environment such as MPI or PVM, whereas different nodes can be used on different clusters without the mandatory use of some parallel environment with OAR and the DIET Dashboard. ### 4.2 Grid Monitoring Grid monitoring is important for a default user before he reserved resources, but also after he has reserved resources. Before submitting any job to a Grid, the user should be aware of the available nodes considering their states (free/already used/dead). Whenever there is not enough resources, the user should be able to know when these will available for computation. After having successfully submitted some jobs, the user should have some interface to get the information about his jobs but also the other jobs running on the Grid. Even if sometimes more information could be interesting for expert users, too lower level information could be unusable for the default user who only wants to perform computations on some resources for a given period of time. The following sections will present how the Grid monitoring is implemented within the DIET Dashboard and an existing software dealing with the monitoring called Ganglia. 
#### 4.2.1 Functionalities of DIET Dashboard Thanks to the resource tool we can monitor the state of the platform with charts presenting the load of the different clusters, the state of all clusters and all the users’ jobs on the Grid (see Figure 4). We are also able to monitor the status of a particular cluster with charts summarizing the nodes states and a table composed of the jobs (running or waiting) on that cluster. A Gantt chart is also available helping the user to define when he can reserve some resources. --- \(^7\)A QMON queue corresponds to a cluster in the DIET Dashboard for the batch scheduler OAR. The resource tool also provides the user with all necessary information about every job that are present on a cluster, with, among others, the job Name, the job State, the job hosts, etc. Finally a plugin generates instantaneous data and history concerning the main metrics (the CPU load, the disk/memory/swap used, the in/out bytes, etc.) of the user reserved nodes with information taken from the Ganglia data. 4.2.2 Comparison with Ganglia Ganglia is a scalable distributed monitoring system for high-performance computing systems such as clusters and Grids. Ganglia provides resources usage metrics (memory, CPU, jobs...) for individual sites or whole Grids. These are low level and can be used to monitor the hardware of sites of whole Grids. But Ganglia does not provide information of higher level such as the node states, the available resources of clusters or the information about the jobs existing in the clusters. From an user point of view that needs to reserve resources and realize some computations on that nodes, the information about the jobs and the clusters in DIET Dashboard can be sufficient, whereas the ones from Ganglia can be useless because of a too lower level for a standard use. These informations are to be considered as a complement to the monitoring part of the DIET Dashboard (and it is moreover the purpose of a plugin as described in Section 4.2.1). 4.3 Grid Middleware Management When using a tool managing Grids and Grid Middleware such as DIET, a user expects features such as the design a hierarchy of Middleware elements, the remote deployment of locally created hierarchies, or the discovery of online existing and usable services for further use in workflows. Others functionalities can also be offered like log service or real-time execution for running workflows, or resources dependent generation of hierarchies according to predefined existing models. The following sections present how the Grid Middleware management is implemented in the DIET Dashboard as well as an existing software with monitoring features called GridNexus. 4.3.1 Workflow tools Workflow designer A large number of scientific applications are represented by graphs of tasks which are connected based on their control and data dependencies. The workflow paradigm on Grids is well adapted for representing such applications and the development of several workflow engines [2, 11, 15] illustrates significant and growing interest in workflow management within the Grid community. The success of this paradigm in complex scientific applications can be explained by the ability to describe such applications in high levels of abstraction and in a way that makes it easy to understand, change, and execute them. Several techniques have been established in the Grid community to define workflows. The most commonly used model is the graph and especially the Directed Acyclic Graph (DAG). 
Since there is no standard language to describe scientific workflows, the description language is environment dependent and usually XML based, though some environments use scripts. In order to support workflow applications in the DIET environment, we have developed and integrated a workflow engine. Our approach has a simple and a high level API, the ability to use different advanced scheduling algorithms, and it should allow the management of multi-workflows sent concurrently to the DIET platform. In this context, a workflow designer was developed to help users to design workflow applications but also to execute them. Figure 5(a) shows an overview of this tool, where they can have a description of the available services (discovered with online mode) and design a workflow by a drag and drop mechanism. The user does not need to know details about the requested services neither to define them. Once the workflow designed, one can either save it to an XML format supported by the DIET workflow engine or execute it directly. In the second case, the workflow input must be defined. The XML representation of designed workflows describes required tasks and data dependencies. A task is a DIET service and a data dependency is a link between two parameters. The workflow designer checks and guarantees data type compatibility between source and target ports of each created link. The workflow description level used here is known as “abstract description”. This level of description does not include any runtime information but is sufficient for the workflow execution. DIET hierarchy and workflow engine manage automatically and transparently the user tasks scheduling and execution. Figure 5: Workflow tools. Workflow log service To improve workflow monitoring, we propose a tool dedicated to workflow monitoring that displays the real-time execution processes of different workflows. This graphical tool has two major roles: first it is a central event service that receives and handles the events related to tasks execution progression. Secondly it provides a graphical representation of workflow state. This tool, shown in Figure 5(b), displays the different workflows after they start their execution. Each node of the workflow can be in one of the following states: “waiting”, “running”, or “done”. 4.3.2 DIET tools A DIET platform can be represented by a hierarchy of agents and servers. Designing and deploying such a hierarchy of distributed and heterogeneous elements can be a hard task for the end user. In our previous works [6], we have defined a XML format to describe DIET platforms. This format describes a DIET hierarchy but also the information about used resources and environments. ![DIET designer](image) Figure 6: DIET designer. To deploy DIET hierarchies on a Grid environment the DIET Dashboard provides two methods: **In two steps:** First the user creates by hand his DIET hierarchy with the DIET designer. Instead of manipulating complex XML files, the user simply adds Local Agents or Server Daemons to the Master Agent or already added Local Agents. Concerning the Server Daemons you can define the binary to launch, the input parameters etc. This level describes only the application level, and the obtained application description can be extended with runtime information. The main frame of the DIET designer is presented in Figure 6. To extend this application level hierarchy the user should use the DIET mapping tool (see Figure 7). 
This tool allows the user to map the allocated Grid’5000 resources to a DIET application. For each Grid’5000 site, the nodes (or hosts) are used in a homogeneous manner but the user can select a particular host if needed. **In one step:** The XMLGoDIETGenerator builds a GoDIET XML file that can be used with the DIET deployment tool from a compact description and a reservation directory. For large experiments, writing the GoDIET file by hand is time consuming and if the user should redo this experiment with a different set of machines, the GoDIET file will be generated according to the available resources. The way hierarchies are described (through a framework from which their are created according to the available resources) have also to be the most flexible to let the user write all possible hierarchies. One should notice that the XMLGoDIETGenerator is “resources driven” because the final hierarchy will directly depend on the available resources provided, whereas the ones created with the DIET designer and mapping tools will not change if there is more or less available resources. When the DIET hierarchies are generated the user can deploy these hierarchies on the Grid thanks to the DIET deploy tool (see Figure 8). This tool is a graphical interface to GoDIET. It provides the basic GoDIET operations: open, launch, stop, and also a monitoring mechanism to check if DIET application elements are still alive (the states are the same as for the workflow log service). As the workflow log service, the DIET deployment tool can be used in a local or a remote mode. 4.3.3 Comparison with GridNexus GridNexus provides a GUI for the workflow construction and execution. This interface is a “Drag and Drop” environment that can be used to build workflows from generic Grid and web services. The output is XML-based and easy to modify or use from specialized tools around GridNexus. The user designs the workflow by linking elements as for the workflow designer of DIET Dashboard. After having designed the workflow it can be run and the user can see the results of the workflow or get the corresponding script of the workflow. The workflows can be abstracted to simplify the workflow design. These “composites” can then be used as elements of other workflows. GridNexus comes with a library of pre-defined elements that can be used from the GUI, but we can also generate workflows from URL of WSDL that define services. However GridNexus does not show the evolution of the workflow execution, and it does not provide some log functions in order to prevent from services failures or anything else. Moreover GridNexus does not discover online services but the user should provide him the services which could be complicated for the end-user that might not know where those services are located. Finally GridNexus only manages workflows of tasks, and does not allow the user to design and execute her/his own hierarchies of elements, in order to later execute clients (the ones that are not workflows of executions) on computations nodes. 5 Experiments An experiment has been realized to test the capabilities of DIET and DIET Dashboard for a large number of machines. This experiment has been realized on Grid’5000, and the chosen application was cosmological computations. For this experiment, the entire Grid’5000 platform was reserved which gave us 12 clusters used on 7 sites for a duration of 48 hours. Finally 979 machines were used with an user-defined environment containing all the needed software for the experiment. 
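To give a rough idea of what the hierarchy descriptions discussed in Section 4.3.2 convey, here is an illustrative sketch of a small platform (one Master Agent, one Local Agent, two SeDs mapped onto reserved nodes). The element and attribute names are invented for illustration only; the actual XML schema used by GoDIET and the XMLGoDIETGenerator differs in its details.

```xml
<!-- Illustrative sketch only: names are not the real GoDIET schema. -->
<diet_platform>
  <master_agent name="MA1" host="node-1">
    <local_agent name="LA1" host="node-2">
      <sed name="SeD1" host="node-3" binary="my_service"/>
      <sed name="SeD2" host="node-4" binary="my_service"/>
    </local_agent>
  </master_agent>
</diet_platform>
```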
Figure 9 gives a bar chart representing the occupation of the cluster with the jobs for the experiment, taken from the resources tool of the DIET Dashboard. The aim of the experiment was also to start the largest machines reservation over the Grid, for the deployment of the largest DIET hierarchy in order to execute the maximum number of cosmological application jobs. The MPI code executed by the DIET servers called RAMSES\textsuperscript{8} \cite{ramses} was developed in Saclay (DAPNIA/CEA) to study large scale structures and galaxies formation. This code is a Grid-based hydro solver with adaptive mesh refinement. Thanks to GRUDU, reservations were done at the Grid level and not on each cluster in 20 seconds. To get an user-defined environment on each machine, GRUDU was able to realize the deployment of every machines of the 12 clusters involved at the same time in roughly 25 minutes. Finally the DIET hierarchy was created through the use of the XMLGoDIETGenerator in 5 seconds and deployed through the DIET Deploy tool and GoDIET in 23 seconds. If these tasks would have been done without GRUDU: - the reservation would have been realized with oargridsub (a non-graphical utility dedicated to OAR) by hand by reserving every nodes of each cluster at a time. Here is a dummy example of oargridsub command: ``` oargridsub cluster1:rdef="nodes=2",cluster2:rdef="nodes=1",cluster3:rdef="nodes=1", cluster4:rdef="nodes=2",cluster5:rdef="nodes=1",cluster6:rdef="nodes=1", cluster7:rdef="nodes=2",cluster8:rdef="nodes=1",cluster9:rdef="nodes=1", ``` \textsuperscript{8}among the uncrashed nodes. \textsuperscript{9}\url{http://irfu.cea.fr/Projets/COAST/ramses.htm} All-in-one Graphical Tool for the management of DIET a GridRPC Middleware Figure 9: Chart representing the occupation of the different clusters and the node repartition between the different job states (Free/Job/Dead/Absent). ``` cluster10: rdef="nodes=2", cluster11: rdef="nodes=1", cluster12: rdef="nodes=1", -s '2007-09-07 16:00:00' -w '0:10:00' -p ~/runhpl/runhpl ``` - The use of an user-defined environment would have been impossible without KaDeploy, it would have taken the same amount of time per cluster and not for all of them, and the configuration of the deployment would have been more difficult because of several conditional choices. - The DIET hierarchy would have been written by hand and not easily readable because of the resources-dependency of the hierarchy description file avoided by the pattern-matching realized by the XMLGoDIETGenerator. The DIET platform deployed was composed of one Master Agent, 12 Local Agents, and 29 Server Daemons. One job can be executed on each SeD at a given time. 816 nodes were used for the application jobs. As far as the different clusters do not provide the same compilation environment, an image of an environment specially created has been deployed on every reserved nodes. During the experiments, the main difficulties came from the hardware limitations (typically the disk space which was not large enough to backup data, or some no well defined permissions of /tmp directories on some clusters), and not from DIET or the DIET Dashboard that allowed a good dispatching of the Middleware requests and the fast and efficient management of these hardware problems. 6 Conclusion With the development of Grid technologies and the availability of large scale platforms, it becomes mandatory to manage Grid applications efficiently and easily. 
In this paper, we have presented the DIET Dashboard environment, which is a complete, modular, portable, and powerful set of tools dedicated to a Grid Middleware. With this tool, a non-expert user can manage Grid resources, monitor the Grid itself, and manage the Grid Middleware by designing Grid applications or using workflows and then deploying these Grid applications over the Grid platform. The DIET Dashboard offers a large number of modules, created to answer the different needs of tools appearing in a Grid context. The software architecture of the DIET Dashboard makes it extensible (modules can easily be added to the core of the application). The performance of the DIET Dashboard and GRUDU (the tool dedicated to Grid management) has been tested through the experiment realized on Grid’5000. This experiment showed that the resources tool is able to monitor the entire Grid and reserve resources on a large number of sites and clusters. GRUDU is one answer to the need for an efficient tool for the management of both the hardware and software parts of the Grid. GRUDU abstracts the scripting part of the management of a Grid, in order to provide the user with an easy-to-use GUI where all the necessary operations are available. Users no longer need to write obscure and complex command lines to manage their resources, which is often one of the main barriers to the use of Grid environments. All these elements show that the DIET Dashboard is a stable and efficient tool that unifies different tools into one single modular graphical application.

7 Acknowledgments

DIET was developed with financial support from the French Ministry of Research (RNTL GASP and ACI ASP) and the ANR (Agence Nationale de la Recherche) through the LEGO project (ANR-05-CIGC-11) and the Gwendia project (ANR-06-MDCA-009). All experiments were done over the Grid’5000 platform. We would like to thank the developers of the DIET Middleware and in particular Abdelkader Amar for his work on the DIET Dashboard.

References
{"Source-Url": "https://hal-lara.archives-ouvertes.fr/hal-02102784v1/document", "len_cl100k_base": 6923, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 35724, "total-output-tokens": 9115, "length": "2e12", "weborganizer": {"__label__adult": 0.0002968311309814453, "__label__art_design": 0.0008769035339355469, "__label__crime_law": 0.0003192424774169922, "__label__education_jobs": 0.001941680908203125, "__label__entertainment": 0.00020015239715576172, "__label__fashion_beauty": 0.000194549560546875, "__label__finance_business": 0.0005273818969726562, "__label__food_dining": 0.00030684471130371094, "__label__games": 0.0007371902465820312, "__label__hardware": 0.0020122528076171875, "__label__health": 0.00054168701171875, "__label__history": 0.0006465911865234375, "__label__home_hobbies": 0.00014340877532958984, "__label__industrial": 0.0008044242858886719, "__label__literature": 0.0003962516784667969, "__label__politics": 0.0003619194030761719, "__label__religion": 0.0006017684936523438, "__label__science_tech": 0.445068359375, "__label__social_life": 0.0001709461212158203, "__label__software": 0.06719970703125, "__label__software_dev": 0.4755859375, "__label__sports_fitness": 0.00025081634521484375, "__label__transportation": 0.0006008148193359375, "__label__travel": 0.0002608299255371094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37753, 0.04216]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37753, 0.43854]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37753, 0.8709]], "google_gemma-3-12b-it_contains_pii": [[0, 475, false], [475, 639, null], [639, 3481, null], [3481, 7760, null], [7760, 11745, null], [11745, 13704, null], [13704, 15839, null], [15839, 18072, null], [18072, 19781, null], [19781, 22702, null], [22702, 25080, null], [25080, 27521, null], [27521, 29692, null], [29692, 31901, null], [31901, 33537, null], [33537, 36916, null], [36916, 37753, null]], "google_gemma-3-12b-it_is_public_document": [[0, 475, true], [475, 639, null], [639, 3481, null], [3481, 7760, null], [7760, 11745, null], [11745, 13704, null], [13704, 15839, null], [15839, 18072, null], [18072, 19781, null], [19781, 22702, null], [22702, 25080, null], [25080, 27521, null], [27521, 29692, null], [29692, 31901, null], [31901, 33537, null], [33537, 36916, null], [36916, 37753, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37753, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37753, null]], "pdf_page_numbers": [[0, 475, 1], [475, 639, 2], [639, 3481, 3], [3481, 7760, 4], [7760, 11745, 5], [11745, 13704, 6], [13704, 15839, 7], [15839, 18072, 8], [18072, 
19781, 9], [19781, 22702, 10], [22702, 25080, 11], [25080, 27521, 12], [27521, 29692, 13], [29692, 31901, 14], [31901, 33537, 15], [33537, 36916, 16], [36916, 37753, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37753, 0.0]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
325dea889d64a0efa589e95bb081abe24fdc0394
10 tips for generating reusable VHDL

S Meiyappan - August 19, 1999

"Design reuse" is the process of migrating high-quality intellectual property (IP) from one ASIC design to another. With the tremendous advances in semiconductor technology, it is increasingly difficult to bridge the productivity gap between what the technology offers and what design productivity allows. Designing full-custom ASICs to occupy as much silicon area as possible is becoming increasingly challenging. To achieve the highest level of silicon efficiency, designing semicustom ASICs with highly reusable design entities has become today's challenge. The use of predesigned and preverified design blocks to achieve a high level of design reuse is the most promising technique for bridging the gap between available gate count and designer productivity. Designing a complex chip requires an HDL-based design. Effective HDL generation with design reuse in mind will help you create IP cores that are usable in multiple chip designs.

The design-reuse challenge

Designing for reuse poses new and innovative challenges to a designer. Before being reusable, a design must be usable, which means designing it using good design practices. A reusable design must be designed with the mindset of solving a general problem; be well-coded, commented, and documented; be verified to a high level of confidence; and be independent of technology, design tools, and applications. Because of mounting time-to-market pressures, designers often bypass some or all of these guidelines, rendering a design virtually nonreusable. However, following these guidelines speeds designing, verifying, and debugging a project by reducing iterations throughout the coding and verification loops. The efficient use and reuse of designs plays a vital role in the creation of large ASICs with aggressive design schedules. Although chip designers have used HDLs for some time, most designs today do not use the built-in "design-reuse" features of the languages. In other words, designers do not thoroughly understand the purpose of the HDLs and misuse or underuse their features. On average, only 20% of the designs in the industry are reusable. With an increasing need for design reuse, the emphasis on coding techniques for design reuse is on the rise. This article covers developing reusable designs using the native-language features of VHDL and design-reuse techniques pertaining to synthesizable VHDL. Unless stated otherwise, the VHDL discussed complies with the VHDL-87 standard.

VHDL features promoting reusability

Chip designers have used VHDL for more than a decade. One of the primary intents of developing designs in VHDL is reusability, although designers, until recently, have not effectively employed this capability. You can exploit the feature-rich VHDL for reuse techniques. VHDL features include generics, packages of constants, generate statements, unconstrained arrays, VHDL attributes, block statements for inline design partitioning, record data types for data bundling, configuration specifications, the ability to tie ports off to known constants, the ability to leave unused output ports open and unconnected, array aggregates, functions, and procedures.

**Tip 1: Generics**

You use generics to write parameterized models of varying structure and behavior (Reference 1). Listing 1 provides a simple example of a synchronous counter with modifiable structure and behavior. You accomplish this modification through the use of VHDL generics.
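Listing 1 itself is not reproduced in this copy of the article. As a rough stand-in, the sketch below shows what such a generic counter could look like; it is built only from the generics the text names (BIT_WIDTH, COUNT_ENABLE, DOWN_COUNT, and OutDelay), and the port names, reset style, and line numbering are assumptions, not the article's actual code.

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter is
  generic (
    BIT_WIDTH    : integer := 8;      -- width of the count output
    COUNT_ENABLE : boolean := true;   -- include the enable logic?
    DOWN_COUNT   : boolean := false;  -- count down instead of up?
    OutDelay     : time    := 1 ns    -- simulation-only output delay
  );
  port (
    clk   : in  std_logic;
    rst_n : in  std_logic;
    en    : in  std_logic;
    count : out std_logic_vector(BIT_WIDTH - 1 downto 0)
  );
end counter;

architecture rtl of counter is
  signal cnt : unsigned(BIT_WIDTH - 1 downto 0);
begin
  process (clk, rst_n)
  begin
    if rst_n = '0' then
      cnt <= (others => '0');
    elsif clk'event and clk = '1' then
      -- when COUNT_ENABLE is false, the enable test is statically true
      if (not COUNT_ENABLE) or (en = '1') then
        if DOWN_COUNT then
          cnt <= cnt - 1;
        else
          cnt <= cnt + 1;
        end if;
      end if;
    end if;
  end process;

  count <= std_logic_vector(cnt) after OutDelay;
end rtl;
```

When COUNT_ENABLE is false, the enable test disappears at elaboration and synthesis can drop the enable logic, which is the effect the article attributes to Listing 1.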
This example illustrates the use of generics for modifying structure and behavior using the language's features for simulation and synthesis. You can enable and disable selective features by turning generics on and off. For example, if you set the COUNT_ENABLE generic to FALSE in line 8, then none of the logic described in lines 32 to 38 is elaborated or synthesized, but the parent design can still have a count enable. Using different values for OutDelay and DOWN_COUNT changes the structure of the design. Creating designs with generics enables design reuse in various circumstances where you need different structure or behavior. For example, a design may require two counters: one that counts to 1024 and another that counts to eight. Designing two separate counters, one that is 10 bits wide and one that is 3 bits wide, has the drawback of unnecessary investment in design, verification, and synthesis time. If you use the generic approach to design a counter with reuse in mind, you save a great deal of design, synthesis, and verification time. The use of generics for parameterizing structure and behavior is essential for design-reuse applications. The following examples illustrate the instantiation of the counter in Listing 1 in an application that requires a 10-bit up-counter and a 3-bit down-counter.

The example in Listing 2 illustrates the following points:

- Lines 3 to 14 instantiate the counter as a 10-bit up-counter with the count-enable logic turned on.
- The TenBit counter instantiation uses named association for its generics and ports.
- Unmapped generic values in the instantiation assume default values.
- Lines 18 to 30 instantiate the same counter as a 3-bit down-counter with the count-enable logic turned off.
- The ThreeBit counter instantiation uses positional association for its generics and ports. In general, it is not advisable to use positional association, because changing a parameter or port in the reusable design requires the same modification in all instances of that design.
- The use of generics can save a great deal of resources and time when you need multiple instances of the same design.

The use of generics to parameterize designs helps not only to create reusable design blocks, but also to remove unnecessary logic or to modify useful logic during synthesis. Some synthesis tools help create macros and templates when you parameterize designs through generics. You can use the feature thus created as a library element in subsequent designs for simulation or synthesis. Parameterizing bus and register widths through generics is a simple example of the use of generics. Consider the example of the counter in Listing 1 with BIT_WIDTH set to 2, COUNT_ENABLE set to true, DOWN_COUNT set to true, and a nonzero OutDelay. When synthesis elaborates this design, the synthesis tool ignores the generic for OutDelay because the tool cannot handle time-delay elements in mapping logic. The synthesis tool creates a 2-bit down-counter with the count_enable logic, as the following examples illustrate. Consider another case of the same counter with the following generics:

```
BIT_WIDTH => 8
COUNT_ENABLE => false
DOWN_COUNT => false
```

This code creates an 8-bit up-counter without the count-enable logic. If gate count is an important parameter, you can efficiently optimize away unused logic using this method. You can modify the structure (changing the BIT_WIDTH) or behavior (up- or down-counter, count_enable disabled or enabled) during design, synthesis, and simulation using this elegant approach to parameterization.
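Listing 2 is likewise missing from this copy. Assuming the counter entity sketched earlier, an instantiation with the 8-bit, no-enable generic values just discussed might use named association as follows; the instance and signal names are illustrative, and VHDL-93-style direct entity instantiation is used for brevity.

```vhdl
-- Hypothetical instantiation of the counter sketched above, with named association.
u_byte_counter : entity work.counter
  generic map (
    BIT_WIDTH    => 8,
    COUNT_ENABLE => false,
    DOWN_COUNT   => false
    -- OutDelay is left unmapped and therefore takes its default value
  )
  port map (
    clk   => clk,
    rst_n => rst_n,
    en    => '1',        -- tied off: the enable logic is absent or optimized away
    count => byte_count
  );
```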
Generics are excellent for specifying widths of counters, buses, shift registers, and other designs, but as Listing 2 shows, you can also use generics to turn various features on and off. This technique lets you use only the features that apply to your current project. You can use generics to specify such features as FIFO depths; bus interface, such as PCI or ARM System Bus; architecture, such as up/down counter, flip-flop-based register versus latch-based register, and ripple-carry adder versus carry-look-ahead adder; register address; power-on-reset value for a register; supported and reserved bits in a register; clock-divide ratio for a clock-divider circuit; and number of buffers in a clock tree. If you make the design somewhat generic, others can more easily reuse it.

One drawback of the generic approach occurs when you use generics in a hierarchy. To apply the generics to the lowest level of the hierarchy, the generics must pass down through the hierarchy. This passing down may involve generics having to go through blocks that do not use the value of the generics. Another drawback of using generics is that, as the list of generics grows, it becomes more cumbersome to carry them around at each point in the hierarchy. A third drawback is that some synthesis tools have limited support for generics. For example, a synthesis tool may require all generics to be of type integer. An efficient way to avoid these problems is to use a package of constants.

**Tip 2: Constants**

A VHDL package is a simple way of grouping a collection of related declarations that serve a common purpose. You can make the package visible to the appropriate design blocks by using library and use clauses. Keeping the parameters in one package means that adding or changing a parameter requires you to modify only one package file. Also, some synthesis tools do not allow the use of Boolean, string, enumerated, or array types for generics. In such cases, a package lets you declare such parameters as constants instead: most synthesis tools accept most data types inside packages, and packages can use TYPE statements for enumerated data types. A package of constants also lets you use the same package for design and simulation in "design-aware" testbenches.

As an example of the use of a package of constants, consider changing the counter in Listing 2 to use such a package. Also, assume that the package resides in the "pkgs" VHDL library (Listing 3). This counter example shows that using a package of constants is similar to using generics for parameterization. In addition, using a package of constants allows any design entity to reference the parameters in the package without any overhead. Also, to change the structure of the design, you have to change only the parameter value in the package, and the change takes effect in all the units referencing the parameter. A package of constants can also use subtypes and enumerated data types to reference the parameters for reusability and readability, and a central package can serve as a package of parameters to parameterize an entire design. Further, using a package makes it relatively simple to use arrays and other composite data types for parameterization. You can work on a package separately as a design unit, create the package independently of the design, and reuse the package in different parts of a model. Some nonsynthesizable constructs in generic definitions, such as enumerated data types, become synthesizable when you use them in a package.
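Listing 3 is not included in this copy either. A minimal sketch of such a package of constants, assumed to be compiled into the "pkgs" library named in the text (the package name and the parameter values are illustrative), could look like this:

```vhdl
-- Sketch of a package of constants; compiled into the "pkgs" library.
package counter_params is
  constant BIT_WIDTH    : integer := 10;
  constant COUNT_ENABLE : boolean := true;
  constant DOWN_COUNT   : boolean := false;
end counter_params;

-- Any design unit can then pick the parameters up through library/use clauses
-- instead of carrying generics through the hierarchy:
library ieee, pkgs;
use ieee.std_logic_1164.all;
use pkgs.counter_params.all;

entity counter is
  port (
    clk, rst_n, en : in  std_logic;
    count          : out std_logic_vector(BIT_WIDTH - 1 downto 0)
  );
end counter;
```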
The package may also contain other constants and information that are not used for parameterization but that the design still uses. The package serves as a common placeholder for this type of shared information. Furthermore, a package of parameters provides better code structure, provides efficient organization, and is self-documenting. Figure 1 shows the parameterized counter for different values of generics and constants. The counter was synthesized using Synopsys' (www.synopsys.com) Design Compiler with a 0.2-µm standard-cell library, with the BIT_WIDTH parameter set to 2 in all synthesis tests. In the counter of Figure 1, COUNT_ENABLE is false (the en enable signal is unconnected), BIT_WIDTH is 2, and DOWN_COUNT is false (a conventional up-counter). In the counter of Figure 2, an up-counter with count enable, COUNT_ENABLE is true (the en enable signal is connected), BIT_WIDTH is 2, and DOWN_COUNT is false. In the counter of Figure 3, a down-counter with no enable, COUNT_ENABLE is false (the en enable signal is unconnected), BIT_WIDTH is 2, and DOWN_COUNT is true. These three examples show how you can modify counter structure and behavior by using different values of generics and constants while eliminating unnecessary gates.

A deferred constant is one that you declare but do not initialize in the package declaration. Instead, you give the deferred constant its value in the package body; in other words, you "defer" the binding of the constant. A deferred constant must be bound before you reference it, and because the value lives in the package body, changing it does not require recompilation or resynthesis of the design units that merely reference the package (Listing 4). Using a package of constants has the same effect as using generics to modify structure or behavior during synthesis. The package of constants also allows you to effectively use composite data types for readability and still preserve design synthesizability. Furthermore, it is easier to synthesize a design that uses a package of constants than one that uses generics. In other words, it is easier for an engineer to learn how to get the synthesis tool to work with a package of constants than with generics. Some synthesis tools have longer runtimes for designs with composite data types.

You can use a package of constants in much the same way that you use generics. Packages of constants are easier to use than generics if a lot of parameters are involved. Packages also typically have better support from synthesis tools than generics do. However, using a package of constants means that you cannot use multiple instances of a design with different parameters in a single design unit. Instead, you need a unique entity and a unique package for each recurring design unit. Also, a change in a package that uses nondeferred constants causes recompilation or resynthesis of the designs referring to the package, even if the changed parameter does not affect a given design. Also, a package of constants requires you to maintain a separate file or library. Compare using a package of constants with using generics for parameterization after considering the intended scope of an application. As a general practice, use a package of constants for designs that have many parameters and are not instantiated multiple times within a large design. For example, a memory-controller design that translates host/CPU cycles into memory cycles is unlikely to be instantiated multiple times in a design. Such designs should use a package of constants.
You should use generics for designs such as bus interfaces, counters, adders, and linear-feedback shift registers.

**Tip 3: Generate statements**

You can implement many digital systems, such as memories, as regular iterative compositions of subsystems. For example, memories comprise rectangular arrays of storage cells. Designers prefer such implementations, because they make it easier to produce compact, proven, area-efficient layouts, thus reducing cost. If you can express a design as a repetition of some subsystem, you should be able to describe the subsystem once and then describe how it is to be repeatedly instantiated, rather than describe each instantiation individually (Reference 2). You can use generate statements to effectively produce iterative structures of a design. Generate statements are concurrent VHDL constructs that may contain further concurrent statements for replication. When you use generate statements in conjunction with generics or constants, they can efficiently generate repetitive structures. Consider a situation in which you need to drive a 32-bit off-chip data bus from on-chip using eight output enables through an output pad (Listing 5). This example instantiates 32 pad cells for the data bus. Note the use of the "range" and "length" attributes. These attributes also promote reuse in that they use the previously defined bus widths for the data bus. Also note the use of "i/4" in the assignment of the output-enable signals to the pad cell. The synthesis tool should be intelligent enough to truncate the division to an integer value to give the proper assignment of dataoe(3) to data(31:24), dataoe(2) to data(23:16), and so on.

Listing 6 illustrates the use of generate statements with iterative structures of concurrent statements to create a register from a flip-flop. You can also use generate statements to conditionally create, modify, or remove structures. This technique involves code-level optimization, which removes unwanted structures at elaboration time. With the use of generics or packages of constants, this technique can be useful in creating a reusable design. Using conditional generate statements, you can enable or disable logic that implements certain features instead of manually removing the code or optimizing via synthesis. As an example of conditional code inclusion and exclusion, you can synchronize an output to the clock or set it combinatorially, controlled by the constant declaration CONSTANT SYNC_OUTPUTS : BOOLEAN := TRUE. This technique lets you generate a synchronous or a combinatorial output (Listing 7).

The generate statement is a powerful tool to control the inclusion or exclusion of logic. It is useful for designs that repeatedly use blocks of logic, such as flip-flops, in an iterative structure. These blocks form registers, pad cells, and many other structures. Many designers use generate statements to instantiate cells, as the pads example illustrates, but you can also use generate statements to conditionally create, modify, or remove sections of VHDL code. Generate statements are powerful tools promoting design reuse. A few more examples that show the application of generate statements are choosing the implementation of a latch-based or flip-flop-based register; including a fixed, round-robin, or other arbitration scheme in a bus-arbiter design; and including only those bits of an interrupt controller that you know you are going to use. Consider the case in which registered interrupts are entering the interrupt controller.
If these inputs go through a substantial amount of combinatorial logic before being routed to other registers, then the use of generate statements to include only the necessary flip-flops will help a synthesis tool to significantly reduce the gate count. Be aware that some synthesis tools cannot optimize across flip-flops. In these cases, even if you know that an input, such as an unused interrupt, is always tied high, the synthesis tool can't use this information to reduce the gate count of the synthesized design.

**Tip 4: Ports**

In many instances, you can selectively disable logic by tying off certain ports to default values. When synthesized with a top-down approach, the synthesis tool uses "optimization by constant propagation": it optimizes that path, taking the tied-off value into consideration. You can later remove the tied-off ports from the entity. Consider a three-AND-gate design (Figure 4a). If you tie one of the inputs to a zero (Figure 4b), then the resulting logic eliminates all the AND gates, and the output, F, is always at logic 0. The same situation is true for port outputs. By leaving unused port outputs open (zo => open), you can eliminate the logic that creates these outputs when you adopt a top-down synthesis approach.

**Tip 5: Unconstrained arrays**

Using unconstrained arrays is a helpful method of reusing designs for variable-width implementations. You should be careful when using attributes such as "range" and "length" in the design to avoid runtime and elaboration-time errors. Unconstrained arrays are particularly suitable for address, data, and register widths. You can use these arrays for formal parameters in functions and procedures as well as for entity ports. VHDL allows the use of unconstrained-array types that let you indicate the type of index values without specifying the index bounds. Unconstrained arrays are useful for making designs that you can reuse in different applications just by modifying their bit widths. The previous counter example uses unconstrained arrays for the count output (Listing 8). This technique lets you connect the counter entity to array signals of any size or with any range of index values. Note the use of the VHDL attribute "range" to create a signal of the same width and range specification as the port count. You cannot synthesize this design by itself, and you have to instantiate it in a top-level entity to bind the array values to a finite range (Listing 9). You must synthesize the code in Listing 9 in a top-down manner so that you can synthesize the counter along with the rest of the design.

Another use of unconstrained arrays occurs in functions and procedures. You should write functions and procedures that you design for synthesis as generically as possible, independently of bit widths. Consider an example of a binary-code-to-gray-code converter. To create a gray code from a binary code, use the algorithm in Figure 5a. Figure 5b is an example of how to convert binary 100 to its gray-code equivalent of 110. Table 1 shows the gray codes for the 3-bit binary values that the algorithm of Figure 5a creates. You hard-code and optimize this algorithm for a 3-bit case. When the design has to accommodate more counts, the function has to change, requiring you to revalidate all the logic.
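A width-independent function avoids this rework. As a rough sketch of the idea (not the article's Listing 10; the function and package names are illustrative, and a descending, "downto" index range is assumed), a binary-to-gray converter can be written over an unconstrained array:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

package gray_pkg is
  function bin2gray (b : std_logic_vector) return std_logic_vector;
end gray_pkg;

package body gray_pkg is
  -- Works for any width; a descending ("downto") index range is assumed.
  function bin2gray (b : std_logic_vector) return std_logic_vector is
    variable g : std_logic_vector(b'range);
  begin
    g(b'left) := b(b'left);                  -- MSB is copied unchanged
    for i in b'left - 1 downto b'right loop
      g(i) := b(i + 1) xor b(i);             -- each lower bit: XOR of adjacent input bits
    end loop;
    return g;
  end bin2gray;
end gray_pkg;
```

For the 3-bit input "100" this returns "110", matching the example of Figure 5b.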
Writing a generic function that is independent of the bit-vector lengths makes efficient reuse possible. Listing 10 is a bit-width-independent implementation of the binary-code-to-gray-code converter. As another example, consider the functions and procedures in the IEEE std_logic libraries. Most of these functions and procedures are implemented using unconstrained arrays to support efficient reuse.

**Tip 6: VHDL attributes**

A few attributes of composite types are useful in creating reusable designs. The attributes "left," "right," "range," "length," "low," and "high" are synthesizable and make the code independent of data type. Refer to the examples using unconstrained arrays (Listing 8 and Listing 9), where the function Gray2bin and the entity counter use the "range" attribute to promote reusability.

**Tip 7: Configuration specs**

You use configuration specifications to bind component instances to design entities. You can also use these configurations to pass parameters such as generics at the top-most level in a testbench, to select an architecture for an entity, or to override port mappings in an instantiation. Some synthesis tools do not support configuration specifications. Consider the previous counter example that illustrates the use of generics for parameterization. Listing 11 illustrates the same counter with another architecture that buffers the counter outputs with a generate statement. The counter is now instantiated in a top-level design using two instances of the counter (Listing 12). A configuration specification configures the counter in the entity top, as shown in Listing 13. Configuration specifications let you configure various levels of the design's hierarchy.

**Tip 8: Block statements**

Block statements are VHDL constructs that allow inline design partitioning. For example, if you partition a design such that the datapath exists in a separate VHDL entity, then you can partition the architecture for that entity using block statements. Block statements are a method of grouping related logic. Block statements also provide the ability to declare signals within the blocks, so that, if you remove a block, unnecessary signals do not remain unconnected in the code. You can combine a generate statement with the block statement to selectively include or exclude blocks.

**Tip 9: Unused ports**

In a hierarchical design, if you do not use certain ports in an entity, then the usual practice is to connect them to a dummy signal. With a top-down synthesis approach, this scenario makes the synthesizer assume that you've connected the signal to a net. You can avoid this problem by leaving the port unconnected or by specifying the VHDL keyword "open."

**Tip 10: Preprocessors**

In many situations, designers cannot accomplish what they want using the available language features. In some cases, it is desirable to see only the code that is relevant to the design. In such cases, you can use a preprocessor to add, eliminate, or modify code for a specific application through the use of preprocessor directives.

Author info

Subbu Meiyappan is a senior design engineer at VLSI Technology. He has worked for the company for nearly three years, designing, developing, synthesizing, simulating, and validating high-performance IP blocks for PCI, ARM-ASB-based devices, and high-performance ASICs. He has a BE from Annamalai University (Annamalai Nagar, India) and an MS from Tennessee Technological University (Cookeville, TN). His interests include computer architecture, design automation, volleyball, and travel.
Ken Jaramillo is a staff engineer at VLSI Technology (www.vlsi.com). In his three years with the company, he has worked on high-speed networking designs, such as fiber-distributed data interfaces, Firewire, and high-speed satellite modems. He has a BSEE from the University of Missouri (Kansas City, MO) and a BScCoE from the University of Missouri (Columbia, MO). His hobbies include basketball, rock climbing, and travel.

Peter Chambers is an engineering fellow at VLSI Technology, where he has worked for six years developing many PCI-based designs, ASICs, chip sets, and reusable IP cores. He has a BS from the University of Exeter (UK) and an MS from Arizona State University (Tempe, AZ). He is a member of both the IEE and the IEEE.
{"Source-Url": "https://www.edn.com/Pdf/ViewPdf?contentItemId=4360835", "len_cl100k_base": 5022, "olmocr-version": "0.1.53", "pdf-total-pages": 8, "total-fallback-pages": 0, "total-input-tokens": 16863, "total-output-tokens": 5501, "length": "2e12", "weborganizer": {"__label__adult": 0.0010633468627929688, "__label__art_design": 0.0018472671508789065, "__label__crime_law": 0.0006918907165527344, "__label__education_jobs": 0.0011911392211914062, "__label__entertainment": 0.00020015239715576172, "__label__fashion_beauty": 0.0005159378051757812, "__label__finance_business": 0.0004127025604248047, "__label__food_dining": 0.0007300376892089844, "__label__games": 0.00151824951171875, "__label__hardware": 0.08197021484375, "__label__health": 0.0012750625610351562, "__label__history": 0.0006136894226074219, "__label__home_hobbies": 0.0004193782806396485, "__label__industrial": 0.0040435791015625, "__label__literature": 0.0002658367156982422, "__label__politics": 0.0005679130554199219, "__label__religion": 0.0016336441040039062, "__label__science_tech": 0.28076171875, "__label__social_life": 0.0001131296157836914, "__label__software": 0.00887298583984375, "__label__software_dev": 0.6083984375, "__label__sports_fitness": 0.0008625984191894531, "__label__transportation": 0.0018777847290039065, "__label__travel": 0.0003676414489746094}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 25654, 0.0153]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 25654, 0.6419]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 25654, 0.89419]], "google_gemma-3-12b-it_contains_pii": [[0, 2655, false], [2655, 6214, null], [6214, 9764, null], [9764, 13838, null], [13838, 18071, null], [18071, 21832, null], [21832, 25088, null], [25088, 25654, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2655, true], [2655, 6214, null], [6214, 9764, null], [9764, 13838, null], [13838, 18071, null], [18071, 21832, null], [21832, 25088, null], [25088, 25654, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 25654, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 25654, null]], "pdf_page_numbers": [[0, 2655, 1], [2655, 6214, 2], [6214, 9764, 3], [9764, 13838, 4], [13838, 18071, 5], [18071, 21832, 6], [21832, 25088, 7], [25088, 25654, 8]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 25654, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
7dfdf1687c0cdb76e260d36fa5a33fae4b322abd
The problem

- How to protect the confidentiality of sensitive data in the database?
- Sensitive data examples: credit card numbers, medical data, new product specifications, etc.
- Possible risks to data confidentiality:
  - Use of weak authorization (for example, weak or blank passwords), leading to access to confidential information (for example, payrolls) by unauthorized persons
  - Misconfiguration of access control
  - Authorized backdoors into the database (read-only accounts, non-production databases, backups)
  - Database administrators can access, inadvertently or maliciously, online data and backup data
  - SQL injection attacks through a poorly coded Web application

The problem: attackers & protection goals

- Who could be the attackers? How can their attacks be made more difficult (i.e., what is the protection goal)?
- System and database administrators:
  - They may have full access to everything in their administrative domain
  - Protection goal: make it difficult and time-consuming for them to read confidential information; use separation of duties; employment screening
- Development staff:
  - They have an intimate knowledge of the code; they often obtain troubleshooting read-only rights to the production database to deal with emergency production problems
  - Protection goal: prevent compromise of the data even when they have access to the database

The problem: attackers & protection goals

- Who could be the attackers? How can their attacks be made more difficult (i.e., what is the protection goal)?
- Application crackers:
  - They try to circumvent application security to gain unauthorized access. They can be considered unauthorized users, but they may also be able to impersonate a legitimate user.
  - Worst case: the cracker gains administrative privileges
  - Protection goal: make access to the database "difficult" and time-consuming; mitigate SQL injections
- Legitimate users:
  - A legitimate user may try to elevate his or her privileges, or to impersonate another legitimate user
  - Protection goal: strong authentication controls

External requirements

- Legislation requiring the protection of data confidentiality:
  - Health Insurance Portability & Accountability Act (HIPAA)
  - Sarbanes-Oxley Act (SOX)
  - Gramm-Leach-Bliley Act (GLBA)
  - Children’s Online Privacy Protection Act (COPPA)
- Business compliance

The problem: attackers & protection goals

- Who could be the attackers? How can their attacks be made more difficult (i.e., what is the protection goal)?
- "Traditional thieves":
  - They might steal the database or the backup media
  - While database servers are typically kept in locked and limited-access data centres (physical security), backup media might leave the premises and are more exposed to theft

External requirements

- Health Insurance Portability & Accountability Act (HIPAA):
  - It requires data safeguards that protect against "intentional or unintentional use or disclosure of protected health information", and
  - it mandates "to ensure the confidentiality, integrity and availability of all electronic protected health information the covered entity creates, receives, maintains, or transmits"
  - It mandates "to implement a mechanism to encrypt and decrypt electronic protected health information"

External requirements

Business compliance:
- Payment Card Industry (PCI) Data Security Standard
- Stored cardholder data must be rendered unreadable, and the recommended controls include cryptographic methods
- Adopted by American Express, Visa, MasterCard and several other payment card companies

The solution

- We have already discussed authentication and access control as means to allow access to the data to authorized persons only
- However, authentication & access control may not be enough (DB administrators can still access and see the data)
- If data are sensitive, it is also possible to encrypt them
- Data encryption is the last barrier protecting the confidentiality of sensitive data

Encrypting the database

- Which type of encryption (symmetric or asymmetric)?
- Encryption vs. obfuscation
- Cryptographic risks
- What should be encrypted?
- Which component should perform the encryption?

Which type of encryption?

- Symmetric key cryptography
  - DES, AES
  - Faster than asymmetric cryptography
  - Pros:
    - Performance
  - Cons:
    - Key management:
      - Since the same key is used both to encrypt and decrypt, the key must be distributed to every entity that needs to work with the data
      - If the key is obtained by an attacker, then the confidentiality (and integrity) of the data are at risk
      - Once the key is at the decrypting location, it must be secured so that an attacker cannot steal it

Which type of encryption?

- Asymmetric key (i.e., public-private key) cryptography
  - The keys used to encrypt and decrypt the data are different. This doesn't require a shared secret, BUT
  - it still requires the owner of the keys to keep the private key secret

Obfuscation

- In cryptography, obfuscation refers to encoding the input data before it is sent to a hash function or other encryption scheme.
- This technique helps to make brute-force attacks infeasible, as it is difficult to determine the correct cleartext

Obfuscation

- In certain cases, obfuscation is preferable to encryption
- Example: an audit report on a medical system
  - This report may be generated for an external auditor and contain sensitive information. The auditor will be examining the report for information that indicates possible cases of fraud or abuse.
  - Assume that management has required that names, Social Security numbers and other personal information should not be available to the auditor except on an as-needed basis.
  - The data needs to be presented to the auditor, but in a way that allows the examination of all data, so that patterns in the data may be detected.
  - Encryption would be a poor choice in this case, as the data would be rendered into values outside the range of normal ASCII characters, which would be impossible to read.
- A better choice might be to obfuscate the data with a simple substitution cipher. While this is not considered encryption, it may be suitable for this situation.

Transposition ciphers example

- Rail Fence cipher
  - STEP 1: Write the message letters out diagonally over a number n of rows
  - STEP 2: Then read off the cipher row by row
- Example
  - Original message: meet me after the toga party
  - STEP 1 transforms it into the two rails:
    m e m a t r h t g p r y
    e t e f e t e o a a t
  - STEP 2: ciphertext MEMATRHTGPRYETEFETEOAAT

Obfuscation

Example of an obfuscation function using substitution (in Oracle PL/SQL):

```sql
-- Package specification (added so the example compiles as a unit)
create or replace package obfs as
  function obfs ( clear_text_in varchar2 ) return varchar2;
  function unobfs ( obfs_text_in varchar2 ) return varchar2;
end obfs;
/

create or replace package body obfs as
  xlate_from varchar2(62) := '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz';
  xlate_to   varchar2(62) := 'nopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklm';

  -- Obfuscate: substitute each character according to the translation tables.
  function obfs ( clear_text_in varchar2 ) return varchar2 is
  begin
    return translate( clear_text_in, xlate_from, xlate_to );
  end;

  -- Reverse the substitution to recover the cleartext.
  function unobfs ( obfs_text_in varchar2 ) return varchar2 is
  begin
    return translate( obfs_text_in, xlate_to, xlate_from );
  end;
end;
/
```

NOTE 1: This obfuscation is reversible, that is, it is possible to recover the cleartext data from the obfuscated data.

NOTE 2: The Oracle `translate` function replaces a sequence of characters in a string with another set of characters: it replaces the 1st character in the `string_to_replace` with the 1st character in the `replacement_string`, then the 2nd character with the 2nd character, and so on.

Obfuscation

- Another obfuscation technique: masking
  - It is useful in situations where it is only necessary to display a portion of the data.
  - Examples: the receipts printed at gas stations and convenience stores, the e-receipt from Expedia.
  - The last 4 digits of the credit card number are often displayed as clear text, while the rest of the number is masked with a series of X's.
  - This is different from the previous technique in that the clear text cannot be reconstructed from the displayed data.

Cryptographic risks

- Main risk: lost keys
  - When the encryption key is lost, there is no "undelete" or "data recovery" program that can undo the encryption!
- Key management is fundamental
  - If an attacker can access the key, or insert a known key into the system, the encryption is broken
  - If the key-generation routines do not use sufficiently random numbers, the attacker can guess the key
- All in all, the cryptographic infrastructure must be designed and implemented correctly

What to encrypt?

- The full database (i.e., all tables)
- Cells (i.e., the value of a specific row or field within a row)

**Where should encryption/decryption be performed?**

- The application must request encryption and decryption.
- CASE 1: keys are managed outside the DBMS (encryption/decryption is performed by some external package)
- CASE 2: keys are managed by the DBMS (encryption/decryption is performed by functions provided by the DBMS); however, encryption/decryption has to be requested by the application
- CASE 3: encryption, decryption and key management are performed by the DBMS engine "automatically"

**CASE 1: Encryption at the application level**

Application-level encryption (naïve solution):
- the database application developer uses an existing encryption library and embeds the key in the code
- keys are generated outside the DBMS (i.e., by the encryption library)
- Hence the DBMS does not know the encryption keys

**CASE 1: Encryption at the application level**

- The application encrypts data before inserting them into the DB. Schematically:
  - Key = Cryptopackage.generatekey(param)
  - Encdata = Cryptopackage.Encrypt(data, key, algo)
  - SQL INSERT Encdata
- The application decrypts data after having read them from the DB:
  - SQL SELECT data FROM Table
  - Cryptopackage.Decrypt(data)

CASE 1 example (javax.crypto) 1 – generate a key

```java
import javax.crypto.KeyGenerator;
import java.security.Key;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class DESExample {
    public static void main(String[] args) {
        try {
            // 1 - generate a (DES) secret key outside the DBMS
            KeyGenerator kg = KeyGenerator.getInstance("DES");
            Key key = kg.generateKey();
            System.out.println("Key: " + Arrays.toString(key.getEncoded()));
        } catch (NoSuchAlgorithmException e) {
            System.out.println("Error: " + e.getMessage());
        }
    }
}
```

CASE 1 example (javax.crypto) 2 – generate a cipher (provider)

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import java.security.GeneralSecurityException;
import java.security.Key;

public class DESExample {
    public static void main(String[] args) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("DES");
            Key key = kg.generateKey();
            // 2 - ask the provider for a Cipher implementing the DES algorithm
            Cipher cipher = Cipher.getInstance("DES");
            System.out.println("Cipher: " + cipher.getAlgorithm());
        } catch (GeneralSecurityException e) {
            System.out.println("Error: " + e.getMessage());
        }
    }
}
```

CASE 1 example (javax.crypto) 3 – encrypt data

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import java.security.GeneralSecurityException;
import java.security.Key;

public class DESExample {
    public static void main(String[] args) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("DES");
            Key key = kg.generateKey();
            Cipher cipher = Cipher.getInstance("DES");
            // 3 - initialize the cipher for encryption with the key, then encrypt
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] data = "Hello World!".getBytes();
            System.out.println("Original: " + new String(data));
            byte[] encrypted = cipher.doFinal(data);
            System.out.println("Encrypted: " + new String(encrypted));
        } catch (GeneralSecurityException e) {
            System.out.println("Error: " + e.getMessage());
        }
    }
}
```

CASE 1 example (javax.crypto) 4 – decrypt data

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import java.security.GeneralSecurityException;
import java.security.Key;

public class DESExample {
    public static void main(String[] args) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("DES");
            Key key = kg.generateKey();
            Cipher cipher = Cipher.getInstance("DES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] encrypted = cipher.doFinal("Hello World!".getBytes());
            // 4 - re-initialize the same cipher for decryption with the same key
            cipher.init(Cipher.DECRYPT_MODE, key);
            byte[] decrypted = cipher.doFinal(encrypted);
            System.out.println("Decrypted: " + new String(decrypted));
        } catch (GeneralSecurityException e) {
            System.out.println("Error: " + e.getMessage());
        }
    }
}
```

See the javax.crypto package at: http://java.sun.com/j2se/1.4.2/docs/api/javax/crypto/package-summary.html

**CASE 1 - issues**

What may happen?
- As more applications need access to encrypted data, the key is duplicated in those applications.
- So, the number of people who know the key may become very large.
- An attacker can easily extract the key from the code.
- Moreover, what happens if the organization decides to change the key? One must find all the applications using the key and modify them.

**CASE 2 – Encryption/decryption are called by the DB application**

- The application encrypts/decrypts data using a symmetric key created (and stored) by the DBMS.
- How to create a symmetric key in the DBMS (SQL Server):
  - `CREATE SYMMETRIC KEY` (Transact-SQL statement)

```sql
CREATE SYMMETRIC KEY SSN_Key_01
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE HealthC;
```

- The previous example creates a symmetric key called SSN_Key_01 by using the AES 256 algorithm, and then encrypts the new key with the key contained in certificate HealthC.
- NOTE: in this example the DBMS protects the symmetric key by encrypting it using the key contained in a certificate.
- The application must obtain the key from the DBMS before using it:
  - `OPEN SYMMETRIC KEY` (decrypts the key and loads it into memory)

```sql
OPEN SYMMETRIC KEY SSN_Key_01
DECRYPTION BY CERTIFICATE HealthC;
```

- Transact-SQL syntax:
  - `OPEN SYMMETRIC KEY Key_name DECRYPTION BY <decryption_mechanism>`
  - `Key_name` is the name of the symmetric key to be opened
  - The decryption mechanism is the mechanism that was used to encrypt the symmetric key
- Example:
  - `OPEN SYMMETRIC KEY SSN_Key_01 DECRYPTION BY CERTIFICATE HealthC;`

**CASE 2: the application encrypts the data**

```sql
USE trialdb
GO
-- Create a column in which to store the encrypted data (see NOTE 1 below).
ALTER TABLE HumanResources.Employee
    ADD EncryptedNationalIDNumber varbinary(128);
GO
-- Open the symmetric key with which to encrypt the data.
OPEN SYMMETRIC KEY SSN_Key_01
    DECRYPTION BY CERTIFICATE HealthC;
-- Encrypt the value in column NationalIDNumber with symmetric key SSN_Key_01.
-- Save the result in column EncryptedNationalIDNumber.
UPDATE HumanResources.Employee
SET EncryptedNationalIDNumber = EncryptByKey(Key_GUID('SSN_Key_01'), NationalIDNumber);
GO
```

The `Key_GUID` function returns the GUID of a symmetric key in the database. The GUID serves as an identifier for the key and is stored in the metadata (SELECT key_guid FROM sys.symmetric_keys). It is used for finding the corresponding key.

**NOTE 1**

- The symmetric key encryption functions all return varbinary data with a maximum size of 8,000 bytes.
- The `Decrypt` functions return up to 8,000 bytes of clear-text varbinary data from the encrypted cipher text, which also limits the amount of data you can encrypt without breaking it into chunks.
- Since the `Decrypt` functions also return varbinary data, it is necessary to cast the decrypted data back to the original data type for use.

**CASE 3 – Encryption/decryption are transparent to the DB application**

**CASE 3 SQL Server – Transparent Data Encryption**

- Transparent Data Encryption (TDE) allows users to encrypt the sensitive data in the database and to protect the keys that are used to encrypt the data with a certificate.
- TDE performs real-time I/O encryption and decryption of the data and log files.
- The encryption uses a database encryption key (DEK), which is stored in the database boot record for availability during recovery.
- The DEK is a symmetric key secured by using a certificate stored in the master database of the server.
- Encryption of the database file is performed at the page level. The pages in an encrypted database are encrypted before they are written to disk and decrypted when read into memory.
- With TDE, the user DB application does not have to encrypt/decrypt data by itself.
- With TDE, all the database tables are encrypted.

**TDE – encryption key hierarchy**

- Purpose: to organize encryption keys in a cryptographic hierarchy
- A Service Master Key (SMK) is associated with each DB server instance. This SMK is protected by the Windows OS via the Windows Data Protection API.
**CASE 3 SQL Server – TDE Architecture**

The SMK protects the database master key (DMK), which is stored at the user database level and which in turn protects certificates and asymmetric keys. These in turn protect symmetric keys, which protect the data.

The following example illustrates encrypting the AdventureWorks database using a certificate installed on the server and named MyServerCert (the password below is only a placeholder):

```sql
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<use a strong password here>';
CREATE CERTIFICATE MyServerCert WITH SUBJECT = 'DEK certificate';
USE AdventureWorks;
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_128 ENCRYPTION BY SERVER CERTIFICATE MyServerCert;
ALTER DATABASE AdventureWorks SET ENCRYPTION ON;
```

What is encrypted

- TDE operates at the I/O level through the buffer pool. Thus any data that is written into the database file is encrypted.
- Snapshots and backups are encrypted.
- The transaction log is also encrypted (but additional caveats apply).
- Data that is in use, however, is not encrypted, because TDE does not provide protection at the memory or transit level.

Comparison with cell-level encryption

- Cell-level encryption has some advantages over DB-level encryption:
  - It offers a more granular level of encryption; one needs to encrypt only the data that are sensitive
  - Data is not decrypted until it is used, so that even if a page is loaded in memory, sensitive data is not in clear text
  - It supports explicit key management; users can have their own keys for their own data
- And some disadvantages:
  - Applications have to be changed
  - The domains of columns storing encrypted data need to be changed to varbinary
  - Performance is affected, because indexes on encrypted columns offer no advantage, so equality and range queries result in full table scans. The performance of a basic query (which selects and decrypts a single encrypted column) tends to be around 20% worse (versus 3-5% on average for TDE)

Additional considerations

TDE and cell-level encryption accomplish two different objectives:
- If the amount of data that must be encrypted is very small, or if the application can be custom-designed to use it and performance is not a concern, cell-level encryption is to be preferred
- Otherwise, TDE is to be preferred
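To round off the cell-level path of CASE 2, a decryption query might look like the following sketch. It reuses the SSN_Key_01 key and the EncryptedNationalIDNumber column from the earlier example; the nvarchar(15) target type is an assumption about the original column's type.

```sql
-- Open the key, decrypt, and cast the varbinary result back to its original type (see NOTE 1).
OPEN SYMMETRIC KEY SSN_Key_01 DECRYPTION BY CERTIFICATE HealthC;

SELECT NationalIDNumber,
       CONVERT(nvarchar(15), DecryptByKey(EncryptedNationalIDNumber))
           AS DecryptedNationalIDNumber
FROM HumanResources.Employee;

CLOSE SYMMETRIC KEY SSN_Key_01;
```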
{"Source-Url": "https://www.cs.purdue.edu/homes/bertino/426Fall2009/lecture22-DBEncryption.pdf", "len_cl100k_base": 4273, "olmocr-version": "0.1.50", "pdf-total-pages": 12, "total-fallback-pages": 0, "total-input-tokens": 24875, "total-output-tokens": 5166, "length": "2e12", "weborganizer": {"__label__adult": 0.0005173683166503906, "__label__art_design": 0.00033092498779296875, "__label__crime_law": 0.00658416748046875, "__label__education_jobs": 0.0009393692016601562, "__label__entertainment": 7.557868957519531e-05, "__label__fashion_beauty": 0.0001957416534423828, "__label__finance_business": 0.0014429092407226562, "__label__food_dining": 0.0003762245178222656, "__label__games": 0.0010480880737304688, "__label__hardware": 0.002101898193359375, "__label__health": 0.0007343292236328125, "__label__history": 0.0002460479736328125, "__label__home_hobbies": 0.00015032291412353516, "__label__industrial": 0.0006918907165527344, "__label__literature": 0.00023448467254638672, "__label__politics": 0.0004000663757324219, "__label__religion": 0.0004472732543945313, "__label__science_tech": 0.04931640625, "__label__social_life": 0.00010722875595092772, "__label__software": 0.0556640625, "__label__software_dev": 0.87744140625, "__label__sports_fitness": 0.00027680397033691406, "__label__transportation": 0.0004210472106933594, "__label__travel": 0.00017344951629638672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 19904, 0.00765]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 19904, 0.8634]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 19904, 0.82196]], "google_gemma-3-12b-it_contains_pii": [[0, 1369, false], [1369, 3256, null], [3256, 4671, null], [4671, 6591, null], [6591, 8800, null], [8800, 10043, null], [10043, 12864, null], [12864, 14459, null], [14459, 16612, null], [16612, 17300, null], [17300, 19483, null], [19483, 19904, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1369, true], [1369, 3256, null], [3256, 4671, null], [4671, 6591, null], [6591, 8800, null], [8800, 10043, null], [10043, 12864, null], [12864, 14459, null], [14459, 16612, null], [16612, 17300, null], [17300, 19483, null], [19483, 19904, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19904, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19904, null]], "pdf_page_numbers": [[0, 1369, 1], [1369, 3256, 2], [3256, 4671, 3], [4671, 6591, 4], [6591, 8800, 5], [8800, 10043, 6], [10043, 12864, 7], [12864, 14459, 8], [14459, 16612, 9], [16612, 17300, 10], [17300, 19483, 11], [19483, 19904, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19904, 0.0]]}
olmocr_science_pdfs
2024-11-27
2024-11-27
0276cdd65dd0116eeccea768978cdf1cca4f3eeb
Bart Kienhuis
UC Berkeley
Cory Hall 524, Berkeley
California, 94720 USA
kienhuis@eecs.berkeley.edu

Edwin Rijpkema
Leiden University
P.O. Box 9512
Leiden, The Netherlands
rijpkema@liacs.nl

Ed Deprettere
Leiden University
P.O. Box 9512
Leiden, The Netherlands
edd@liacs.nl

ABSTRACT

This paper presents the Compaan tool that automatically transforms a nested loop program written in Matlab into a process network specification. The process network model of computation fits better with the newly emerging kind of embedded architectures that use coprocessors. Process networks can describe both fine-grained and coarse-grained parallelism, making the mapping of the applications easier.

Keywords
Process Networks, Matlab, Mapping, Embedded Architectures.

1. INTRODUCTION

A new kind of embedded architecture is emerging that is composed of a microprocessor, some memory, and a number of dedicated coprocessors that are linked together via some kind of programmable interconnect (see Figure 1). These architectures are devised to be used in real-time, high-performance signal processing applications. Examples of these new architectures are the Prophid architecture [12], the Jacobium architecture [15], and the Pleiades architecture [1], to be used in, respectively, video consumer appliances, adaptive radar processing, and mobile communication devices. These architectures have in common that they exploit parallelism using instruction-level parallelism offered by the microprocessor and coarse-grained parallelism offered by the coprocessors. Given a set of applications, the hardware/software codesign problem is to determine what needs to execute on the microprocessor and what on the coprocessors and, furthermore, what each coprocessor should contain, while being programmable enough to support the set of applications.

The applications that need to execute on the architectures are typically specified using an imperative model of computation, most commonly C or Matlab. In Figure 1, for example, we show an algorithm written in Matlab. Although the imperative model of computation is well suited to specify applications, it does not reveal parallelism, due to its inherently sequential nature. Compilers exist that are able to extract instruction-level parallelism from the original specifications at a very fine level of granularity. They are, however, unable to exploit the coarse-grained parallelism offered by the coprocessors of the architectures. This makes the mapping of the applications onto the architecture difficult. Instead, a better specification format would be an inherently parallel model of computation like Process Networks [7; 11]. This model describes an application as a network of concurrently executing processes. It describes parallelism naturally, from the very fine-grained to the very coarse-grained, it does not pre-impose any particular schedule, and it describes each process in a process network using an imperative language. The mapping then becomes putting the processes either on a microprocessor or on a coprocessor, as shown by tools like ORAS [10] or SPADE [13]. Using these tools, a Y-chart [8] can be constructed, allowing the quality assessment of mappings on architectures.

This paper describes the Compaan tool that automatically transforms a Matlab application into a process network description, as shown in Figure 1. It converts a Matlab application into a polyhedral reduced dependence graph, which is subsequently converted into a process network description.
The Compaan tool is confined to operate on affine nested loop programs (NLPs) [6], but the applications of interest are often described this way. The Compaan tool describes applications as a process network, which is a much more coarse-grained description than a Control Data Flow Graph (CDFG). Moreover, it does a data-dependency analysis on the array domain that goes far beyond the conventional data-dependence analysis performed on CDFGs. Finally, Compaan synthesizes the processes in such a way that each process is a possible implementation model for a coprocessor [8] or a piece of code that executes on the microprocessor.

The outline of the paper is as follows. Section 2 describes the way we decompose the transformation task that Compaan performs into smaller tasks. Section 3 deals with the polyhedral reduced dependence graph (PRDG), which is the model from which Compaan generates a process network. Section 4 explains how processes are structured in so-called SBF objects. Section 5 and Section 6 describe the tools inside Compaan in more detail. Section 7 describes how we make the process networks available. Section 8 gives some results and Section 9 gives conclusions.

2. THE COMPAAN TOOL

We developed the Compilation of Matlab to Process Networks (Compaan) tool, which transforms a nested loop program written in Matlab into a process network specification. The tool does this transformation in a number of steps, shown in Figure 2, leveraging many techniques available in the Systolic Array community [16]. In Figure 2, a box represents a result and an ellipsoid represents an action or tool.

Figure 2: Compaan consists of three tools that transform a Matlab specification into a process network specification.

Compaan starts the transformation by converting a Matlab specification into a single-assignment code (SAC) specification. This describes all parallelism available in the original Matlab specification. Next, it derives the polyhedral reduced dependence graph (PRDG) specification from the SAC. From this PRDG, the network description and the individual processes are derived. The three steps done in Compaan are realized by separate tools: respectively, MatParser, DgParser, and Panda. The last-mentioned tool, Panda, uses the PRDG description to generate the network description and the contents of the processes that make up the process network description. The processes are structured in a particular way based on the SBF model, which is explained in Section 4. The SBF model is equivalent to Process Networks [7], with the exception that processes in the SBF model are more structured. The generation of the processes is further decomposed into domain scanning, domain reconstruction, and linearization. In Section 5 and Section 6, the three tools are discussed in more detail. In the next two sections, we discuss what a PRDG is as well as what the SBF model is.

3. POLYHEDRAL REDUCED DEPENDENCE GRAPH

A polyhedral reduced dependence graph is a compact representation of a dependence graph (DG) using parameterized polyhedra, making a DG description more amenable to further mathematical manipulation. A polyhedral reduced dependence graph (PRDG) is a directed graph $G = (V, E)$, where $V$ is a set of node domains and where $E$ is a set of edge domains. In Figure 3, a PRDG is shown consisting of 5 node domains and 12 edge domains. It is the PRDG representation of the algorithm given in Figure 1.

Figure 3: An example of a polyhedral reduced dependence graph.
3.1 Node domain

A node domain is a collection of polytopes [17], a function, and a set of port domains. An iteration domain is defined by a polytope; each point contained in it corresponds to a node in the original DG. With every point inside this iteration domain, the same function is associated. A function has a number of input ports and output ports. An input port corresponds to an argument of the function; an output port corresponds to a value the function returns. The points of a node domain from which an input port reads data, and the points of a node domain to which an output port writes data, form the input port domain (IPD) and the output port domain (OPD), respectively.

3.2 Edge domain

An edge domain is the ordered pair $(v_i, v_j)$ of node domains together with the ordered pair $(p_i, p_j)$ of port domains, where $p_i$ is the OPD of $v_i$ and $p_j$ the IPD of $v_j$. This ordered pair corresponds to a data dependency in a DG, which is expressed using an affine mapping $M$.

3.3 Example

To illustrate the notion of node and port domains, we show in Figure 4 a node that represents node C in Figure 3. The figure shows the node domain (a), its iteration domain with the iterators $i$ and $j$ (b), its port domains (c)-(f), and its view as it appears in the PRDG (g). Thus, the four port domains (c)-(f) partition the node domain (a) of node C. In (c) and (d), we show IPDs and in (e) and (f), we show OPDs. In (c) we identify two IPD functions, $ipd_1(i, j)$ and $ipd_2(i, j)$. In (e) we identify two OPD functions, $opd_1(i, j)$ and $opd_2(i, j)$. The figure shows one dependency, between $opd_1$ of port domain (e) and an IPD, expressed in terms of an affine function $M(i,j)$ between the different IPDs and OPDs. The PRDG is the basis for the construction of the network description in the SBF model, which is explained next.

4. THE SBF MODEL

The SBF model [8] describes an application as a network of SBF objects that are interconnected by channels. A channel is an unbounded FIFO queue that can contain an infinite sequence of tokens, i.e., a stream. SBF objects can write to a channel unconditionally, but can only read from a channel when the queue is non-empty, i.e., reads are blocking. An SBF object describes a process in terms of a controller, a state, and a set of functions, as illustrated in Figure 5. The controller is defined by a transition function $\omega$ and a binding function $\mu$:

$$\omega : C \rightarrow C, \qquad \mu : C \rightarrow F,$$

where $C$ is the space of all possible states of the controller and $F$ is the set of functions. The transition function $\omega$ determines the next state $c'$ from the current state $c$. The binding function $\mu$ determines which function is enabled in the current state $c$; exactly one function is associated with each state. When a function fires, it consumes data from the read ports, from the state, or from both. Each function knows where to get its input data from and where to send its output data. This leads to so-called function variants, which are functions with the same functionality but which bind differently to read ports, write ports, and state.
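A minimal sketch of an SBF object, assuming nothing beyond the description above (the paper's actual implementation is in Java and is not reproduced here): the controller keeps a state, $\omega$ advances it, and $\mu$ selects exactly one function variant to fire.

```python
class SBFObject:
    """Stripped-down SBF object: a controller (omega, mu), a state, and a set of functions."""

    def __init__(self, omega, mu, functions, initial_state):
        self.omega = omega          # transition function: current state -> next state
        self.mu = mu                # binding function: current state -> function name
        self.functions = functions  # the set of function variants
        self.c = initial_state      # current controller state
        self.state_memory = {}      # the state of the SBF object (data, not control)

    def fire(self, read_port, write_port):
        f = self.functions[self.mu(self.c)]        # exactly one function per state
        write_port.append(f(read_port, self.state_memory))
        self.c = self.omega(self.c)                # advance the controller

# Two illustrative function variants: one reads from a port, one from the state.
variants = {
    "from_port":  lambda port, mem: port.pop(0),
    "from_state": lambda port, mem: mem.get("last", 0),
}
obj = SBFObject(omega=lambda c: (c + 1) % 2,
                mu=lambda c: "from_port" if c == 0 else "from_state",
                functions=variants, initial_state=0)
```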
5. MATPARSER & DGPARSER

In the path from Matlab to the PRDG, Compaan uses the tools MatParser [9; 6] and DgParser [6]. MatParser is an array dataflow analysis compiler that finds all parallelism available in NLPs written in Matlab, using a very aggressive data-dependency analysis technique based on integer linear programming [4]. We focus on Matlab since many signal-processing algorithms are written in this language. Just by writing another language front-end, MatParser can also operate on NLPs written in other languages, for example C. MatParser finds whether two variables are dependent on each other and, moreover, at which iteration. It partitions the iteration space defined by the for-next loops and gives the dependence vector between partitions. For the simple program given in Figure 1, MatParser solves about a hundred parametric integer programming problems to find all data dependencies.

In Figure 6, part of the output of MatParser is shown for the algorithm given in Figure 1. It shows how the iteration space spanned by the for-next loops for $k$ and $j$ is partitioned using if/else statements. Consequently, for different partitions, different data dependencies may apply. In the case of input argument $in_0$ of function $Vect$, depending on the partition, either a value previously defined by function $Vect$ should be used, which defines a data dependency expressed by a mapping $M(\cdot)$, or a value from the original $r$ matrix (i.e., $r(k, j)$) should be used.

DgParser converts the SAC description into the PRDG description, which is a straightforward conversion. Accordingly, the shape of the node domain is given by the way the for-next loops are defined, and the partitioning of the node domain corresponds with the if/else conditions. In addition, the terms ipd and opd used in Figure 6 relate to the IPD and OPD defined in Section 3.

6. PANDA

Once DgParser has established a PRDG model of an algorithm, the Panda tool can generate a network description and the individual processes. The network description is straightforward, as it follows the topology of the PRDG. Each node in the PRDG is mapped onto a single SBF object and each edge represents an unbounded FIFO. In the case of Figure 3, nodes A, B, C, D, and E each define an SBF object and each of the 12 edges defines an unbounded FIFO. As shown in Figure 2, the Panda tool divides the generation of an SBF object into three different steps: domain scanning, domain reconstruction, and linearization, which we now discuss in more detail.

6.1 Domain Scanning

Panda needs to derive a transition function $\omega$ for each SBF object, a process we call domain scanning. For now, Panda constructs $\omega$ such that it follows the lexicographical order imposed by the original nested-loop program. Another ordering could have been selected; this may, however, lead to out-of-order problems.
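Domain scanning can be illustrated with a tiny example; the triangular bounds below are invented for the illustration and are not the loop bounds of Figure 1.

```python
def scan_domain(n):
    """Enumerate the iteration domain {(j, i) | 1 <= j <= n, j <= i <= n}
    in the lexicographic order imposed by the original nested loops."""
    for j in range(1, n + 1):
        for i in range(j, n + 1):
            yield (j, i)

# The transition function omega of the corresponding SBF object can be read off this
# order: the successor of an iteration point is simply the next point yielded here.
points = list(scan_domain(4))
assert points[0] == (1, 1) and points[-1] == (4, 4) and len(points) == 10
```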
6.2 Domain Reconstruction

MatParser generates a SAC description in which only the IPDs are explicitly specified. This means that the input arguments of the functions in the SAC output of Figure 6 are surrounded by if/else statements, while the output values are not. A consequence of this is that output values can be generated that are never used by any input port domain. Hence, Panda needs to reconstruct the OPDs. Making the output port domains explicit is illustrated in Figure 7. It shows two communicating node domains $ND_p$ and $ND_c$. The tokens produced by port domain $P_p$ of node domain $ND_p$ are to be consumed by port domain $P_c$ of node domain $ND_c$, as described by the data dependency with mapping $M$. Port domain $P_p$ is an OPD and port domain $P_c$ is an IPD. To make $P_p$ explicit, Panda applies the mapping $M(\cdot)$ derived by MatParser to the IPD $P_c$, which is an operation on Z-polyhedra [14].

![Figure 7: Making the output port domain explicit.](image)

6.3 Linearization

The channels between processes are FIFO buffers and the processes operate using blocking reads. Therefore, the order in which a consuming process reads tokens from a channel should be the same as the order in which tokens are written onto the channel by the producing process. Now, the way tokens are written on and read from channels is determined by the $\omega$ of each process, and can unfortunately easily be chosen in such a way that an out-of-order consumption pattern results. That is, tokens would need to be read from the channel earlier than they are needed, to allow the process to make progress. Panda solves the out-of-order problem by storing tokens temporarily in the state of an SBF object, which then operates as a piece of random-access memory. This requires that Panda is able to find the proper read and write addresses for this piece of memory, a process that is called linearization. The linearization method in Panda relies on methods to count the number of integral points contained in a polytope using so-called Ehrhart polynomials [2]. Using such a polynomial, and the $\omega$ of both the producing and consuming processes, Panda is able to statically derive the read and write addresses, solving the out-of-order problem. Ideally, it should do this under some constraint, such as throughput, or while keeping to a minimum the amount of memory needed inside the state of SBF objects as well as the memory required in the FIFO buffers between processes. For the situation shown in Figure 7, the solution with the least memory is the one in which $P_p$ and $P_c$ are traversed in the same order, requiring no additional state and a very small FIFO.
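The role of point counting in linearization can be sketched as follows: the address of a token is its rank in the chosen scanning order. Compaan derives such counts symbolically with Ehrhart polynomials, whereas this illustrative sketch simply counts by enumeration over the same invented triangular domain used in the scanning sketch above.

```python
def scan_domain(n):
    # Same triangular domain and lexicographic order as in the scanning sketch.
    return [(j, i) for j in range(1, n + 1) for i in range(j, n + 1)]

def address(point, domain_points):
    """Linear read/write address of a token: the rank of its iteration point."""
    return domain_points.index(point)

domain = scan_domain(4)
# When producer and consumer traverse the domain in the same order (the cheapest case
# mentioned for Figure 7), write and read addresses coincide and a small FIFO suffices.
assert address((2, 3), domain) == 5
```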
7. PROCESS NETWORKS

The resulting process network needs to be made accessible in some form, such that it can be simulated. We generate the process network description for two PN simulators. One simulator is SBFSim, which is a very fast, very simple simulator in C++ based on threads [8]. In this case, the SBF objects are generated as C++ classes. The other simulator is the Ptolemy II framework [3]. In this case, we make a process network available in the PN domain. Compaan generates the network description in MoML, which is a modeling markup language based on XML [5] used in Ptolemy II for specifying interconnections of parameterized components. The process generation step, in this case, generates the Ptolemy II actors in the PN domain. A MoML description can be executed as an application using a command-line interface or as a visual rendition in the Ptolemy II block diagram editor Vergil, as shown in Figure 8.

![Figure 8: The derived PN network in the Ptolemy II framework.](image)

The Ptolemy II framework enables us to combine the derived process network descriptions with predefined actors, like sources to read matrices and sinks to read and visualize matrices. It also lets us combine process networks with other domains, enabling the description and simulation of more complex systems.

8. RESULTS

We have executed the PN network shown in Figure 8 with the parameter values $N=6$ and $K=100$. This gives us the number of times a particular SBF object fired and how many tokens were transported over the FIFO buffers between nodes, as shown in Figure 9, which describes the same network as given in Figure 3. Thus, SBF objects A and E each fired 21 times, SBF objects B and C each fired 600 times, and SBF object D fired 1500 times. Furthermore, we see that, for example, edge $b$ transported 15 tokens, while edge $g$ transported 500 tokens and edge $r$ transported 594 tokens. In Figure 9, the SBF objects that fire more frequently are colored darker and the edges have a different width depending on their communication load.

Figure 9: The firing rates found for SBF objects and the communication load found for the FIFO buffers when executing the PN network in Ptolemy II.

From the figure, we see that some SBF objects fired many times (i.e., nodes $B$, $C$, and $D$), while others do so sporadically (i.e., $A$ and $E$). Based on this insight, we can suggest a partition for the architecture shown in Figure 1: the frequently fired SBF objects become candidates for coprocessors, whereas the incidentally fired SBF objects are put on the microprocessor. This could mean that SBF objects $B$, $C$, and $D$ become coprocessors, while SBF objects $A$ and $E$ are mapped onto the microprocessor. Consequently, edges $\{a, b, i, k\}$ map onto the low-bandwidth communication structure that connects the coprocessors with the microprocessor. Edges $\{c, d, f, g\}$ map onto the programmable interconnect network, which is the high-bandwidth communication structure. Edge $r$ and edges $\{i, j, h\}$ map onto internal communication structures inside the coprocessors for nodes $C$ and $D$, respectively. This very high-bandwidth communication is thus kept local to the coprocessors. Suggesting such a partition on the basis of the original Matlab program alone would hardly be possible. To further determine the quality of this partition, especially in the context of timing and limited resources, we can rely on tools like ORAS [10] or SPADE [13]. Because Compaan obtains the network of SBF objects automatically, it could be used in combination with a design space exploration tool.

9. CONCLUSIONS

In this paper, we have described the Compaan tool that can automatically derive a process network description in the SBF model from a nested loop program written in Matlab. Such a network description reveals the parallelism present in the original sequential program. This network description makes the mapping onto the new emerging architectures easier, as the granularity and model of computation fit better. Much of the effort lies in the synthesis of the SBF objects. An SBF object can now serve as a possible implementation model for a coprocessor or, equally, be put onto a microprocessor. The PRDG model gives us a good mathematical framework to structure SBF objects. We hope we can exploit this PRDG model to get, for example, SBF objects that use limited state memory internally and require small FIFO buffers between processes, as discussed in Section 6. All elements of the Compaan tool are implemented in Java. With respect to the Panda tool, we are still working on further improvement of the linearization step. Nevertheless, we have shown for some Matlab programs that we can automatically compile them, using the trajectory illustrated in Figure 2. For more information about the Compaan work, see http://www.gigascale.org/compaan. This work was supported in part by the MARCO/DARPA Gigascale Silicon Research Center. Their support is gratefully acknowledged.

10. REFERENCES
Trace replay with change propagation impact in client/server applications

Raafat Zarka, Amélie Cordier, Előd Egyed-Zsigmond, Alain Mille

To cite this version: Raafat Zarka, Amélie Cordier, Előd Egyed-Zsigmond, Alain Mille. Trace replay with change propagation impact in client/server applications. IC 2011, 22èmes Journées francophones d'Ingénierie des Connaissances, May 2012, Chambéry, France. pp.607-622. hal-00746727

HAL Id: hal-00746727, https://hal.archives-ouvertes.fr/hal-00746727, submitted on 29 Oct 2012.

Trace replay with change propagation impact in client/server applications

Raafat Zarka$^{1,2}$, Amélie Cordier$^{1,3}$, Előd Egyed-Zsigmond$^{1,2}$, Alain Mille$^{1,3}$

$^1$ Université de Lyon, CNRS; $^2$ INSA-Lyon, LIRIS, UMR5205, F-69621, France; $^3$ Université Lyon 1, LIRIS, UMR5205, F-69622, France

{raafat.zarka, amelie.cordier, elod.egyed-zsigmond, alain.mille}@liris.cnrs.fr

Abstract: To help end-users master complex applications, it is often efficient to enable them to "replay" what they have done so far. In some cases, it is even more useful to enable them to modify some values of the actions they are replaying. However, while doing so, it is very important to deal with the consequences of these changes on the remainder of the replay process. In this paper, we describe our models to enable replay of users' interactions and to manage impact propagation of changes during the replay process. These models are built upon traces, i.e., digital objects that enable us to record user interactions and to reuse them in different ways. We have implemented the replay process in a Web application called SAP-BO Explorer, an application helping business users to access large amounts of information. Our tool helps users to better understand the application.

Keywords: impact propagation, macro recording, bookmarks, replay traces, human computer interaction.

1. Introduction

With the multiplication and the rapid development of software systems and applications, we now have access to more and more tools, which are usually more and more complicated. While using these tools, we are often lost, usually because we lack time to understand applications, to get used to them and to exploit them efficiently. In response to this problem, some application designers came up with solutions for helping users either to discover the application or to learn how to be more efficient while using it. Providing relevant assistance to users becomes a real challenge for application designers. Among the proposals for assistance strategies, we usually find tutorials, how-tos, videos, assistants, training courses, etc. However, all these assistance strategies rest upon a static description of the application, hard-coded a priori. They are proposed to users in an identical way and thus are not always well suited to the specific needs of specific users. To overcome this issue, we have proposed, in a previous work (Zarka et al.
2010) to use interaction traces in order to provide user with a personalized and contextualized assistance based on previous experiences. Interaction traces are relatively new digital objects. An interaction trace is a record of the actions performed by a user on a system. In other words, a trace is a story of the user’s actions, step by step. Hence, traces enable us to capture users’ experiences. Traces are recorded according to a pre-established model, so that they can be reused in different ways: replay, exploration, modification, modification plus replay, etc. Working with traces raises numerous research issues. How to collect, represent, store, and visualize traces? What mechanisms have to be implemented in order to allow user to browse their personal traces? How to implement a replay mechanism in a pre-existing system? How to take into account privacy issues when working with traces? Recent researches provide us with solutions to some of these problems and enable us to work within an existing framework for manipulating traces (see (Champin et al. 2004), (Cordier et al. 2009) and (Settouti et al. 2009)). In this paper, we focus on a specific research question: how to replay a trace in a system and which issues are raised by the replay when the initial situation has been modified? To better understand this problem, let us consider the following example. A user makes a sequence of manipulations to improve a colored picture: transformation in gray-scale, selection of a scale of gray, luminosity attenuation for the selection, blur effect on the selection. Not satisfied with the result, he decides to go back to the initial state (the original picture) and to replay the whole set of actions, except from the transformation in gray-scale. The question is: “is the remaining of the actions still possible?” The issue we address in this paper is then: how to enable a trace replay while monitoring the impact of a modification in the trace on the remaining of the process? In order to address this issue, we have firstly elaborated a mechanism enabling to do a simple replay of a trace (i.e. with no modification) from any point in the trace. Then, we have defined a model for impact analysis in order to manage impact propagation after a modification of the trace. Both models are described in this paper. The trace-replay mechanism has been implemented in the widely used SAP-BO Explorer application (SAP 2010), a web application enabling user to load, explore, visualize and export large quantities of data. SAP-BO needed a tool to help their users better understand the tool and this is the solution we designed for them. We have instrumented the initial application in order to collect interaction traces and we have developed a graphical interface in order to display the traces according to an ad-hoc representation. We have also instrumented the application in order to enable replay of recorded traces. The application is operational and a demo video is available\(^1\). \(^1\) A demo video of trace replay and visualization is available at: https://liris.cnrs.fr/~rzarka/ReplayTraceDemo/ This paper is organized as follows. In section 2, we survey related work. Then, in section 3, we show how we use traces in order to enable replay of user’s interactions. In section 4, we discuss the consequences of a change during the replay, and we propose an impact propagation model. Section 5 gives implementation details. Evaluation and discussion of our proposal are made in section 6. 
The paper ends with a conclusion and a description of future research issues. 2. Related work In most of existing macro recording systems, users have to be proactive: they need to start and stop macro recording. Bookmark systems are one of the most common macro recording systems. They enable users to “replay” web pages. With Koala (Little et al. 2007), the user can record a sequence of actions and generate a script of keyword commands that can be replayed later. Recorded scripts are stored automatically on a wiki, which might be shared by a workgroup, allowing easy exchange and improvement of scripts. CoScripter (Leshed et al. 2008) is a Firefox plug-in created by IBM Research. It allows users to record and share interactions with websites. It records user actions and saves them in semi-natural language scripts. The scripts made are saved in a central wiki for sharing with other users. WebVCR (Anupam et al. 2000) and WebMacros (Safonov et al. 2001) record web browser actions as a low-level internal representation, which is not editable by the user or displayed in the interface. All these systems require planning to enable recording while Smart Bookmarks (Hupp & Miller 2007) supports retroactive recording: it automatically captures users’ interactions while they navigate the web and displays them through a graphical presentation. When users want to bookmark a webpage, the system automatically determines the sequence of commands needed to return to the page, and saves the sequence as a bookmark. While Smart Bookmarks lets users save or share actions from ongoing browsing sessions, ActionShot (Li et al. 2010) enables users to share actions they have performed before by providing them with a visual interface for browsing their entire history. ActionShot system is built on top of the CoScripter platform. History data is reused through the re-execution of recorded steps. Sharing also is supported through Facebook, Twitter or via email. Both ActionShot and Smart Bookmarks are generic, but they are implemented as Firefox extensions which is a limit. Besides, they cannot work with dynamic pages (e.g. Ajax or Flash based). In Smart Bookmarks, users can modify parameters values before the bookmark starts running. However, these new values may affect commands and cause inconsistent states in the application. Hence, it seems relevant to study impact propagation of these changes. Impact propagation analysis is widely studied in software engineering and database domains. In (Briand et al. 2003), the authors propose a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process. Most techniques to predict the effects of schema changes upon applications that use the database can be expensive and error-prone, making the change process expensive and difficult. In (Maule 2010), the authors present a novel analysis for extracting potential database queries from a program, called query analysis. The impacts of a schema change can be predicted by analyzing the results of query analysis, using a process they call impact calculation. Many systems also support impact analysis. One of them is Sybase Power Designer Modeling Tool that provides powerful methods for analyzing the dependencies between object models (Sybase 2010). Table 1. 
Comparison table of related work <table> <thead> <tr> <th>System</th> <th>Representation</th> <th>Simple Replay</th> <th>Replay with change</th> <th>Adaptation</th> </tr> </thead> <tbody> <tr> <td>WebMacros</td> <td>No</td> <td>Proactive</td> <td>No</td> <td>No</td> </tr> <tr> <td>WebVCR</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Koala</td> <td>Wiki Scripts</td> <td>Proactive</td> <td>No</td> <td>No</td> </tr> <tr> <td>CoScripter</td> <td>Text, Firefox Extension</td> <td>Proactive</td> <td>No</td> <td>No</td> </tr> <tr> <td>Smart bookmarks</td> <td>Graphical (screenshots), Firefox extension</td> <td>Retroactive</td> <td>Yes, without impact propagation</td> <td>Classify buttons for side-effecting</td> </tr> <tr> <td>ActionShot</td> <td>Graphical text explanations, Firefox extension</td> <td>Retroactive</td> <td>Yes, without impact propagation</td> <td>No</td> </tr> <tr> <td>Photoshop</td> <td>Actions list</td> <td>Macro and undo command</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>Power Designer</td> <td>Does not trace</td> <td>Undo command</td> <td>No</td> <td>Impact rules</td> </tr> <tr> <td>Trace Replay</td> <td>M-Trace with text explanations</td> <td>Retroactive</td> <td>Yes</td> <td>Impact rules and adapted values</td> </tr> </tbody> </table> Some applications allow users to replay their actions like Photoshop (Harrington 2009), by using undo or playback commands. In Photoshop, graphics designers and photographers have a number of processes they frequently perform on their images. By creating macros called “actions” they can automate many routine tasks using simple text files that are recorded in a macro-style. Whether is the goal is to convert an image for the Web or to transform a color photo into a black and white photo, designers can reduce several steps to a click on a single button. Users can create their own macro scripts which are mini recordings of commands. This is also what we would like to provide, but in our case we need to apply macro recording for systems that do not support undo commands like most of client-server applications. In addition, we do not want to ask the user to start or stop recording his actions. **Table 1** shows a comparison between all the presented works according to the way they allow visualization of past actions and if they support the replay with or without change of values. ### 3. Simple trace replay (go back to a previous state) In client-server applications, simple undo commands imply data interchange between client and server. This may take a lot of time (especially if the undo has to be repeated many times) and can cause server overload. Besides, such a problem may face loss of data issues. Last, it is not a scalable solution for situations where a lot of users access the server at the same time. For all these reasons, undo commands are hard to implement. Instead, to enable users to go back to a previous state, we propose to implement a “trace replay mechanism”. This mechanism enables users to replay their interaction until they reach the expected state of the application. In order to implement this mechanism, we have defined a trace model (see **Fig. 1**). ![Modified trace model to support trace replay](image) **Fig. 1** Modified trace model to support trace replay Each user’s session is represented by a M-Trace which consists of a set of observed elements (obsels). Each obsel has a type and two timestamps representing its beginning and ending instants. Each obsel type has a domain... 
of attributes and indicates the values of its attributes respecting the range of the attribute type. An obsel can affect many elements at the same time. For example, pressing a “delete all” button can erase the values of many elements together. By using the obsel attribute values, we can calculate the new values for the related elements, where each obsel attribute concerns only one element. Using this model we can get all the obsels that can modify every element and all the elements that can be affected by an obsel. When capturing the traces we don’t need to store the values of elements at each time. We only store attributes and values of each obsel. For example if a user selects a chart, the value of the obsel will be the ID of the chart and not the whole information about the chart, so we need an element called “selected chart” that contains all the information about the selected chart. 3.1. Playback trace process Our solution to go back to a previous state of the system is to playback users’ actions from a starting point (session start) and not by undoing last ones. When a user chooses to go back to a past state, he can choose the obsel that he wants to return to. The system will automatically go back to this state by replaying all the obsels that happened from the beginning of the session until the selected obsel; let’s call it the triggered obsel (the obsel where we want the system to play back to). Fig. 2 [A] shows a simple trace replay, a list of obsels starting from A to R, where R is the replay obsel and C is the triggered obsel. In R the user asked to replay traces to back with the system to its state when clicking on C. We can see that all the obsels that happened between C and R will be ignored (EDA). This replay will be done by one command which means one call from the client to the server. After replaying traces the system will go back to the past state and the user will continue his usage to the system, and new obsels will be collected. An Obsel R means that at this point a replay action happened. ![Fig. 2](image) **Fig. 2** [A] Simple Trace Replay, [B] Trace Replay with change By replaying the obsels we can calculate the values of these elements at the replaying point. **SimpleReplay** algorithm gets M-Trace and the triggered obsel as input and goes back to a previous state. Firstly, it gets the subset of the trace that should be replayed starting from the first obsel to the triggered one by a chronological order. Then this trace will be optimized by using the optimization algorithm to delete extra obsels. Each element gets its default values and then a loop on all the obsels runs, where at each time the element values are updated according to the attributes of the current obsel. At the end, the new element values are updated making the system going back to this state. The replay event is also captured as a new obsel and taken in consideration during the analysis. ``` Program simpleReplay (M-Trace, TriggerredObsel) ReplayedTrace := getSubTrace(0, position(TriggeredObsel)) optimize(ReplayedTrace) Elements := getDefaultValues() For pos := 0 to getObselCount(ReplayedTrace)-1 Obsel := ReplayedTrace[pos] Attributes := getAttributes(Obsel.Type) For each attribute in Attributes Value := getAttributeValue(Obsel, attribute) Elem := getAffectedElement(attribute) Elements[elem] := GenerateElementValue(value) End For each End For update(Elements) End Program ``` 3.2. 
Optimized trace replay process As not all the obsels play a role for changing the state of the system, the replay process can be optimized by reducing the number of replayed obsels. In addition, in some cases many obsels can be ignored, either because they have been canceled by other obsels or because of reset values. According to that, we don’t need to go through all the obsels in order to go back to the triggered one. Analyzing the previous obsels to get the right values of the elements enables us to optimize the replay process. We can get an optimized chronological list of obsels from the beginning of the session to the triggered obsel; this list will be used to generate the values for each element. Optimize algorithm tries to delete all unnecessary obsels that induce loops in the trace, For example, in the simple replay obsel, the subTrace from replay obsel to triggered obsel should be deleted. The same thing is also done for a reset obsel which means deleting all the obsels from the beginning to the reset obsel. So we consider that there is a list of unnecessary loop obsels in the trace, and in this algorithm all these loops will be deleted as shown in Fig. 3. 4. Replay traces with impact propagation In this section we describe how we can replay traces after modifying an input element by handling the consequences of changes on elements before actually performing these changes. This is illustrated on Fig. 2. R is a replay trace of element C that triggers a replay after doing a change on the values of the triggered element C. Because of a change in one of the attribute values of C, the values of some other elements could be inconsistently modified, like E and A, while other elements may remain consistent, like D. We need to calculate the new values, in order to take into account this modification. Then the trace can be replayed with these new values. After that the user can continue to use the system. We face many questions like: how can we determine the elements affected by a change? Can we be proactive and specify the appropriate new values, without asking the user to enter the new values? How can we replay the next traces after applying this change? To answer these questions we propose to define impact rules of dependencies between the elements for manipulating the consequences of a change. 4.1. Impact rules for element dependencies Impact rules define the dependencies between the elements in the system in order to be able to identify the elements that are affected by a change in another element, and to specify the modifications that could be done on the affected elements to stay consistent and valid. Each rule includes a source element and the condition on its values that specifies the dependence with a destination element and the condition on its values. A rule says that if specific conditions for the values of the source element are fulfilled then some of the values of the destination element determined by the destination condition cannot exist, which requires replacing these values by an adapted value. **Definition:** Impact rule Let $\mathcal{E}$ be a set of elements. Each element has a name and some values. Let $\mathcal{O}$ be a set of operations and $\mathcal{F}$ be a set of functions. 
We can define an impact rule $\mathcal{R}$ as an implication of the form: $$\mathcal{R} = (E_S, C_S) \rightarrow (E_D, C_D) : \mathcal{A}_E,$$ where $E_S$, $E_D$, $\mathcal{A}_E \in \mathcal{E}$, and $E_S$ is the source element, $C_S$ is the source condition, $E_D$ is the destination element, $C_D$ is the destination condition, and $\mathcal{A}_E$ is the adapted element. $C_S$ and $C_D$ are conditions based on operations and functions on the values of the elements. Conditions are composed of operations ($\mathcal{O}$) and functions ($\mathcal{F}$) on elements values. Operations can be logical (and, or, not, etc), mathematical (+, -, *, /, etc) or others. Functions can be grouping functions like (max, sum, min, count, avg) or custom functions like (isNumber, isHoliday, etc). For each application, system’s experts define impact rules for the dependencies between the elements, to determine the consequences of modifying a past obsel. We can get all the impacted obsels for each rule from the entity of the relations between elements and obsels. If we find impact rules having the elements of the modified obsel as source elements and their values satisfying the source conditions, then, for each destination element, if its value satisfies the destination condition, we need to replace the destination element by the adapted one. Adapted values can be specified manually as default values or can be generated automatically using past traces. For example, in SAP-BO Explorer, we consider an impact rule like: if the number of selected measures is greater than one, the element “Chart” cannot be of type “Pie”. If a user asks to replay a trace after modifying the number of selected measures that activated this rule, and if there was a successor obsel for changing the chart type to “Pie”, then this obsel will not be valid anymore because of this rule, and the chart type will be automatically changed according to the adapted value to be “Vertical Bars”. The rule will be as following: $$E_S = \text{Selected Measures} \quad C_S = (\text{Count}() > 1)$$ $$E_D = \text{Chart} \quad C_D = (\text{type} = \text{"Pie"})$$ $$\mathcal{A}_E = (\text{Type} = \text{"Vertical Bar"})$$ The user can replay a part of his session after modifying some of the obsels values. These modifications can be of many types like shifting obsel by changing their timestamps, thus causing a change in the order between obsels, updating a value for an attribute of an obsel, or even deleting an obsel. By using impact rules we can determine the consequences of a change and the adapted values. In case of not finding an adapted value of an element or the absence of an impact rule, the corresponding obsels will be invalid. Then the user will have to select the suitable value manually; otherwise the replay process will fail. 4.2. Retrieving adapted value from past traces When a user adds a new impact rule, the system asks him to choose the adapted value from a list of possible values, or to keep the system calculating it automatically using past traces. For this purpose, we propose to use a retrieval algorithm similar to the algorithm we presented in (Zarka et al. 2010). In the original algorithm, we tried to retrieve episodes similar to the current one without taking obsels values in consideration because we just wanted to know the next recommended obsels. So, in order to make this algorithm useful for finding the adapted values, we need to make a comparison between the values of the obsels. 
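As an aside, the impact rule defined in Section 4.1 can be made concrete with a small sketch; the data structures and element values below are illustrative, not the actual SAP-BO Explorer code.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ImpactRule:
    source: str                                  # E_S: source element name
    source_cond: Callable[[Dict], bool]          # C_S: condition on the source values
    destination: str                             # E_D: destination element name
    dest_cond: Callable[[Dict], bool]            # C_D: condition on the destination values
    adapted: Dict                                # A_E: adapted values for the destination

def apply_rules(elements, rules):
    """Replace destination values by adapted values whenever a rule fires."""
    for r in rules:
        if r.source_cond(elements[r.source]) and r.dest_cond(elements[r.destination]):
            elements[r.destination].update(r.adapted)
    return elements

# The rule given above: with more than one selected measure, a "Pie" chart is invalid.
pie_rule = ImpactRule(
    source="Selected Measures",
    source_cond=lambda e: len(e["values"]) > 1,   # Count() > 1
    destination="Chart",
    dest_cond=lambda e: e["type"] == "Pie",
    adapted={"type": "Vertical Bar"},
)

elements = {"Selected Measures": {"values": ["Trade USD", "Quantity"]},  # illustrative values
            "Chart": {"type": "Pie"}}
apply_rules(elements, [pie_rule])
assert elements["Chart"]["type"] == "Vertical Bar"
```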
In addition, we want to retrieve the adapted value for the destination element and not the next recommended obsels. Get adapted value algorithm starts by selecting a subset of the trace from its beginning to the modified triggered obsel. Then it retrieves all the past similar episodes to the current one. Similarity includes values comparison. For each similar episode, it calculates the final value of the corresponding element (destination element in the impact rule) as we did in the simple replay, without updating the system. If there is more than one value, we take the one that occurs the most often and we consider it as the adapted one. If we are not able to retrieve any episode, we keep this element as an invalid element until another obsel modifies its value, otherwise the replay process will fail and the system will ask the user to choose the value manually. 5. Implementation In the previous sections, we have described the models that we have defined to support replay of user interactions by exploiting traces. In this section, we show how we have implemented our trace replay model into the SAP-BO Explore application. 5.1. Trace collecting and visualization Firstly we modified SAP-BO Explorer for being able to collect obsels. SAP-BO Explorer is divided into two parts. Server part is implemented in Java. The management of users’ sessions is done in this part, thus enabling many users to work on the system at the same time. The client part is a Flex application; each user has a web application where he can do his exploration. The traces are collected in the client side. Fig. 4 shows a snapshot of the user interface. Each time a user tries to use the system, a new session is opened. Each session contains many obsels, and each action of the user is collected as an obsel presented in a XML format specifying the obsel type, timestamps, and the values of this obsel. We consider that the interface of SAP-BO Explorer is divided into task-oriented blocks, where each block contains obsels dedicated to similar kinds of tasks. The interface consists of blocks for measures, categories, visualization, export, search, etc. For example, the measures block contains many types of obsels like select measure, add calculation, edit calculation, etc. For example, when a user tries to select a measure, we capture this action as an obsel of the type “Select measure” from the second block “Measures block”. The obsel has for value “Trade USD” and is time stamped with the current timestamp. Each session is presented as a M-Trace stored in XML and has a unique ID, contains the ID of the user who did this session, and the temporal list of obsels that happened in this session. When a user logs himself in SAP-BO Explorer, a request to the server-side is sent in order to open a new session. This triggers the creation of a new XML output file for this session. Each time a new obsel is collected, it is formatted in XML format and sent to the server in order to be added to the session file. Each user can open and manipulate many Information Spaces at the same time. An Information Space is a collection of objects mapped to data for a specific business operations or activities. All the obsels of a session, whatever the Information Spaces they belong to, are stored in the same file. We have developed a new interface to visualize users’ traces displaying a graphical representation of what they have done so far (see Fig. 5). 
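The obsel collection format described in this section (an obsel with a type, timestamps, and values, such as a "Select measure" obsel with the value "Trade USD") can be sketched as follows; the XML element and attribute names are hypothetical, since the exact schema is not given in the paper.

```python
import time
import xml.etree.ElementTree as ET

def make_obsel(obsel_type, block, values):
    """Build one obsel record: type, originating interface block, timestamps, and values."""
    now = str(int(time.time() * 1000))
    obsel = ET.Element("obsel", attrib={"type": obsel_type, "block": block,
                                        "begin": now, "end": now})
    for name, value in values.items():
        ET.SubElement(obsel, "value", attrib={"name": name}).text = str(value)
    return obsel

# The example from the text: selecting the measure "Trade USD" in the measures block.
obsel = make_obsel("Select measure", "Measures block", {"measure": "Trade USD"})
print(ET.tostring(obsel, encoding="unicode"))
```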
Each obsel is captured according to our model classified according the available types and represented as colored bullets. Obsels appear on the left side of the interface as a chronologically ordered list from the beginning of the session to the most recent obsel. By clicking on an obsel, we can see its description on the right side of the interface. Obsel’s values are visualized in the form of a tree of attributes and their values. ![Trace Visualization Interface](image) **Fig. 5** Trace Visualization Interface ### 5.2. Trace replay implementation If a user wants to go back to a previous state, he can at any time select the triggered obsel from the list of captured obsels and click on replay button (see **Fig. 5**). The system will automatically replay traces to go back to this state. A new obsel will be added to the obsels list of type ‘Replay’. Its values are set according to the values of triggered obsel. This new obsel indicates that a replay action has occurred here and has triggered a previous obsel. As we explained before, the optimization algorithm uses replay obsels to minimize the number of the replayed obsels by deleting the obsels that are skipped in the replay action. Each element has different type and number of values from other elements. For not analyzing each element in a different way, we need to make it more general. By using introspection we can determine the type of an object at runtime. Introspection refers to the ability to examine something to determine what it is, what it knows, and what it is capable of doing. Introspection gives us a great deal of flexibility and control. To do that we used Object as type of the values attribute of an obsel, which means that this attribute can have any type of values. We do introspection on this attribute in order to determine the content of it and then to manipulate it in a general way. 6. Evaluation and Discussion We have implemented our replay method within the SAP-BO Explorer application. However, this method can be applied in any system. To enable trace replay, the first step is to collect traces. For this purpose, we use a model, the M-Trace, that enables us to collect all the traces according to the same abstract model. We have experimented with our system by using many types of datasets and by considering all obsels types, opening many sessions together and trying to go back to previous states many times in the same session. We even succeeded to go back to all sessions at the same time by one single go-back command. The execution time of the replay process is very fast, it is like any other action in the application, which means the time of message exchanging between the client and the server. Systems like ours face number of challenges like replaying traces for already closed sessions, optimizing replay after modifying past obsels and rechecking impact rules after modifying elements values. But they also face more general problems, as mentioned in the discussion section. For example, in (Hupp & Miller 2007), the following issues are raise: privacy of the user and his permission to trace him, security of the system while collecting and visualizing traces, protection of users from undesirable side-effects triggered by the replay, and the robustness of the replay after doing some changes in the system. When implementing our system, we also faced specific problems. For example, in SAP-BO Explorer the same user can open many sessions at the same time. 
We had to deal with the problem of replaying the trace of a closed session. Our replay process can handle this case by reopening the session, with default values and by applying all the replayed obsels until the triggered one. As we have not implemented yet the replay with changes, we have not faced the problem of optimizing the replay after these modifications. Application of impact rules can be recursive; a modification on an obsel value can have an impact on other obsel values if obsels are related. To deal with these problems we need to develop a graph of impact propagation to be able to solve loops problems and to know the dependencies between different obsels and elements. This will be one of our future works. When the trace includes obsels that have secure and sensitive information like passwords and credit card numbers, our system detects and obscures the password when visualizing it. But it still needs a lot of enhancements and rules to detect this information and secure it, by notifying the user about it or even asking him to re-enter it again. Our system continuously collects and records user’s interactions which constitute a potential risk to privacy and security. This problem is share by all the systems that record rich history. traces (web browsers, recommendation systems, etc.). Dealing with this issue is out of the scope of our study. However we do notify our users that all their interactions are recorded. Side-effects are another issue we have to deal with. Indeed, replaying a trace may have unexpected consequences and can damage the system or cause deletion. In our current implementation, we do not deal with this problem. However, we think that the proposition described in (Hupp & Miller 2007) is relevant to solve such a problem. The idea is to classify obsels into two classes: side-effecting and non side-effecting. This makes easier the annotation of critical obsels. Last, we have to face robustness issues. Indeed, we have to make sure that the trace system is still usable after major changes either on data or on processes of the system. This question is also out of the scope of our study because it is mainly related to the trace collecting phase. We make the assumption that robustness issues are handled by the trace-based system, responsible for traces management. 7. Conclusion and future work In this paper we have described an approach using interaction traces to allow users to return to a particular state of an application. This approach is an alternative way of undoing actions in applications where undo commands are not available (such as client-server applications). For this purpose, we use play-back of traces. Playback can be identical to the original trace or can introduce different action parameters. We analyze the impact propagation of changes performed on past actions. This work has been conducted in collaboration with SAP Business Objects and the application we used to implement our approach is SAP-BO Explorer. The aim of our contribution within this project was to support replay process in a client-server application, where classical undo commands cannot be implemented. The main contribution of this paper shows how we can playback interaction traces, in an optimized way, in order to go back to a particular state of the application. For that purpose, we have introduced the concept of predefined impact rules and we have built an algorithm that discovers adapted values of obsels affected by changes. 
At the time being, the collect process and the simple replay process are implemented. In future work, we plan to address issues mentioned in the discussion concerning side-effects, robustness, and security. In addition, we are interested in studying how we can extract users’ experiences in order to reuse them for assistance purpose. ACKNOWLEDGMENTS We thank Françoise Corvaisier, member of SAP-BO enterprise for her support, thoughts and for giving us the opportunity to do this work in SAP-BO. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors. References